# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="anHe8KleuzQV"
# ## Importing Modules
# + colab={"base_uri": "https://localhost:8080/", "height": 33} id="byt6dQK5mjy3" outputId="97381b47-8585-4909-e2f2-ad34bdcc647c"
import io
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
print(tf.__version__)
# + [markdown] id="O3qQxFd0u2FO"
# ### Downloading the dataset from tensorflow_datasets
# + id="hPycC4r7qlyY"
imdb, info = tfds.load('imdb_reviews', as_supervised=True, with_info=True)
train_data, test_data = imdb['train'], imdb['test']
# + [markdown] id="R7OF41IbsObt"
# ## Extracting Train and Test Sentences and their corresponding Labels
# + id="ERqMANSIwAfU"
training_sentences = []
training_labels = []
testing_sentences = []
testing_labels = []
for sentence, label in train_data:
    training_sentences.append(sentence.numpy().decode('utf8'))
    training_labels.append(label.numpy())
for sentence, label in test_data:
    testing_sentences.append(sentence.numpy().decode('utf8'))
    testing_labels.append(label.numpy())
training_labels_final = np.array(training_labels)
testing_labels_final = np.array(testing_labels)
# + [markdown] id="JqdHTx3isW9W"
# ## Initializing Tokenizer and converting sentences into padded sequences
# + id="AMs3v-11y_hD"
vocab_size = 20000
embed_dims = 16
truncate = 'post'
pad = 'post'
oov_token = '<OOV>'
max_length = 150
tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_token)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index
train_sequences = tokenizer.texts_to_sequences(training_sentences)
padded_train = pad_sequences(train_sequences, truncating=truncate, padding=pad, maxlen=max_length)
test_sequences = tokenizer.texts_to_sequences(testing_sentences)
padded_test = pad_sequences(test_sequences, truncating=truncate, padding=pad, maxlen=max_length)
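# To make the padding/truncation behavior concrete, here is a minimal pure-Python sketch (no TensorFlow required; `pad_post` is a hypothetical helper, not part of Keras) of what `pad_sequences(..., padding='post', truncating='post', maxlen=...)` does: sequences shorter than `maxlen` get zeros appended, and longer ones are cut from the end.

```python
def pad_post(sequences, maxlen, value=0):
    """Sketch of Keras post-padding/post-truncation on lists of ints."""
    padded = []
    for seq in sequences:
        seq = seq[:maxlen]  # 'post' truncation drops tokens from the end
        padded.append(seq + [value] * (maxlen - len(seq)))  # 'post' padding appends zeros
    return padded

print(pad_post([[5, 8, 2], [7]], maxlen=4))  # [[5, 8, 2, 0], [7, 0, 0, 0]]
```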
# + [markdown] id="_Rwhz2KWtHg1"
# ## Decoding Sequences back into Texts by creating a reverse word index
# + colab={"base_uri": "https://localhost:8080/", "height": 70} id="qL7Yi0DH0F6I" outputId="89ee028e-560d-412a-e2c2-a015db4d1186"
reverse_word_index = {value: key for key, value in word_index.items()}
def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
print(decode_review(padded_train[3]))
print(training_sentences[3])
# + [markdown] id="KIxlRg64tRGO"
# ### Building the Model (Embedding + Bidirectional LSTM)
# + id="a18EUUuF1cGs"
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dims, input_length=max_length),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(units=16, activation='relu'),
    tf.keras.layers.Dense(units=1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# + [markdown] id="9o373EBNtVHA"
# ### Summary of the Model's Processing
# + colab={"base_uri": "https://localhost:8080/", "height": 284} id="yt3Q8DDG4zIs" outputId="ca5a5012-9113-49d2-a0e7-42c28fecd402"
model.summary()
# + [markdown] id="bffilsJ7tYav"
# ### Initializing a callback to avoid overfitting, and fitting the data to the model
# + colab={"base_uri": "https://localhost:8080/", "height": 200} id="LWIVGJnE42ZW" outputId="0e622b19-3132-41e2-ee63-65e10c552909"
class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if logs.get('accuracy', 0) > 0.99:
            print("\nReached 99% accuracy so cancelling training!")
            self.model.stop_training = True
callbacks = myCallback()
history = model.fit(
    padded_train,
    training_labels_final,
    epochs=15,
    validation_data=(padded_test, testing_labels_final),
    callbacks=[callbacks]
)
# + [markdown] id="wP7YtuEbtl8j"
# ### Extracting Embeddings from the Model
# + colab={"base_uri": "https://localhost:8080/", "height": 33} id="6L20qim27BLZ" outputId="0185560f-cbe9-4ff2-8fe2-8874ca124f88"
embed_layer = model.layers[0]
embed_weights = embed_layer.get_weights()[0]
print(embed_weights.shape)
# + [markdown] id="sYOR3Aixtpst"
# ### Exporting meta.tsv and vecs.tsv (the words and their embedding vectors) to visualize the embeddings in the TensorFlow Embedding Projector
# + id="ObD811Xb-9uA"
out_v = io.open("vecs.tsv", mode='w', encoding='utf-8')
out_m = io.open("meta.tsv", mode='w', encoding='utf-8')
for word_num in range(1, vocab_size):
    word = reverse_word_index[word_num]
    embeddings = embed_weights[word_num]
    out_m.write(word + '\n')
    out_v.write('\t'.join([str(x) for x in embeddings]) + '\n')
out_m.close()
out_v.close()
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="GeleEsu-_xFD" outputId="ceee492a-f8d4-46e1-ef98-35e7960300c1"
try:
    from google.colab import files
except ImportError:
    pass
else:
    files.download('vecs.tsv')
    files.download('meta.tsv')
# + [markdown] id="-H__hp6UsNX4"
# ### Testing the model on different sentences (if y_hat is above 0.5 the review is predicted positive; below 0.5, it is predicted negative)
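# As a quick sketch of the 0.5 threshold described above (the probabilities here are illustrative, not actual model outputs):

```python
import numpy as np

# Hypothetical predicted probabilities, as returned by model.predict(...)
y_hat = np.array([0.99, 0.42, 0.009, 0.73])

# Threshold at 0.5: 1 = predicted positive review, 0 = predicted negative
labels = (y_hat > 0.5).astype(int)
print(labels)  # [1 0 0 1]
```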
# + [markdown] id="ryijmApru6fQ"
# ### Creating function to convert sentences into padded sequences with the same hyperparameters
# + id="y-VDeDNWAP91"
def get_pad_sequence(sentence_val):
    sequence = tokenizer.texts_to_sequences([sentence_val])
    padded_seq = pad_sequences(sequence, truncating=truncate, padding=pad, maxlen=max_length)
    return padded_seq
# + [markdown] id="X31zY3CtsNH7"
# Trying Positive Review
# + id="31YSo2T8r881"
sentence = "I really think this is amazing. honest."
padded_test_1 = get_pad_sequence(sentence)
# + [markdown] id="MR2xWWbzvr0l"
# A prediction near 0.99 means it's a very positive review, so the classifier is good at identifying positive reviews
# + colab={"base_uri": "https://localhost:8080/", "height": 33} id="NpCR9rssrI_V" outputId="eea8d5fa-e0fd-4764-fb0b-0b274c383e33"
model.predict(padded_test_1)
# + [markdown] id="cYTaATujvG8Y"
# Trying Negative Review
# + id="0qxfLFYMrPJg"
sentence = "The movie was so boring , bad and not worth watching. I hated the movie and no one should have to sit through that"
padded_test_2 = get_pad_sequence(sentence)
# + colab={"base_uri": "https://localhost:8080/", "height": 33} id="gNJvaE7XvT74" outputId="cbd5bc50-166a-485b-c05d-a2e2b2f6f315"
model.predict(padded_test_2)
# + [markdown] id="r_rsIl87vY_o"
# A prediction near 0.009 means it's a very negative review, so the model is correct
# -
# ## Analysis of Loss and Accuracy
#
# +
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
num_epochs = range(len(acc))
# -
def plot_graph(x, y1, y2, string_lst):
    plt.plot(x, y1, x, y2)
    plt.title(string_lst[0])
    plt.xlabel(string_lst[1])
    plt.ylabel(string_lst[2])
    plt.show()
plot_graph(num_epochs, acc, val_acc, ['Accuracy Plot', 'Epochs', 'Accuracy'])
plot_graph(num_epochs, loss, val_loss, ['Loss Plot', 'Epochs', 'Loss'])
# (nlp_imdb_reviews.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CME 193 - Introduction to Scientific Python
#
# The course website is at [icme.github.io/cme193](https://icme.github.io/cme193/index.html)
#
# Please check there for any materials related for the course (and let me know if something needs to be updated).
# # Overview of Course
#
# You can find a list of topics we plan to cover on the [course website](https://icme.github.io/cme193/syllabus.html). There will be a survey in a week or two to determine what additional topic(s) will be covered in the last week of the class.
#
# The goal of the course is to get you started with using Python for scientific computing.
# * We **are** going to cover common packages for linear algebra, optimization, and data science
# * We are **not** going to cover all of the Python language, or all of its applications
# * We will cover aspects of the Python language as they are needed for our purposes
#
# This course is designed for
# * People who already know how to program (maybe not in Python)
# * People who want to use Python for research/coursework in a scientific or engineering discipline for modeling, simulations, or data science
#
# You don't need to have an expert background in programming or scientific computing to take this course. Everyone starts somewhere, and if your interests are aligned with the goals of the course you should be ok.
# ## Structure of the Course
#
# Class: We'll intersperse lecture with breaks to work on exercises. These breaks are good chances to try out what you learn, and ask for help if you're stuck. You aren't required to submit solutions to the exercises, and should focus on exercises you find most interesting if there are multiple options.
#
# Homework: We'll have 3 or 4 homeworks, which should not be difficult or time consuming, but a chance to practice what we cover in class.
#
# Grading: This class is offered on a credit/no-credit basis. Really, you're going to get out of it what you put into it. My assumption is that you are taking the course because you see it as relevant to your education, and will try to learn material that you see as most important.
# # Python
print("Hello, world!")
# 
# (From [xkcd](https://xkcd.com/))
import antigravity
# # Using Python
#
# We're going to use Python 3 for this course. See the course website for details/instructions for installing python on your machine if you don't already have it.
#
# I'm running a [Jupyter](https://jupyter.org/) notebook. To start up this notebook, from a terminal, `cd` to the directory you want to be in, and start up a notebook server using `jupyter notebook`. This should pop up something in your web browser that lets you run notebooks. We're going to use these notebooks throughout the course, so if you don't have Jupyter yet, install it at the end of class or when you get home.
#
# If you don't have Jupyter notebooks running yet, you can run a Python REPL in a terminal using the `python` command, or `python3`
#
# If you're using a terminal, you may also want to try `ipython` or `ipython3` which will give you things like tab-completion and syntax highlighting
#
# If you don't have python on your computer yet, ssh into [farmshare2](https://srcc.stanford.edu/farmshare2) and run Python 3 from a terminal remotely
# ```bash
# > ssh <suid>@rice.stanford.edu
# ```
# # Virtual Environments
#
# At some point, you may run into a situation where different projects require different versions of the same package. One way to manage this is through virtual environments. This also has the benefit of allowing others to reproduce the state of your system, and (ideally) makes your code/projects reproducible.
#
# We're going to cover how to do this with Anaconda python, using `conda`, but you can also do this using `pipenv` or `virtualenv` - for example, see [here](https://docs.python-guide.org/dev/virtualenvs/).
#
# You can find the documentation for managing environments using `conda` [here](https://conda.io/docs/user-guide/tasks/manage-environments.html)
#
# 1. Create your environment
# ```bash
# conda create --name cme193 python=3.6
# ```
# `cme193` is the name of our virtual environment. `python=3.6` indicates that we are going to use this specific version of python. Generally you can specify which versions of packages should be used in your environment this way.
#
# 2. Activate your environment
# ```bash
# source activate cme193
# ```
# to deactivate the current environment, you can use
# ```bash
# source deactivate
# ```
#
# 3. Install packages and run python as you usually would
# ```bash
# conda install numpy # can also use pip
# conda install scipy
# conda install matplotlib
# ```
#
# Environments are something you *really* should use, but because there are a variety of ways to do this in practice, we're not going to enforce that you do it any particular way. Sometimes you may see a reference to this process in lecture or homework, and if you're not using `conda` to manage environments, just translate the statements into your situation.
#
# A final comment on environments - you have now seen the (minimal) basics. If you think there's something environments should reasonably do, chances are that you can actually do it. In particular, automating the setup of environments using environment files is something you may one day wish to use to share your work.
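# As one possible sketch of that automation (the commands assume a conda installation; the file name `environment.yml` is just the usual convention):
#
# ```bash
# conda env export --name cme193 > environment.yml   # snapshot packages and versions
# conda env create --file environment.yml            # recreate the environment elsewhere
# ```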
# # Exercise 1
#
# (10 minutes)
#
# We're going to let you take a first stab at Python syntax. If you already know Python, you can use this as a way to check your knowledge. If you don't already know Python, search online or talk to a neighbor to work through the list.
#
# 0. (optional) set up a new environment for CME 193
# 1. output the string "Hello, World!" using Python
# 2. import a Python Module
# * Try importing the `os` module, and printing [your current path](https://docs.python.org/3/library/os.path.html#module-os.path)
# 3. numeric variables
# * assign a variable $x$ to have value 1
# * increment $x$
# * print the product of $x$ and 2
# 4. write a for-loop that prints every integer between 1 and 10
# 5. write a while-loop that prints every power of 2 less than 10,000
# 6. write a function that takes two inputs, $a$ and $b$ and returns the value of $a+2b$
# 7. How do you concatenate strings?
# 8. How can you format a string to print a floating point number? Integer?
# 9. Write a function that takes a number $n$ as input, and prints all [Fibonacci numbers](https://en.wikipedia.org/wiki/Fibonacci_number) less than $n$
# # Basic Arithmetic
#
# Operators for integers:
# `+ - * / // % **` (note: in Python 3, `/` always returns a float; `//` is floor division)
#
# Operators for floats:
# `+ - * / **`
#
# Boolean expressions:
# * keywords: `True` and `False` (note capitalization)
# * `==` equals: `5 == 5` yields `True`
# * `!=` does not equal: `5 != 5` yields `False`
# * `>` greater than: `5 > 4` yields `True`
# * `>=` greater than or equal: `5 >= 5` yields `True`
# * Similarly, we have `<` and `<=`.
#
# Logical operators:
# * `and`, `or`, and `not`
# * `True and False`
# * `True or False`
# * `not True`
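# A quick sketch checking the operators and boolean expressions above:

```python
print(7 / 2)    # 3.5 -- true division, returns a float
print(7 // 2)   # 3   -- floor division
print(7 % 2)    # 1   -- remainder
print(2 ** 10)  # 1024
print(5 >= 5 and not (5 != 5))  # True
```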
#
# # Strings
#
# Concatenation: `str1 + str2`
#
# Printing: `print(str1)`
str1 = "Hello, "
str2 = "World!"
print(str1 + str2)
# Formatting:
str1 = "float %.2f" % 1.0
print(str1)
str2 = "integer %d" % 2
print(str2)
str3 = "input {}".format(5) # try this with different types
print(str3)
# some methods
str1 = "Hello, "
str2 = "World!"
str3 = str1 + str2
print(str3)
print(str3.upper())
print(str3.lower())
# # Control Flow
#
# If statements:
x = 1
y = 2
z = 2
if x == y:
    print("Hello")
elif x == z:
    print("Goodbye")
else:
    print("???")
# **For loops**
#
# If you've used python 2, you may remember using a function called `xrange`. This is deprecated in python 3, and you should just use range. For the curious, [here](https://www.geeksforgeeks.org/range-vs-xrange-python/) is some more information.
# +
print("loop 1")
for i in range(5):  # default - start at 0, increment by 1
    print(i)
print("\nloop 2")
for i in range(10, 2, -2):  # inputs are start, stop, step
    print(i)
# -
# **while loops**
#
# When you don't know how to enumerate iterations
i = 1
while i < 100:
    print(i**2)
    i += i**2  # a += b is short for a = a + b
# **continue** - skip the rest of a loop
#
# **break** - exit from the loop
for num in range(2, 10):
    if num % 2 == 0:
        print("Found {}, an even number".format(num))
        continue  # skips the rest of this iteration and jumps back to the top of the loop
    print("Found {}, an odd number".format(num))
max_n = 10
for n in range(2, max_n):
    for x in range(2, n):
        if n % x == 0:  # n divisible by x
            print(n, 'equals', x, '*', n // x)
            break
    else:  # an else clause on a for loop! executed only if the loop finished with no break
        # loop fell through without finding a factor
        print(n, 'is a prime number')
# **pass** does nothing
if False:
    pass  # to implement
else:
    print('True!')
# # Functions
#
# Functions are declared with the keyword `def`
# +
# `def` tells Python you're declaring a function
def triangle_area(base, height):
    # the function takes input arguments
    # (these variables are defined in the function's scope)
    # the return keyword sends the result out of the function
    return 0.5 * base * height
print(type(triangle_area))
triangle_area(1, 2)
# +
# everything in python is an object, and can be passed into a function
def f(x):
    return x + 2
def twice(f, x):
    return f(f(x))
twice(f, 2)  # (2 + 2) + 2 = 6
# +
def n_apply(f, x, n):
    for _ in range(n):  # _ is a dummy variable in the iteration
        x = f(x)
    return x
n_apply(f, 1, 5)  # 1 + 2*5 = 11
# -
# # Lists
#
# A list in Python is an ordered collection of objects
a = ['x', 1, 3.5]
print(a)
# You can iterate over lists in a very natural way
for elt in a:
    print(elt)
# Python indexing starts at 0.
a[0]
# You can append to lists using `.append()`, and do other operations, such as `push()`, `pop()`, `insert()`, etc.
a.append('Hello')
a
while len(a) > 0:
    elt = a.pop()
    print(elt)
# Python terminology:
# * a list is a "class"
# * the variable `a` is an object, or instance of the class
# * `append()` is a method
# ## List Comprehensions
#
# Python's list comprehensions let you create lists in a way that is reminiscent of set notation
#
# $$ S = \{ x ~\mid~ 0 \le x \le 20, x\mod 3 = 0\}$$
S = [i for i in range(20) if i % 3 == 0]
S
# you aren't restricted to a single for loop
S = [(i,j,k) for i in range(2) for j in range(2) for k in range(2)]
S
# Syntax is generally
# ```python3
# S = [<elt> <for statement> <conditional>]
# ```
# # Other Collections
#
# We've seen the `list` class, which is ordered, indexed, and mutable. There are other Python collections that you may find useful:
# * `tuple` which is ordered, indexed, and immutable
# * `set` which is unordered, unindexed, mutable, and doesn't allow for duplicate elements
# * `dict` (dictionary), which is indexed and mutable, with no duplicate keys (and insertion-ordered as of Python 3.7).
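# A short sketch of the collections listed above (values are illustrative):

```python
t = (1, 2, 3)            # tuple: ordered, indexed, immutable
s = {1, 2, 2, 3}         # set: duplicates collapse away
d = {'a': 1, 'b': 2}     # dict: maps unique keys to values

print(t[0])       # 1
print(s)          # {1, 2, 3}
print(d['b'])     # 2
for key, value in d.items():  # iterating over key/value pairs
    print(key, value)
```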
# # Exercise 2
#
# **Lists**
# 1. Create a list `['a', 'b', 'c']`
# 2. use the `insert()` method to put the element `'d'` at index 1
# 3. use the `remove()` method to delete the element `'b'` in the list
#
# **List comprehensions**
# 1. What does the following list contain?
# ```python
# X = [i for i in range(100)]
# ```
# 2. Interpret the following set as a list comprehension:
# $S_1 = \{x\in X \mid x\mod 5 = 2\}$
# 3. Intepret the following set as a list comprehension: $S_2 = \{x \in S_1 \mid x \text{ is even}\}$
# 4. generate the set of all tuples $(x,y)$ where $x\in S_1$, $y\in S_2$.
#
# **Other Collections**
# 1. Try creating another type of collection
# 2. try iterating over it.
# # NumPy
#
# [NumPy](http://www.numpy.org/) brings numeric arrays to Python, with many matlab-like functions
import numpy as np
np.array([1.0, 2.0, 3.0])
# NumPy arrays are *not* the same as python lists
# list
l = [1., 2., 3.]
print(l)
print(type(l))
# numpy array
a = np.array(l)
print(a)
print(type(a))
# ## Creating Arrays
# 1-dimensional arrays
x = np.linspace(0,2*np.pi,100) # linear spacing of points
print(len(x))
y = np.random.rand(100) # numbers between 0 and 1
print(y.shape)
z = np.random.randn(100) # normal random variables
print(len(z))
# n-dimensional arrays
y = np.random.rand(10,10)
print(y.shape)
z = np.random.randn(10,9, 8)
print(z.shape)
# ## Indexing Arrays
#
# You can index arrays using a Matlab-like syntax
#
# `x[i,j]` returns value at $i$th row and $j$th column of $x$
#
# **slicing**
#
# `x[i,:]` returns entire $i$th row
#
# `x[:,j]` returns entire $j$th column
x = np.arange(9) # 0, 1, ..., 8
x = np.resize(x, (3,3))
print(x[0,:])
print(x[:,0])
print(x[0,1:]) # indices 1 to last index
print(x[0,-2:]) # last 2 indices
print(x[1:3])
y = np.arange(10)
print(y[0:10:2]) # start:stop:stride
print(y[::2]) # start:stop:stride
# ## Vectorization
#
# Numpy provides a suite of vectorized functions that you can apply to arrays
#
# "vectorized" just refers to applying a function to each element of an array (like Matlab "dot" notation)
x = np.arange(4)
print(x)
y = np.square(x)
print(y)
# +
# you can also vectorize functions that are not already vectorized
# note that some functions may automatically vectorize
def f(x):
    y = x * x
    return y + 2
x = np.arange(4)
print(x)
vf = np.vectorize(f)
print(vf(x))
# -
# ## Plotting with PyPlot
#
# PyPlot is a plotting library that is popular (especially in conjunction with numpy), and easy to use.
#
# Today we'll just see some basics.
import matplotlib.pyplot as plt
x = np.linspace(-1,1,100)
y = np.power(x,2)
plt.plot(x, y)
theta = np.linspace(0,2*np.pi,100)
x = np.cos(theta)
y = np.sin(theta)
plt.scatter(x, y)
# # Exercise 3
#
# **Environment**
# 1. Troubleshoot any issues you are having with Python installation on your computer
# 2. Troubleshoot any issues you are having running a Jupyter notebook.
#
# **Numpy/Pyplot**
# 1. Choose your favorite function $f:\mathbb{R} \to \mathbb{R}$.
# 1. find the numpy version of your function, or write your own vectorized version
# 2. plot your function on a reasonable domain.
#
# 2. Add some Gaussian random noise to points on a circle, and generate a scatter plot.
#
# **Review**
# Go back to any previous exercise or topic you didn't understand, and try it again
# (nb/Lecture_1.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: LSST
# language: python
# name: lsst
# ---
# # LSST DM Stack Image Quality Walkthrough
#
# <br>Author(s): **<NAME>** ([@bechtol](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@bechtol))
# <br>Maintainer(s): <NAME> ([@douglasleetucker](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@douglasleetucker)), <NAME> ([@kadrlica](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@kadrlica))
# <br>Last Verified to Run: **2021-09-11**
# <br>Verified Stack Release: **w_2021_33**
#
# This notebook provides a walkthrough of several image quality measurements in the LSST DM Stack. The walkthrough is largely based on the following resources, which should be consulted for further details:
#
# **For the DESC DC2 data set:**
#
# 1. _The LSST DESC DC2 Simulated Sky Survey_: https://arxiv.org/abs/2010.05926
# 2. _DESC DC2 Data Release Note_: https://arxiv.org/abs/2101.04855
#
# This notebook uses a recent processing of data ("Run2.2i") performed at the CC-IN2P3:
# 1. The DESC DRP DC2 Run2.2i Processing Confluence page: https://confluence.slac.stanford.edu/pages/viewpage.action?pageId=293309149
#
#
# **An older HSC data set came from:**
#
# 1. _The Hyper Suprime-Cam Software Pipeline_: https://arxiv.org/abs/1705.06766
# 2. _The first-year shear catalog of the Subaru Hyper-Suprime-Cam SSP Survey_: https://arxiv.org/abs/1705.06745
# 3. Systematics Tests In LEnsing (STILE): https://github.com/msimet/Stile
#
# and used a reprocessing of data from the first HSC Strategic Survey Program data release 1 (HSC-SSP DR1):
# 1. _HSC Public Release Webpage_: https://hsc-release.mtk.nao.ac.jp/doc/
# 2. _HSC-SSP DR1 Paper_: http://ads.nao.ac.jp/abs/2018PASJ...70S...8A
# 3. Processing information available here: https://confluence.lsstcorp.org/display/DM/S18+HSC+PDR1+reprocessing
#
#
# ### Learning Objectives
# After working through and studying this notebook you should be able to
# 1. Access PSF model sizes and ellipticities
# 2. Access several different shape measurements with and without PSF correction for single-visit source catalogs and object catalogs derived from the full image stack
# 3. Evaluate and visualize PSF model ellipticity residuals
#
# Other techniques that are demonstrated, but not emphasized, in this notebook are
# 1. Use `pandas` to compute aggregate statistics and visualize the output
# 2. Use `treecorr` to compute correlation functions
#
# ### Logistics
# This notebook is intended to be run at `lsst-lsp-stable.ncsa.illinois.edu` or `data.lsst.cloud` from a local git clone of the [StackClub](https://github.com/LSSTScienceCollaborations/StackClub) repo.
#
# ### Setup
# You can find the Stack version by using `eups list -s` on the terminal command line.
# Site, host, and stack version
# ! echo $EXTERNAL_INSTANCE_URL
# ! echo $HOSTNAME
# ! eups list -s | grep lsst_distrib
import os
import numpy as np
from matplotlib import pyplot as plt
# %matplotlib inline
# Matplotlib in Jupyter notebooks is a little fussy: let's ignore warnings from this and other modules.
import warnings
warnings.filterwarnings("ignore", category=UserWarning) # eg warning from mpl.use(“Agg”)
# ## Prelude: Data Sample
#
import lsst.daf.persistence as daf_persistence
# +
# Define the data set
URL = os.getenv('EXTERNAL_INSTANCE_URL')
if URL.endswith('data.lsst.cloud'):  # IDF
    repo = "s3://butler-us-central1-dp01"
elif URL.endswith('ncsa.illinois.edu'):  # NCSA
    repo = "/repo/dc2"
else:
    raise Exception(f"Unrecognized URL: {URL}")
dataset='DC2'
collection='2.2i/runs/DP0.1'
dataId={'band':'i','visit':260,'detector':43}
# -
# Instantiate the Butler
from lsst.daf.butler import Butler
butler = Butler(repo,collections=collection)
# +
# Use that dataid to get the image and associated source catalog
calexp = butler.get('calexp', **dataId)
src = butler.get('src', **dataId)
# Note that the calexp contains a PSF object
psf = calexp.getPsf()
# -
# ## Part 1: PSF Model
shape = psf.computeShape()
# ### PSF Size
#
# This walkthrough focuses on the computation of PSF models and shape measurements using adaptive moments. Two common methods of computing the PSF size from adaptive moments are the trace radius and determinant radius.
#
# Adaptive second moments:
#
# $Q = \begin{bmatrix}
# I_{xx} & I_{xy} \\
# I_{xy} & I_{yy}
# \end{bmatrix}$
#
# The trace radius is defined as $r_{\rm tr}(Q) = \sqrt{\frac{I_{xx} + I_{yy}}{2}}$ and the determinant radius as $r_{\rm det}(Q) = (I_{xx} I_{yy} - I_{xy}^2)^{\frac{1}{4}}$
i_xx, i_yy, i_xy = shape.getIxx(), shape.getIyy(), shape.getIxy()
# Put in a few assert statements to show exactly what the Stack is doing
# Trace radius
assert np.isclose(shape.getTraceRadius(), np.sqrt((i_xx + i_yy) / 2.))
print('trace radius =', shape.getTraceRadius())
# Determinant radius
assert np.isclose(shape.getDeterminantRadius(), (i_xx * i_yy - i_xy**2)**(1. / 4.))
print('determinant radius =', shape.getDeterminantRadius())
# ### Comment Regarding Units
#
# What are the units of the trace radius and determinant radius? How do we convert the trace radius or determinant radius into FWHM in arcseconds? To do this, we need to quickly preview some functionality available from the calibrated image and source catalog.
#
# First, let's make sure that the PSF shape second moments from the PSF model in the calibrated exposure (calexp) match the entry for the source catalog for the PSF at the source position.
# +
from lsst.geom import Point2D
# PSF shape at coordinates of first indexed souce
shape = psf.computeShape(Point2D(src['slot_Centroid_x'][0], src['slot_Centroid_y'][0]))
print(shape.getIxx())
# Compare to catalog entry for PSF shape
print(src['slot_PsfShape_xx'][0])
# -
# Pixel scale in arcseconds
calexp.getWcs().getPixelScale().asArcseconds()
# Units of the second moments
#src.schema.find('base_SdssShape_psf_xx').getField().getUnits()
src.schema.find('slot_PsfShape_xx').getField().getUnits()
# We find that the second moments are in units of pixels$^2$, so the trace radius and determinant radius are in units of pixels. We found the pixel scale in arcseconds above, so multiplying by it converts these radii to arcseconds.
#
# For a one-dimensional Gaussian, the FWHM = $2 \sqrt{2 \ln 2}$ RMS = 2.35 RMS
#
# **THIS APPROXIMATION FOR THE FWHM SHOULD BE VERIFIED**
# Approximate FWHM in arcseconds
fwhm = 2 * np.sqrt(2. * np.log(2)) * shape.getTraceRadius() * calexp.getWcs().getPixelScale().asArcseconds()
print('FWHM = %.3f arcsec'%(fwhm))
# ### PSF Ellipticity
#
# **IMPORTANT:** Two conventions are commonly used in weak gravitational lensing to define ellipticity. The LSST Stack uses the
# "distortion" convention as opposed to the "shear" convention. See https://en.wikipedia.org/wiki/Gravitational_lensing_formalism#Measures_of_ellipticity for both definitions.
#
# The Science Requirements Document (https://docushare.lsst.org/docushare/dsweb/Get/LPM-17) also uses the distortion convention.
#
# Ellipticities in the distortion convention:
#
# $e_1 = \frac{I_{xx} - I_{yy}}{I_{xx} + I_{yy}}$
#
# $e_2 = \frac{2 I_{xy}}{I_{xx} + I_{yy}}$
#
# $\tan(2 \theta) = \frac{2 I_{xy}}{I_{xx} - I_{yy}} $
#
# $e = \sqrt{e_1^2 + e_2^2}$
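# Before turning to the Stack's own classes, here is a pure-numpy sketch of the distortion-convention formulas above, using illustrative second moments (not values from the PSF model):

```python
import numpy as np

# Illustrative adaptive second moments (pixels^2)
i_xx, i_yy, i_xy = 4.2, 3.8, 0.1

e1 = (i_xx - i_yy) / (i_xx + i_yy)                 # 0.05
e2 = (2.0 * i_xy) / (i_xx + i_yy)                  # 0.025
theta = 0.5 * np.arctan2(2.0 * i_xy, i_xx - i_yy)  # position angle
e = np.hypot(e1, e2)                               # sqrt(e1^2 + e2^2)

print(e1, e2, theta, e)
```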
# Put in a few assert statements to explicitly show what the Stack is doing
# +
# Ellipticity
from lsst.afw.geom.ellipses import Quadrupole, SeparableDistortionTraceRadius
q = Quadrupole(i_xx, i_yy, i_xy)
s = SeparableDistortionTraceRadius(q)
assert np.isclose(s.getE1(), (i_xx - i_yy) / (i_xx + i_yy)) # e1
print('e1 =', s.getE1())
assert np.isclose(s.getE2(), (2. * i_xy) / (i_xx + i_yy)) # e2
print('e2 =', s.getE2())
assert np.isclose(s.getEllipticity().getTheta(), np.arctan2(2. * i_xy, i_xx - i_yy) / 2.) # theta
print('theta =', s.getEllipticity().getTheta())
# An alternative way to compute the angle
e1, e2 = s.getE1(), s.getE2()
assert np.allclose(np.arctan2(e2, e1) / 2., np.arctan2(2. * i_xy, i_xx - i_yy) / 2.)
# -
# For visualization purposes, let's evaluate the PSF model at grid of points across the image
# +
x_array = np.arange(0, calexp.getDimensions()[0], 200)
y_array = np.arange(0, calexp.getDimensions()[1], 200)
xx, yy = np.meshgrid(x_array, y_array)
print(calexp.getDimensions())
size = []
i_xx = []
i_yy = []
i_xy = []
for x, y in zip(xx.flatten(), yy.flatten()):
    point = Point2D(x, y)
    shape = psf.computeShape(point)
    size.append(shape.getTraceRadius())
    i_xx.append(shape.getIxx())
    i_yy.append(shape.getIyy())
    i_xy.append(shape.getIxy())
size = np.reshape(size, xx.shape)
i_xx = np.reshape(i_xx, xx.shape)
i_yy = np.reshape(i_yy, xx.shape)
i_xy = np.reshape(i_xy, xx.shape)
theta = np.arctan2(2. * i_xy, i_xx - i_yy) / 2.
e1 = (i_xx - i_yy) / (i_xx + i_yy)
e2 = (2. * i_xy) / (i_xx + i_yy)
theta_alternate = np.arctan2(e2, e1) / 2.
assert np.allclose(theta, theta_alternate)
e = np.sqrt(e1**2 + e2**2)
ex = e * np.cos(theta)
ey = e * np.sin(theta)
# -
# and then plot the results in a few different ways
# +
plt.figure(figsize=(20, 6))
plt.subplots_adjust(wspace=0.5)
plt.subplot(1, 4, 1)
plt.quiver(xx, yy, ex, ey, e, headlength=0., headwidth=1., pivot='mid', width=0.01)
#plt.quiver(xx, yy, scale=e, angles=phi, headlength=0., headwidth=1., pivot='mid', width=0.005)
#colorbar = plt.colorbar(label='r$\sqrt(e1^{2} + e2^{2})$')
colorbar = plt.colorbar(label='e')
plt.clim(0., 0.05)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Ellipticity Sticks')
plt.subplot(1, 4, 2)
plt.scatter(xx, yy, c=e1, vmin=-0.05, vmax=0.05, cmap='bwr')
colorbar = plt.colorbar(label='e1')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Value of e1')
plt.subplot(1, 4, 3)
plt.scatter(xx, yy, c=e2, vmin=-0.05, vmax=0.05, cmap='bwr')
colorbar = plt.colorbar(label='e2')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Value of e2')
plt.subplot(1, 4, 4)
plt.scatter(xx, yy, c=size)
colorbar = plt.colorbar(label='Trace Radius')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Ellipticity Modulus');
# -
# ## Part 2: Source Catalog Shape Measurements
#
# The source catalogs include many measurements of source shapes and PSF model shapes. Let's look at the available columns relating to object shape:
# +
# I'm still learning how to use regular expressions...
#import re
#list(filter(re.compile(".*shape*"), src.schema.getOrderedNames()))
# -
# Columns for shape measurements
for name in src.schema.getOrderedNames():
    if 'shape' in name.lower():
        print(name)
# Columns for calibration measurements, including indicators of the stars used for PSF modeling and those held in reserve for testing the PSF model.
# Columns for calibration measurements
for name in src.schema.getOrderedNames():
if 'calib' in name.lower():
print(name)
# There are also "slots" defined for default measurements
# How are the slots defined?
for slot in ['slot_PsfShape', 'slot_Shape']:
print('%s -> %s'%(slot, src.schema.getAliasMap()[slot]))
# To figure out what these columns mean, look at https://arxiv.org/abs/1705.06745
#
# **Non PSF-corrected shapes**
#
# SDSS algorithm:
#
# `base_SdssShape_xx/xy/yy` = Adaptive moments in arcsec$^2$, SDSS algorithm
#
# `base_SdssShape_psf_xx/xy/yy` = Adaptive moments of PSF evaluated at object position in arcsec$^2$, SDSS algorithm
#
# HSM algorithm:
#
# _Galaxy shapes are estimated on the coadded i-band images using the re-Gaussianization PSF correction method (Hirata & Seljak 2003)... In the course of the re-Gaussianization PSF-correction method, corrections are applied to account for dilution of the observed shape by the PSF, including the non-Gaussianity of both the PSF and the galaxy surface brightness profiles._
#
# `ext_shapeHSM_HsmPsfMoments_xx/xy/yy` = Adaptive moments of PSF evaluated at object position in arcsec$^2$, HSM algorithm
#
# `ext_shapeHSM_HsmSourceMoments_xx/xy/yy` = Adaptive moments in arcsec$^2$, not PSF-corrected, HSM algorithm
#
# `ext_shapeHSM_HsmSourceMomentsRound_xx/xy/yy` = ??
#
# **Regaussianization shapes based on data alone, PSF-corrected**
#
# `ext_shapeHSM_HsmShapeRegauss_e1/e2` = distortion in sky coordinates estimated by regaussianization method defined in distortion, HSM algorithm (**NEED TO CHECK IF THESE ARE SKY OR INSTRUMENT COORDINATES**)
#
# `ext_shapeHSM_HsmShapeRegauss_sigma` = non-calibrated shape measurement noise, HSM algorithm
#
# `ext_shapeHSM_HsmShapeRegauss_resolution` = resolution of galaxy image defined in equation (4) of https://arxiv.org/abs/1705.06745, HSM algorithm
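# As a hedged, self-contained sketch of how these moment columns map onto the distortion-style ellipticity used below (mirroring the `ellipticity` helper defined in Part 3, with the trace radius added):

```python
import numpy as np

def moments_to_distortion(i_xx, i_yy, i_xy):
    """Adaptive second moments -> distortion components and trace radius."""
    e1 = (i_xx - i_yy) / (i_xx + i_yy)
    e2 = 2.0 * i_xy / (i_xx + i_yy)
    trace_radius = np.sqrt((i_xx + i_yy) / 2.0)
    return e1, e2, trace_radius

# A perfectly round source (i_xx == i_yy, i_xy == 0) has zero distortion
e1, e2, r = moments_to_distortion(4.0, 4.0, 0.0)
print(e1, e2, r)  # 0.0 0.0 2.0
```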
# With those definitions in mind, now verify that the PSF models from the SDSS and HSM algorithms are consistent by comparing adaptive moments of the PSF. The plot below shows that the ratio of adaptive moment sizes is very nearly unity. The plot also shows no trend in adaptive moments with respect to how well the source is resolved (i.e., source size), as expected since we are comparing PSF models as opposed to measurements for individual sources.
plt.figure()
plt.scatter(src['ext_shapeHSM_HsmShapeRegauss_resolution'],
src['base_SdssShape_psf_xx'] / src['ext_shapeHSM_HsmPsfMoments_xx'], s=1, label='xx')
plt.scatter(src['ext_shapeHSM_HsmShapeRegauss_resolution'],
src['base_SdssShape_psf_yy'] / src['ext_shapeHSM_HsmPsfMoments_yy'], s=1, label='yy')
plt.xlim(0., 1.)
plt.ylim(1. - 2.e-3, 1. + 2.e-3)
plt.legend(loc='upper left', markerscale=4)
plt.xlabel('ext_shapeHSM_HsmShapeRegauss_resolution')
plt.ylabel('base_SdssShape_psf / ext_shapeHSM_HsmPsfMoments');
# To illustrate the difference between the PSF-corrected and non-PSF-corrected quantities within the HSM algorithm, now compare the ellipticity computed from the adaptive moments (not-PSF corrected) with the ellipticity including PSF-corrections. In the plot below, we see a clear trend with the amount of source resolution. For sources that are increasingly resolved, the PSF-corrected and non-PSF-corrected ellipticities converge. For sources that are not well resolved, the PSF-corrected and non-PSF-corrected ellipticities diverge.
# +
i_xx, i_yy, i_xy = src['ext_shapeHSM_HsmSourceMoments_xx'], src['ext_shapeHSM_HsmSourceMoments_yy'], src['ext_shapeHSM_HsmSourceMoments_xy']
e1_non_psf_corrected = (i_xx - i_yy) / (i_xx + i_yy)
ratio = e1_non_psf_corrected / src['ext_shapeHSM_HsmShapeRegauss_e1']
#trace_radius = np.sqrt((src['base_SdssShape_xx'] + src['base_SdssShape_yy']) / 2.)
plt.figure()
plt.scatter(src['ext_shapeHSM_HsmShapeRegauss_resolution'], ratio, s=1)
plt.xlim(0., 1.)
plt.ylim(0., 1.2)
plt.xlabel('ext_shapeHSM_HsmShapeRegauss_resolution')
plt.ylabel('HsmSourceMoments_e1 / HsmShapeRegauss_e1');
# -
# ## Intermission
#
# In the next section of the notebook, we'll use a full tract worth of data for some of the examples. It will take about 10 minutes to ingest the object catalogs from each patch in the tract. If you don't mind waiting, set the SHORTCUT variable below to `False`.
import pandas as pd
# +
# Browse the deepCoadd_meas schema
coadd_dataid = {'band':'i', 'tract':4024, 'patch': (3) + 7*(4)}
coadd_skymap = butler.get('skyMap')
coadd_meas = butler.get('deepCoadd_meas', dataId=coadd_dataid)
# +
# Set SHORTCUT = True for quick evaluation but lower statistics,
# SHORTCUT = False to get all the objects from all the patches in the tract (~10 mins)
# If you don't take the shortcut, you'll need to do some uncommenting below as well.
SHORTCUT = True
# Pick a random tract and collect all the patches
tract = coadd_dataid['tract']
patch_array = []
for ii in range(coadd_skymap.generateTract(tract).getNumPatches()[0]):
for jj in range(coadd_skymap.generateTract(tract).getNumPatches()[1]):
patch_array.append((ii) + 7*(jj))
tract_array = np.tile(tract, len(patch_array))
if SHORTCUT:
# Only get three patches
#df_tract_patch = pd.DataFrame({'tract': [9936, 9936, 9936],
# 'patch': ['0,0', '0,1', '0,2']})
df_tract_patch = pd.DataFrame({'tract': tract_array[:3],
'patch': patch_array[:3]})
else:
# Get all the object catalogs from one tract
df_tract_patch = pd.DataFrame({'tract': tract_array,
'patch': patch_array})
# +
selected_columns = ['id',
'coord_ra',
'coord_dec',
'base_SdssCentroid_x',
'base_SdssCentroid_y',
'base_ClassificationExtendedness_value',
'base_ClassificationExtendedness_flag',
'ext_shapeHSM_HsmPsfMoments_x',
'ext_shapeHSM_HsmPsfMoments_y',
'ext_shapeHSM_HsmPsfMoments_xx',
'ext_shapeHSM_HsmPsfMoments_yy',
'ext_shapeHSM_HsmPsfMoments_xy',
'ext_shapeHSM_HsmPsfMoments_flag',
'ext_shapeHSM_HsmPsfMoments_flag_no_pixels',
'ext_shapeHSM_HsmPsfMoments_flag_not_contained',
'ext_shapeHSM_HsmPsfMoments_flag_parent_source',
'ext_shapeHSM_HsmShapeRegauss_e1',
'ext_shapeHSM_HsmShapeRegauss_e2',
'ext_shapeHSM_HsmShapeRegauss_sigma',
'ext_shapeHSM_HsmShapeRegauss_resolution',
'ext_shapeHSM_HsmShapeRegauss_flag',
'ext_shapeHSM_HsmShapeRegauss_flag_no_pixels',
'ext_shapeHSM_HsmShapeRegauss_flag_not_contained',
'ext_shapeHSM_HsmShapeRegauss_flag_parent_source',
'ext_shapeHSM_HsmShapeRegauss_flag_galsim',
'ext_shapeHSM_HsmSourceMoments_x',
'ext_shapeHSM_HsmSourceMoments_y',
'ext_shapeHSM_HsmSourceMoments_xx',
'ext_shapeHSM_HsmSourceMoments_yy',
'ext_shapeHSM_HsmSourceMoments_xy',
'ext_shapeHSM_HsmSourceMoments_flag',
'ext_shapeHSM_HsmSourceMoments_flag_no_pixels',
'ext_shapeHSM_HsmSourceMoments_flag_not_contained',
'ext_shapeHSM_HsmSourceMoments_flag_parent_source',
'slot_Centroid_x',
'slot_Centroid_y',
'slot_Shape_xx',
'slot_Shape_yy',
'slot_Shape_xy',
'slot_PsfShape_xx',
'slot_PsfShape_yy',
'slot_PsfShape_xy',
]
if dataset == 'HSC':
fluxCol = 'slot_PsfFlux_flux'
fluxErrCol = 'slot_PsfFlux_fluxSigma'
modelFluxCol = 'slot_ModelFlux_flux'
modelFluxErrCol = 'slot_ModelFlux_fluxSigma'
calibUsedCol = 'calib_psfUsed'
calibCandCol = 'calib_psfCandidate'
calibResCol = 'calib_psf_reserved'
elif dataset == 'DC2':
fluxCol = 'slot_PsfFlux_instFlux'
fluxErrCol = 'slot_PsfFlux_instFluxErr'
modelFluxCol = 'slot_ModelFlux_instFlux'
modelFluxErrCol = 'slot_ModelFlux_instFluxErr'
calibUsedCol = 'calib_psf_used'
calibCandCol = 'calib_psf_candidate'
calibResCol = 'calib_psf_reserved'
selected_columns += [fluxCol, fluxErrCol, modelFluxCol, modelFluxErrCol, calibUsedCol, calibCandCol, calibResCol]
fluxRatioMin = 100.
# +
# %%time
coadd_array = []
for ii in range(0, len(df_tract_patch)):
tract, patch = df_tract_patch['tract'][ii], df_tract_patch['patch'][ii]
print(tract, patch)
dataid = dict(coadd_dataid, tract=tract, patch=patch)  # update the patch too, not just the tract
coadd_ref = butler.get('deepCoadd_ref', dataId=dataid)
coadd_meas = butler.get('deepCoadd_meas', dataId=dataid)
coadd_calib = butler.get('deepCoadd_calexp', dataId=dataid).getPhotoCalib()
selected_rows = (coadd_ref['detect_isPrimary']
& ~coadd_meas['base_SdssCentroid_flag']
& ~coadd_meas['base_PixelFlags_flag_interpolated']
& ~coadd_meas['base_PixelFlags_flag_saturated']
& ~coadd_meas['base_PsfFlux_flag']
& ~coadd_meas['modelfit_CModel_flag'])
coadd_array.append(coadd_meas.asAstropy().to_pandas().loc[selected_rows, selected_columns])
coadd_array[-1]['detect_isPrimary'] = coadd_ref['detect_isPrimary'][selected_rows]
coadd_array[-1]['psf_mag'] = coadd_calib.instFluxToMagnitude(coadd_meas[selected_rows], 'base_PsfFlux')[:,0]
coadd_array[-1]['cm_mag'] = coadd_calib.instFluxToMagnitude(coadd_meas[selected_rows], 'modelfit_CModel')[:,0]
df_coadd = pd.concat(coadd_array)
# -
# If you just waited 10 minutes to get all the patches, it could be a good idea to write out your tract-level catalog for re-use later.
# +
if not SHORTCUT:
outfile = os.path.expandvars('$HOME/DATA/temp.h5')
print(outfile)
df_coadd.to_hdf(outfile, 'df')
# Then, to read in this file, do:
# df_coadd = pd.read_hdf(outfile, 'df')
# -
plt.figure()
plt.hexbin(np.degrees(df_coadd['coord_ra']), np.degrees(df_coadd['coord_dec']))
#plt.hexbin(df_coadd['base_SdssCentroid_x'], df_coadd['base_SdssCentroid_y'])
plt.xlim(plt.xlim()[::-1])
plt.xlabel('RA (deg)')
plt.ylabel('Dec (deg)');
# ## Part 3: Characterization
#
# Show a few examples of how to characterize shape measurements.
# ### Ellipticity residuals for PSF calibration stars and reserve stars
#
def ellipticity(i_xx, i_yy, i_xy):
e1 = (i_xx - i_yy) / (i_xx + i_yy)
e2 = (2. * i_xy) / (i_xx + i_yy)
return e1, e2
# Ellipticity residuals
e1_star, e2_star = ellipticity(df_coadd['slot_Shape_xx'], df_coadd['slot_Shape_yy'], df_coadd['slot_Shape_xy'])
e1_psf, e2_psf = ellipticity(df_coadd['slot_PsfShape_xx'], df_coadd['slot_PsfShape_yy'], df_coadd['slot_PsfShape_xy'])
e1_res, e2_res = e1_star - e1_psf, e2_star - e2_psf
# + active=""
# # Plots to investigate the S/N selection
# from matplotlib.colors import LogNorm
# plt.figure()
# plt.scatter(df_coadd['psf_mag'], df_coadd['psf_mag'] - df_coadd['cm_mag'],
# c=df_coadd[fluxCol] / df_coadd[fluxErrCol],
# vmin=1, vmax=500, s=1, norm=LogNorm())
# plt.colorbar(label='S/N')
# plt.xlim(16., 30.)
# plt.ylim(-0.1, 1.)
# plt.ylabel('psf - model')
# plt.xlabel('mag')
#
# plt.figure()
# plt.hist(df_coadd[fluxCol]/df_coadd[fluxErrCol], \
# bins=100, range=(0, 1000), \
# histtype='step', lw=2, density=True, log=True)
# plt.xlim(0, 1000.)
# plt.xlabel('S/N')
# +
cut_star = ((df_coadd[fluxCol] / df_coadd[fluxErrCol]) > 3.0) & (df_coadd['base_ClassificationExtendedness_value'] == 0)
print("Number of stars:",cut_star.sum())
plt.figure()
#plt.yscale('log')
plt.hist(e1_res[cut_star] / e1_psf[cut_star], bins=np.linspace(-1., 1., 51), range=(-1., 1.), histtype='step', lw=2, density=True, label='e1')
plt.hist(e2_res[cut_star] / e2_psf[cut_star], bins=np.linspace(-1., 1., 51), range=(-1., 1.), histtype='step', lw=2, density=True, label='e2')
plt.xlim(-1., 1.)
plt.legend(loc='upper right')
plt.xlabel('(e_star - e_psf) / e_psf')
plt.ylabel('PDF');
# -
# Repeat the exercise with the single-sensor catalog, but compare the stars used for PSF modeling with those held in reserve for testing. In this case, there are only ~100 stars total on the single sensor, so there are not enough statistics to see any more interesting trends. In practice, one would want to do this test with many more stars.
# Ellipticity residuals
e1_star, e2_star = ellipticity(src['slot_Shape_xx'], src['slot_Shape_yy'], src['slot_Shape_xy'])
e1_psf, e2_psf = ellipticity(src['slot_PsfShape_xx'], src['slot_PsfShape_yy'], src['slot_PsfShape_xy'])
e1_res, e2_res = e1_star - e1_psf, e2_star - e2_psf
# +
cut_psf_star = src[calibUsedCol]
cut_reserved_star = src[calibResCol]
print('Number of stars used for PSF modeling =', np.sum(cut_psf_star))
print('Number of reserved stars =', np.sum(cut_reserved_star)) # Should be about 20% of the sample used for PSF modeling
plt.figure()
#plt.yscale('log')
plt.hist(e1_res[cut_psf_star] / e1_psf[cut_psf_star], bins=np.linspace(-1., 1., 21), range=(-1., 1.), histtype='step', lw=2, density=True, label='PSF')
plt.hist(e1_res[cut_reserved_star] / e1_psf[cut_reserved_star], bins=np.linspace(-1., 1., 21), range=(-1., 1.), histtype='step', lw=2, density=True, label='Reserved')
plt.xlim(-1., 1.)
plt.legend(loc='upper right')
plt.xlabel('(e_star - e_psf) / e_psf')
plt.ylabel('PDF');
# -
# ### Compute aggregate statistics across the focal plane / tract
#
# This example shows how to use pandas to compute aggregate statistics across the focal plane. In this case, we plot the mean PSF model size (trace radius) computed in spatial bins over the sensor.
#Convert the src object to a pandas dataframe
df_sensor = src.asAstropy().to_pandas()
# Define a helper function to aggregate statistics in two-dimensions.
def aggregateStatistics2D(df, x_name, x_bins, y_name, y_bins, z_name, operation):
"""
This function preserves empty bins, which get filled by nan values
"""
x_centers = x_bins[:-1] + 0.5 * np.diff(x_bins)
y_centers = y_bins[:-1] + 0.5 * np.diff(y_bins)
grouped = df.groupby([pd.cut(df[y_name], bins=y_bins),
pd.cut(df[x_name], bins=x_bins)])
xx, yy = np.meshgrid(x_centers, y_centers)
zz = grouped[['id', z_name]].agg(operation)[z_name].values.reshape(xx.shape)
return xx, yy, zz
# Define a new column for the PSF model trace radius
df_sensor['psf_trace_radius'] = np.sqrt((src['slot_PsfShape_xx'] + src['slot_PsfShape_yy']) / 2.)
# The next plot should look the same as the plot above that shows the PSF model size evaluated in a grid across the sensor
# +
bin_size = 200 # pixels
x_bins = np.arange(0, calexp.getDimensions()[0] + bin_size, bin_size)
y_bins = np.arange(0, calexp.getDimensions()[1] + bin_size, bin_size)
xx, yy, zz = aggregateStatistics2D(df_sensor,
'slot_Centroid_x', x_bins,
'slot_Centroid_y', y_bins,
'psf_trace_radius', 'mean')
plt.figure()
plt.pcolormesh(xx, yy, zz)
plt.colorbar(label='Trace Radius')
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(0, calexp.getDimensions()[0])
plt.ylim(0, calexp.getDimensions()[1]);
# -
# Now repeat the aggregation exercise for the coadd tract.
# Define a new column for the PSF model trace radius
df_coadd['psf_trace_radius'] = np.sqrt((df_coadd['ext_shapeHSM_HsmPsfMoments_xx'] + df_coadd['ext_shapeHSM_HsmPsfMoments_yy']) / 2.)
# +
bin_size = 200 # pixels
x_bins = np.arange(np.min(df_coadd.base_SdssCentroid_x), np.max(df_coadd.base_SdssCentroid_x) + bin_size, bin_size)
y_bins = np.arange(np.min(df_coadd.base_SdssCentroid_y), np.max(df_coadd.base_SdssCentroid_y) + bin_size, bin_size)
xx, yy, zz = aggregateStatistics2D(df_coadd,
'base_SdssCentroid_x', x_bins,
'base_SdssCentroid_y', y_bins,
'psf_trace_radius', 'mean')
plt.figure()
plt.pcolormesh(xx, yy, zz)
plt.colorbar(label='Trace Radius')
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(np.min(df_coadd.base_SdssCentroid_x), np.max(df_coadd.base_SdssCentroid_x))
plt.ylim(np.min(df_coadd.base_SdssCentroid_y), np.max(df_coadd.base_SdssCentroid_y));
# -
# ### Ellipticity residual correlation functions
#
# Example of how to evaluate correlation functions, in this case, for ellipticity residuals on the coadd. This section reproduces the analysis of `validate_drp` (https://github.com/lsst/validate_drp) in a condensed form for easier reading.
import treecorr
import astropy.units as u
# Ellipticity residuals
e1_star, e2_star = ellipticity(df_coadd['slot_Shape_xx'], df_coadd['slot_Shape_yy'], df_coadd['slot_Shape_xy'])
e1_psf, e2_psf = ellipticity(df_coadd['slot_PsfShape_xx'], df_coadd['slot_PsfShape_yy'], df_coadd['slot_PsfShape_xy'])
e1_res, e2_res = e1_star - e1_psf, e2_star - e2_psf
# +
cut_star = ((df_coadd[fluxCol] / df_coadd[fluxErrCol]) > fluxRatioMin) & (df_coadd['base_ClassificationExtendedness_value'] == 0)
cut_star = cut_star & ~np.isnan(e1_res) & ~np.isnan(e2_res)
nbins=20
min_sep=0.25
max_sep=20
sep_units='arcmin'
verbose=False
catTree = treecorr.Catalog(ra=df_coadd['coord_ra'][cut_star], dec=df_coadd['coord_dec'][cut_star],
g1=e1_res[cut_star], g2=e2_res[cut_star],
dec_units='radian', ra_units='radian')
gg = treecorr.GGCorrelation(nbins=nbins, min_sep=min_sep, max_sep=max_sep,
sep_units=sep_units,
verbose=verbose)
gg.process(catTree)
r = np.exp(gg.meanlogr) * u.arcmin
xip = gg.xip * u.Unit('')
xip_err = np.sqrt(gg.varxip) * u.Unit('')
# -
plt.figure()
plt.xscale('log')
plt.yscale('log')
plt.errorbar(r.value, xip, yerr=xip_err)
plt.xlabel('Separation (arcmin)')
plt.ylabel('Median Residual Ellipticity Correlation');
| Validation/image_quality_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (tensorflow)
# language: python
# name: tensorflow
# ---
from ISR.models import RRDN
from ISR.models import Discriminator
from ISR.models import Cut_VGG19
# +
lr_train_patch_size = (200, 300)
layers_to_extract = [5, 9]
scale = 2
# Multiplying a tuple by an int repeats the tuple; scale each dimension instead
hr_train_patch_size = tuple(scale * s for s in lr_train_patch_size)
rrdn = RRDN(arch_params={'C':4, 'D':3, 'G':64, 'G0':64, 'T':10, 'x':scale}, patch_size=lr_train_patch_size)
f_ext = Cut_VGG19(patch_size=hr_train_patch_size, layers_to_extract=layers_to_extract)
discr = Discriminator(patch_size=hr_train_patch_size, kernel_size=3)
# -
from ISR.models import RDN
rrdn = RDN(arch_params={'C': 3, 'D':10, 'G':64, 'G0':64, 'x':2})
rrdn.model.summary()
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
mnist = tf.keras.datasets.mnist.load_data()  # note: this loads MNIST, not Fashion-MNIST
cifar100 = tf.keras.datasets.cifar100.load_data()
(train_image, train_label), (test_image, test_label) = cifar100
plt.imshow(train_image[52])
from ISR.utils import image_processing as im
# split_image_into_overlapping_patches expects a single image (H, W, C), not the
# whole batch, and returns the patches together with the padded image shape
patches, padded_shape = im.split_image_into_overlapping_patches(train_image[52], 32, 24)
train_image[52].shape
# process_array adds a batch dimension; expand=True appears to be what was intended
batch = im.process_array(train_image[52], expand=True)
plt.imshow(patches[0])
| .ipynb_checkpoints/ISR_model_test-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''base'': conda)'
# metadata:
# interpreter:
# hash: dca0ade3e726a953b501b15e8e990130d2b7799f14cfd9f4271676035ebe5511
# name: python3
# ---
import numpy as np
import pandas as pd
import sympy as sp
import scipy.integrate
import altair as alt
import diaux.viz
colors, palette = diaux.viz.altair_style()
# +
nu_max = sp.Symbol(r'{{\nu^{(max)}}}')
gamma_max = sp.Symbol(r'{{\gamma^{(max)}}}')
theta_a = sp.Symbol(r'{{\theta_a}}')
theta_0 = sp.Symbol(r'{{\theta_0}}')
K_M = sp.Symbol(r'{{K_M}}')
c = sp.Symbol('c')
beta = sp.Symbol(r'\beta')
phi_R = sp.Symbol(r'{{\phi_R}}')
phi_P = sp.Symbol(r'{{\phi_P}}')
nu_exp = nu_max * (1 + K_M / c)**-1
gamma_exp = gamma_max * (1 + theta_0/theta_a)**-1
eq = theta_a - beta * ((nu_exp/gamma_exp) * (phi_P/phi_R) - 1)
soln = sp.solve(eq, theta_a)
# -
c_x = sp.Symbol('c_x')
km_x = sp.Symbol('{{K_{M,x}}}')
phiy_steady = sp.Symbol(r'{{\phi_y}}')
soln[0]
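# As a hedged numeric cross-check of the same steady-state relation, with the $K_M/c$ factor and the $\phi_P/\phi_R$ ratio folded into a single effective `nu` (the sample parameter values below are assumptions, not fits):

```python
import numpy as np

# Steady state: theta_a = beta * (nu / gamma(theta_a) - 1),
# with gamma = gamma_max * theta_a / (theta_a + theta_0).
beta, nu, gamma_max, theta_0 = 0.5, 2.0, 1.5, 0.1  # assumed sample values

# Clearing denominators gives a quadratic in theta_a:
#   gamma_max * theta_a**2 + beta * (gamma_max - nu) * theta_a - beta * nu * theta_0 = 0
roots = np.roots([gamma_max, beta * (gamma_max - nu), -beta * nu * theta_0])
theta = roots[roots > 0][0]  # keep the physical (positive) root

gamma = gamma_max * theta / (theta + theta_0)
print(abs(theta - beta * (nu / gamma - 1)) < 1e-9)  # True: the root satisfies the relation
```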
# +
def single_nutrient(params, t0, gamma_max, nu_max, phi_R, phi_P, theta_0, Km, Y):
"""
Defines the complete dynamical model for growth on a single carbon source.
Parameters
----------
params: list, of order [M, theta_a, c, M_P]
List of the model state variables: biomass M, internal nutrient mass
theta_a, external nutrient concentration c, and metabolic protein mass M_P.
"""
M, theta_a, c, M_P = params
# Define the expressions for the translational and nutritional capacities
gamma = gamma_max * theta_a / (theta_a + theta_0)
nu = nu_max * c / (c + Km)
# Equation 1: Proteome Mass Dynamics
dM_dt = phi_R * M * gamma
# Metabolic protein synthesis
dMp_dt = phi_P * dM_dt
# Nutrient influx
dtheta_dt = nu * M_P
# Nutrient consumption
dc_dt = - dtheta_dt / Y
return [dM_dt, dtheta_dt - dM_dt, dc_dt, dMp_dt]
# +
gamma_max = (17.1 * 3600 / 7459)
nu_max = 2.4
theta_0 = 0.0013 * 20 # in M
Km = 0.005 # in mM
Y = 0.377
phi_R = 0.35
phi_P = 0.45
# Define the intial conditions
M = 0.001
theta_a = 0.0001
M_P = phi_P * M
c = 0.010
# Integrate
n_timesteps = 500
t_stop = 20
delta_t = t_stop/n_timesteps
t = np.linspace(0, t_stop, n_timesteps)
out = scipy.integrate.odeint(single_nutrient, [M, theta_a, c, M_P], t, args=(gamma_max,nu_max, phi_R, phi_P, theta_0, Km, Y))
# -
out
# odeint returns the integrated state variables, not their derivatives
_df = pd.DataFrame(out, columns=['M', 'theta_a', 'c', 'M_P'])
_df['rel_M'] = _df['M'] / M
_df['time'] = t
alt.Chart(_df).mark_line().encode(x='time:Q', y='rel_M:Q')
| code/analysis/modelling_growth.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ### 1. Which of the following would be effective in resolving a large issue if it happens again in the future?
# - [ ] Incident controller
# - [x] **Postmortem**
# - [ ] Rollbacks
# - [ ] Load balancers
#
# ### 2. During peak hours, users have reported issues connecting to a website. The website is hosted by two load balancing servers in the cloud and are connected to an external SQL database. Logs on both servers show an increase in CPU and RAM usage. What may be the most effective way to resolve this issue with a complex set of servers?
# - [ ] Use threading in the program
# - [ ] Cache data in memory
# - [x] **Automate deployment of additional servers**
# - [ ] Optimize the database
#
# ### 3. It has become increasingly common to use cloud services and virtualization. Which kind of fix, in particular, does virtual cloud deployment speed up and simplify?
# - [x] **Deployment of new servers**
# - [ ] Application code fixes
# - [ ] Log reviewing
# - [ ] Postmortems
#
# ### 4. What should we include in our postmortem? (Check all that apply)
# - [x] **Root cause of the issue**
# - [x] **How we diagnosed the problem**
# - [x] **How we fixed the problem**
# - [ ] Who caused the problem
#
# ### 5. In general, what is the goal of a postmortem? (Check all that apply)
# - [ ] To identify who is at fault
# - [x] **To allow prevention in the future**
# - [x] **To allow speedy remediation of similar issues in the future**
# - [ ] To analyze all system bugs
| troubleshooting-debugging-techniques/week-3/quiz-handling-bigger-incidents.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib notebook
import control as c
import ipywidgets as w
import numpy as np
from IPython.display import display, HTML
import matplotlib.pyplot as plt
import matplotlib.animation as animation
#display(HTML('<script> $(document).ready(function() { $("div.input").hide(); }); </script>'))
# Toggle cell visibility
from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide()
} else {
$('div.input').show()
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle code visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)
# -
# ## Building a PD Controller with Operational Amplifiers
#
# In analog electronics, operational amplifiers are commonly used to realize proportional-integral-derivative (PID) controllers. While mathematical models of linear time-invariant (LTI) systems assume ideal conditions, real circuits may not match them exactly.
#
# In most cases the ideal model gives acceptable results, but the frequency characteristics can be approximated more closely by extending the model with the open-loop gain:
# <br><br>
# $$G_{ideal}(s)=\frac{V_{out}}{V_{in}}=-\frac{Z_F}{Z_G}\qquad\qquad G_{approx}(s)=\frac{V_{out}}{V_{in}}=-\frac{\frac{A\cdot Z_F}{Z_G+Z_F}}{1+\frac{A\cdot Z_G}{Z_G+Z_F}}$$
# <br>
#
# In this example we will explore some op-amp-based PD controller configurations.<br>
# <b>First, select the open-loop gain value used in the calculations below!</b>
#
#
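# As a hedged numeric sanity check of the finite-gain inverting-amplifier result, evaluate both gains for a differentiator, $Z_G = 1/(j\omega C)$, $Z_F = R$ (the component values below are assumptions, not taken from the circuits that follow):

```python
import numpy as np

# Assumed sample values
R, C = 10e3, 1e-6            # feedback resistor [Ohm], input capacitor [F]
A = 200000.0                 # op-amp open-loop gain
w = 2 * np.pi * 10.0         # angular frequency at 10 Hz

Z_F = R
Z_G = 1.0 / (1j * w * C)

G_ideal = -Z_F / Z_G
G_approx = -(A * Z_F / (Z_G + Z_F)) / (1.0 + A * Z_G / (Z_G + Z_F))

# With a large open-loop gain the two expressions nearly coincide
rel_err = abs(G_approx - G_ideal) / abs(G_ideal)
print(rel_err < 1e-3)  # True
```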
# +
# Model selector
opampGain = w.ToggleButtons(
options=[('10 000', 10000), ('50 000', 50000), ('200 000', 200000),],
description='Op-amp open-loop gain: ', style={'description_width':'30%'})
display(opampGain)
# -
# The simplest implementation of a PD controller has a capacitor in the forward path and a resistor in the feedback path. The ideal model matches the mathematical form of the controller exactly. However, once the open-loop gain is included, a first-order component appears that acts as a low-pass filter. This form of PD controller is often used instead of the ideal version when designing practical applications.
# <br><br>
# <img src="Images/diff1.png" width="30%" />
# <br>
# <b>Adjust the passive components so that the non-ideal model comes closest to the ideal one! Where does a significant cutoff effect appear? What can be said about the phase plot?</b>
# +
# Figure definition
fig1, ((f1_ax1), (f1_ax2)) = plt.subplots(2, 1)
fig1.set_size_inches((9.8, 5))
fig1.set_tight_layout(True)
l1 = f1_ax1.plot([], [], color='red')
l2 = f1_ax2.plot([], [], color='red')
l3 = f1_ax1.plot([], [], color='blue')
l4 = f1_ax2.plot([], [], color='blue')
f1_line1 = l1[0]
f1_line2 = l2[0]
f1_line3 = l3[0]
f1_line4 = l4[0]
f1_ax1.legend(l1+l3, ['Non-ideal', 'Ideal'], loc=1)
f1_ax2.legend(l2+l4, ['Non-ideal', 'Ideal'], loc=1)
f1_ax1.grid(which='both', axis='both', color='lightgray')
f1_ax2.grid(which='both', axis='both', color='lightgray')
f1_ax1.autoscale(enable=True, axis='x', tight=True)
f1_ax2.autoscale(enable=True, axis='x', tight=True)
f1_ax1.autoscale(enable=True, axis='y', tight=False)
f1_ax2.autoscale(enable=True, axis='y', tight=False)
f1_ax1.set_title('Bode magnitude plot', fontsize=11)
f1_ax1.set_xscale('log')
f1_ax1.set_xlabel(r'$f\/$[Hz]', labelpad=0, fontsize=10)
f1_ax1.set_ylabel(r'$A\/$[dB]', labelpad=0, fontsize=10)
f1_ax1.tick_params(axis='both', which='both', pad=0, labelsize=8)
f1_ax2.set_title('Bode phase plot', fontsize=11)
f1_ax2.set_xscale('log')
f1_ax2.set_xlabel(r'$f\/$[Hz]', labelpad=0, fontsize=10)
f1_ax2.set_ylabel(r'$\phi\/$[°]', labelpad=0, fontsize=10)
f1_ax2.tick_params(axis='both', which='both', pad=0, labelsize=8)
# System model
def system_model(cg, rf, a):
Rf = rf * 1000 # convert kOhm to Ohm
Cg = cg * 1e-6 # convert uF to F
W_ideal = c.tf([-Rf*Cg, 0], [1])
W_ac = c.tf([-Rf*Cg*a, 0], [Rf*Cg, a+1])
global f1_line1, f1_line2, f1_line3, f1_line4
f1_ax1.lines.remove(f1_line1)
f1_ax2.lines.remove(f1_line2)
f1_ax1.lines.remove(f1_line3)
f1_ax2.lines.remove(f1_line4)
mag, phase, omega = c.bode_plot(W_ac, Plot=False) # Non-ideal Bode-plot
f1_line1, = f1_ax1.plot(omega/2/np.pi, 20*np.log10(mag), lw=1, color='red')
f1_line2, = f1_ax2.plot(omega/2/np.pi, phase*180/np.pi, lw=1, color='red')
mag, phase, omega = c.bode_plot(W_ideal, omega=omega, Plot=False) # Ideal Bode-plot at the non-ideal points
f1_line3, = f1_ax1.plot(omega/2/np.pi, 20*np.log10(mag), lw=1, color='blue')
f1_line4, = f1_ax2.plot(omega/2/np.pi, phase*180/np.pi, lw=1, color='blue')
f1_ax1.relim()
f1_ax2.relim()
f1_ax1.autoscale_view()
f1_ax2.autoscale_view()
print('Transfer function of the ideal PD:')
print(W_ideal)
print('\nTransfer function of the non-ideal PD:')
print(W_ac)
# GUI widgets
rf_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description=r'$R_f$$\ [k\Omega]\ :$', continuous_update=False,
layout=w.Layout(width='75%'), style={'description_width':'30%'})
cg_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description=r'$C_g$$\ [\mu F]\ :$', continuous_update=False,
layout=w.Layout(width='75%'), style={'description_width':'30%'})
input_data = w.interactive_output(system_model, {'rf':rf_slider, 'cg':cg_slider, 'a':opampGain})
display(w.HBox([cg_slider, rf_slider]), input_data)
# -
# This PD controller implementation, although quite simple, has several drawbacks, including high sensitivity to noise. To mitigate this problem, a resistor can be included in the forward path, while the transfer function of the system keeps its form.
# <br><br>
# <img src="Images/diff2.png" width="30%" />
# <br>
# <b>Adjust the passive components so that the non-ideal model comes closest to the ideal one! What are the differences compared to the previous model?</b>
# +
# Filtered PD - serial (with "stop")
fig2, ((f2_ax1), (f2_ax2)) = plt.subplots(2, 1)
fig2.set_size_inches((9.8, 5))
fig2.set_tight_layout(True)
l1 = f2_ax1.plot([], [], color='red')
l2 = f2_ax2.plot([], [], color='red')
l3 = f2_ax1.plot([], [], color='blue')
l4 = f2_ax2.plot([], [], color='blue')
f2_line1 = l1[0]
f2_line2 = l2[0]
f2_line3 = l3[0]
f2_line4 = l4[0]
f2_ax1.legend(l1+l3, ['Non-ideal', 'Ideal'], loc=1)
f2_ax2.legend(l2+l4, ['Non-ideal', 'Ideal'], loc=1)
f2_ax1.grid(which='both', axis='both', color='lightgray')
f2_ax2.grid(which='both', axis='both', color='lightgray')
f2_ax1.autoscale(enable=True, axis='x', tight=True)
f2_ax2.autoscale(enable=True, axis='x', tight=True)
f2_ax1.autoscale(enable=True, axis='y', tight=False)
f2_ax2.autoscale(enable=True, axis='y', tight=False)
f2_ax1.set_title('Bode magnitude plot', fontsize=11)
f2_ax1.set_xscale('log')
f2_ax1.set_xlabel(r'$f\/$[Hz]', labelpad=0, fontsize=10)
f2_ax1.set_ylabel(r'$A\/$[dB]', labelpad=0, fontsize=10)
f2_ax1.tick_params(axis='both', which='both', pad=0, labelsize=8)
f2_ax2.set_title('Bode phase plot', fontsize=11)
f2_ax2.set_xscale('log')
f2_ax2.set_xlabel(r'$f\/$[Hz]', labelpad=0, fontsize=10)
f2_ax2.set_ylabel(r'$\phi\/$[°]', labelpad=0, fontsize=10)
f2_ax2.tick_params(axis='both', which='both', pad=0, labelsize=8)
# System model
def system2_model(cg, rg, rf, a):
Rf = rf * 1000 # convert kOhm to Ohm
Rg = rg * 1000
Cg = cg * 1e-6 # convert uF to F
W_ideal = c.tf([-Rf*Cg, 0], [1])
W_ac = c.tf([-Rf*Cg*a, 0], [(Rf+Rg*(a+1))*Cg, a+1])
global f2_line1, f2_line2, f2_line3, f2_line4
f2_ax1.lines.remove(f2_line1)
f2_ax2.lines.remove(f2_line2)
f2_ax1.lines.remove(f2_line3)
f2_ax2.lines.remove(f2_line4)
mag, phase, omega = c.bode_plot(W_ac, Plot=False) # Non-ideal Bode-plot
f2_line1, = f2_ax1.plot(omega/2/np.pi, 20*np.log10(mag), lw=1, color='red')
f2_line2, = f2_ax2.plot(omega/2/np.pi, phase*180/np.pi, lw=1, color='red')
mag, phase, omega = c.bode_plot(W_ideal, omega=omega, Plot=False) # Ideal Bode-plot at the non-ideal points
f2_line3, = f2_ax1.plot(omega/2/np.pi, 20*np.log10(mag), lw=1, color='blue')
f2_line4, = f2_ax2.plot(omega/2/np.pi, phase*180/np.pi, lw=1, color='blue')
f2_ax1.relim()
f2_ax2.relim()
f2_ax1.autoscale_view()
f2_ax2.autoscale_view()
print('Transfer function of the ideal PD (differentiator with stop):')
print(W_ideal)
print('\nTransfer function of the non-ideal PD (differentiator with stop):')
print(W_ac)
# GUI widgets
rg2_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description=r'$R_g$$\ [k\Omega]\ :$', continuous_update=False,
layout=w.Layout(width='75%'), style={'description_width':'30%'})
rf2_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description=r'$R_f$$\ [k\Omega]\ :$', continuous_update=False,
layout=w.Layout(width='75%'), style={'description_width':'30%'})
cg2_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description=r'$C_g$$\ [\mu F]\ :$', continuous_update=False,
layout=w.Layout(width='75%'), style={'description_width':'30%'})
input_data = w.interactive_output(system2_model, {'rg':rg2_slider, 'rf':rf2_slider, 'cg':cg2_slider, 'a':opampGain})
display(w.HBox([rg2_slider, cg2_slider, rf2_slider]), input_data)
| ICCT_hr/examples/03/.ipynb_checkpoints/FD-18-PD_regulator_s_operacijskim_pojacalom-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Extracting Histogram XY Series
#
# This notebook illustrates steps to extract series of data to list format. The experiment dataset is "Student Performance in Exams", which can be found at [Kaggle](https://www.kaggle.com/spscientist/students-performance-in-exams).
import pandas as pd
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
df_raw = pd.read_csv("./dataset/StudentsPerformance.csv")
df_raw.shape
df_raw.describe()
df_raw.dtypes
# ## 1. Global Variable Setup
# +
BIN_SIZE = 10
SR_MATHSCORE = df_raw['math score']
MEAN = df_raw['math score'].describe()['mean']
STD = df_raw['math score'].describe()['std']
# -
# ## 2. Basic Dataframe Histogram for 'math score' Column
hist = df_raw['math score'].hist(bins=BIN_SIZE)
# ## 3. Histogram with Bell Curve
sr = df_raw['math score']
# +
# Fit a normal distribution to the data:
mu = sr.describe()['mean']
std = sr.describe()['std']
print("mu = {}, std = {}".format(mu, std))
# +
# Plot the histogram.
plt.hist(sr.values, bins=BIN_SIZE, histtype='stepfilled', density=True, alpha=0.6, color='g')
# Plot the PDF.
xmin, xmax = plt.xlim()
x = np.linspace(0, 100, 1000)
p = norm.pdf(x, mu, std)
plt.plot(x, p, 'r', linewidth=2)
title = "Fit results: mu = %.2f, std = %.2f" % (mu, std)
plt.title(title)
plt.axvline(x=mu, color='orange', linestyle='dashed', lw=2, alpha=0.7)
plt.axvline(x=mu-std, color='blue', linestyle='dashed', lw=2, alpha=0.7)
plt.axvline(x=mu+std, color='blue', linestyle='dashed', lw=2, alpha=0.7)
plt.show()
# -
# ### 3.1 Experimenting with the bell curve probability values
x
p
max(p)
0.025 * 10000
p_exp = p * 10000
p_exp
min(p_exp) * 10000
# #### Plotting bell curve
#
# By multiplying the bell curve y-series data by 10000, the y-values come close to the histogram y-series values. However, the histogram bars are missing; this has not been resolved yet.
plt.plot(x, p_exp)
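# A more principled scaling than the ad-hoc factor of 10000 is $N \times \text{bin width}$, which converts the PDF into expected bin counts. A sketch on synthetic scores (the CSV is not loaded here, so the values are illustrative):

```python
import numpy as np
from scipy.stats import norm

# Synthetic stand-in for df_raw['math score'] so the sketch is self-contained
rng = np.random.default_rng(0)
scores = rng.normal(66, 15, size=1000)

counts, edges = np.histogram(scores, bins=10)
bin_width = edges[1] - edges[0]

# Scale the PDF by N * bin_width so it has the same units as the counts
xs = np.linspace(edges[0], edges[-1], 200)
scaled_pdf = norm.pdf(xs, scores.mean(), scores.std()) * len(scores) * bin_width
# plt.hist(scores, bins=10) and plt.plot(xs, scaled_pdf) now line up vertically
```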
# ## 4. Zip XY values into Highcharts' data series format
# ### 4.1 Generate XY series for Histogram Bar
# +
# Histogram Data
hist_result = pd.cut(df_raw['math score'], BIN_SIZE).value_counts().sort_index()  # sort_index: value_counts orders by count, not by bin
# Get x_series of data
list_x = []
for item in hist_result.index.values:
    list_x.append(item.left)
# Create list of y_series data
list_y = hist_result.values.tolist()
series_bar_xy = [list(a) for a in zip(list_x, list_y)]
series_bar_xy
# -
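# An alternative sketch using `np.histogram`, whose bin counts come back already ordered by bin edge, so no re-sorting is needed; synthetic scores stand in for `df_raw['math score']`:

```python
import numpy as np

rng = np.random.default_rng(1)
scores = rng.integers(0, 101, size=1000)   # synthetic stand-in for the scores

counts, edges = np.histogram(scores, bins=10)
# pair each bin's left edge with its count, already in ascending bin order
series_bar_xy = [[float(left), int(c)] for left, c in zip(edges[:-1], counts)]
```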
# ### 4.2 Generate XY series for Bell Curve
bcx = np.linspace(0, 100, 10)
bcp = (norm.pdf(bcx, MEAN, STD)) * 10000
bellcurve_xy = [list(a) for a in zip(bcx.tolist(), bcp.tolist())]
bellcurve_xy
plt.plot(bcx, bcp)
hist = df_raw['math score'].hist(bins=BIN_SIZE)
| simple_ipynb/Histogram_To_HighchartFormat.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Python Asynchronous I/O Tutorial
import time
import httpx
import aiohttp
import asyncio
import requests
import multiprocessing
from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
urls = [
"https://www.google.com",
"https://www.microsoft.com",
"https://www.twitter.com",
"https://www.netflix.com",
"https://www.facebook.com",
"https://www.apple.com",
"https://www.amazon.com",
"https://www.reddit.com",
"https://www.sony.com",
"https://www.nintendo.com",
"https://www.oracle.com",
"https://www.intel.com",
"https://www.samsung.com",
"https://www.ibm.com",
]
# fix needed for Python 3.8+ on mac
# alternative is to use the 'multiprocess' library
multiprocessing.set_start_method('fork')
# ## Sequential
# +
start_time = time.time()
results = []
for each_url in urls:
    try:
        each_response = requests.get(each_url)
        each_content = each_response.content
        results.append(each_content)
    except Exception as e:
        print(f"{each_url} {e}")
        results.append(None)
time.time() - start_time
# -
# ## Multi-Processing
# +
start_time = time.time()
def example_multiprocessing_function(each_url):
    try:
        each_response = requests.get(each_url)
        each_content = each_response.content
        return each_content
    except Exception as e:
        print(f"{each_url} {e}")
        return None

with Pool(processes=multiprocessing.cpu_count()) as pool:
    results = pool.map(example_multiprocessing_function, urls)
time.time() - start_time
# -
# ## Multi-Threading
# +
start_time = time.time()
with ThreadPool(processes=multiprocessing.cpu_count()) as pool:
    results = pool.map(example_multiprocessing_function, urls)
time.time() - start_time
# -
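# The same fan-out can also be written with the stdlib `concurrent.futures` API instead of `multiprocessing.dummy`; a toy function stands in for the HTTP call here, so the example is self-contained:

```python
import concurrent.futures

def fetch(url):
    # placeholder for requests.get(url).content
    return len(url)

demo_urls = ["https://a.example", "https://bb.example"]
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    demo_results = list(executor.map(fetch, demo_urls))
print(demo_results)  # [17, 18]
```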
# ## Asyncio
# ### Tutorial
# +
start_time = time.time()
# 1. add "async" to def to create a coroutine
# 2. use "await" for concurrent/coroutine calls
# note: await calls and packages must support asyncio
async def example_asyncio_function(x):
    await asyncio.sleep(x)
    return x
# 3. create concurrent main coroutine
# 4. use asyncio.create_task() to create tasks
# 5. use asyncio.gather() to schedule concurrent tasks
# note: can also use asyncio.wait() and asyncio.wait_for()
async def concurrent_main_function():
    inputs = [1, 2, 3, 4, '5', 6]
    tasks = []
    for each_input in inputs:
        each_coroutine = example_asyncio_function(each_input)
        each_task = asyncio.create_task(each_coroutine)
        tasks.append(each_task)
    results = await asyncio.gather(*tasks, return_exceptions=True)
    return results
# 6. await main coroutine
# note: if outside of a notebook, use asyncio.run()
results = await concurrent_main_function()
time.time() - start_time
# -
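# A common follow-up: splitting the `gather(..., return_exceptions=True)` results into successes and failures. A sketch with a toy coroutine (`asyncio.run` is for scripts; inside a notebook use `await` instead):

```python
import asyncio

async def double(x):
    await asyncio.sleep(0)
    return x * 2            # None * 2 raises TypeError

async def main():
    inputs = [1, 2, None]   # None deliberately triggers an exception
    tasks = [asyncio.create_task(double(i)) for i in inputs]
    return await asyncio.gather(*tasks, return_exceptions=True)

results = asyncio.run(main())
ok = [r for r in results if not isinstance(r, BaseException)]
errors = [r for r in results if isinstance(r, BaseException)]
print(ok, len(errors))  # [2, 4] 1
```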
# ### HTTP Requests
# #### Option 1: httpx
# +
start_time = time.time()
async def example_asyncio_get_request_httpx(session, each_url):
    response = await session.get(each_url)
    each_content = response.content
    return each_content

async def main():
    async with httpx.AsyncClient() as session:
        tasks = []
        for each_url in urls:
            each_coroutine = example_asyncio_get_request_httpx(session, each_url)
            each_task = asyncio.create_task(each_coroutine)
            tasks.append(each_task)
        results = await asyncio.gather(*tasks, return_exceptions=True)
        return results
results = await main()
time.time() - start_time
# -
# #### Option 2: aiohttp
# +
start_time = time.time()
async def example_asyncio_get_request_aiohttp(session, each_url):
    async with session.get(each_url) as resp:
        result = await resp.text()
    return result

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = []
        for each_url in urls:
            each_coroutine = example_asyncio_get_request_aiohttp(session, each_url)
            each_task = asyncio.create_task(each_coroutine)
            tasks.append(each_task)
        results = await asyncio.gather(*tasks, return_exceptions=True)
        return results
results = await main()
time.time() - start_time
# -
| python_asyncio.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## T-SNE for tabular playground series
# imports for T-SNE
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
# warning and plotting preferences
import warnings
warnings.simplefilter("ignore")
sns.set_style("white")
sns.set_context("paper")
pal = dict(Class_1="#3b6e8c", Class_2="#efc758", Class_3="#bc5b4f", Class_4="#8fa37e")
pd.options.display.max_columns = None
pd.options.display.float_format = "{:.2f}".format
df = pd.read_csv("../data/raw/train.csv")
df.head(3)
# Taking a subsample for fast iteration
sub_df = df.drop(columns=['id']).sample(10000, random_state=42)
# Let the t-sne begin
tsne = TSNE()
df_embedded = tsne.fit_transform(sub_df.drop(columns=['target']))  # drop the categorical target before embedding
df_embedded.shape
df_result = pd.DataFrame({'tsne_1': df_embedded[:,0], 'tsne_2': df_embedded[:,1], 'label': sub_df.target})
fig, ax = plt.subplots(figsize=(8, 8))
sns.scatterplot(x='tsne_1', y='tsne_2', hue='label', data=df_result, ax=ax, s=10, palette=pal)
sns.despine()
plt.show()
tsne = TSNE(init='pca')
df_embedded = tsne.fit_transform(sub_df.iloc[:,:-1])
df_result = pd.DataFrame({'tsne_1': df_embedded[:,0], 'tsne_2': df_embedded[:,1], 'label': sub_df.target})
fig, ax = plt.subplots(figsize=(8, 8))
sns.scatterplot(x='tsne_1', y='tsne_2', hue='label', data=df_result, ax=ax, s=10, palette=pal)
sns.despine()
plt.show()
sns.jointplot(x='tsne_1', y='tsne_2', hue='label', data=df_result, s=10, palette=pal)
plt.show()
# +
# Next, look at different values of perplexity and iteration count, e.g.:
# perplexities = [5, 30, 50, 100]
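# A sketch of that sweep on a small synthetic blob (the Kaggle data is not loaded here, so the data and perplexity values are illustrative); note that perplexity must stay below the number of samples:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
# two well-separated 5-dimensional clusters, 60 points total
X_demo = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(5, 1, (30, 5))])

embeddings = {}
for perplexity in [5, 15, 25]:
    tsne_demo = TSNE(n_components=2, perplexity=perplexity, random_state=42)
    embeddings[perplexity] = tsne_demo.fit_transform(X_demo)
```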
| notebooks/t-sne_for_tps_202105.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Gathering OTLY Twitter Mention Data
# In this section we will be using snscrape to gather Twitter data surrounding OTLY's IPO date.
# Import libraries.
import os
import pandas as pd
# Using the OS library and snscrape to gather Twitter mention data on OTLY and ABNB
os.system('snscrape --jsonl --max-results 100000 --since 2021-04-20 twitter-search \"OTLY until:2021-08-20\"> text-query-otly.json')
os.system("snscrape --jsonl --max-results 100000 --since 2020-11-10 twitter-search \"ABNB until:2021-03-10\" > text-query-abnb.json")
# +
# Reads the json generated from the CLI command above and creates a pandas dataframe
otly_preipotweets_df = pd.read_json('text-query-otly.json', lines=True)
abnb_preipotweets_df = pd.read_json('text-query-abnb.json', lines=True)
# Display summary statistics for each dataframe
display(otly_preipotweets_df.describe())
display(abnb_preipotweets_df.describe())
# +
# Reduce dataframe to just include date.
tweetsbyday_otly = otly_preipotweets_df["date"]
tweetsbyday_abnb = abnb_preipotweets_df["date"]
# Drop time from date
tweetsbyday_otly = pd.to_datetime(tweetsbyday_otly).dt.date
tweetsbyday_abnb = pd.to_datetime(tweetsbyday_abnb).dt.date
# preview a couple of the DF's to make sure they are formatted correctly
display(tweetsbyday_otly.head())
display(tweetsbyday_abnb.head())
# -
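# The natural next step before plotting is counting mentions per day; a sketch with a few synthetic timestamps standing in for the scraped tweets:

```python
import pandas as pd

dates = pd.to_datetime(pd.Series([
    "2021-05-01 09:15:00", "2021-05-01 17:42:00", "2021-05-02 08:03:00",
]))
# one count per calendar day, in chronological order
mentions_per_day = dates.dt.date.value_counts().sort_index()
print(mentions_per_day.tolist())  # [2, 1]
```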
# Export twitter data to CSV.
tweetsbyday_otly.to_csv("Twitter Data/tweets_preipo_otly.csv")
tweetsbyday_abnb.to_csv("Twitter Data/tweets_preipo_abnb.csv")
| Jake_Code_Files/PreIpoTwitterScrape.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('firsttree').master('local[4]').getOrCreate()
from pyspark.ml import Pipeline
from pyspark.ml.classification import (
RandomForestClassifier,
GBTClassifier,
DecisionTreeClassifier)
# ### Load Data
data = spark.read.format('libsvm').load('../data/sample_libsvm_data.txt')
data.printSchema()
data.select('features').take(1)
# ### Test Train Split
train_data, test_data = data.randomSplit([0.8, 0.2])
# ### Three Default Trees
dtc = DecisionTreeClassifier()
rfc = RandomForestClassifier(numTrees=100)
gbt = GBTClassifier()
dtc_model = dtc.fit(train_data)
rfc_model = rfc.fit(train_data)
gbt_model = gbt.fit(train_data)
dtc_preds = dtc_model.transform(test_data)
rfc_preds = rfc_model.transform(test_data)
gbt_preds = gbt_model.transform(test_data)
dtc_preds.show()
# ### Evaluate
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
acc_eval = MulticlassClassificationEvaluator(metricName='accuracy')
print('DT Accuracy', acc_eval.evaluate(dtc_preds))
print('RFC Accuracy', acc_eval.evaluate(rfc_preds))
print('GBT Accuracy', acc_eval.evaluate(gbt_preds))
# ### Analyzing Feature Importance
rfc_model.featureImportances
| apache-spark/python/decision-trees/Decision Tree Basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
x = np.array([1., 2., 3., 4., 5.])
y = np.array([1., 3., 2., 3., 5.])
plt.scatter(x, y)
plt.axis([0, 6, 0, 6])
# 
# 
x_mean = np.mean(x)
y_mean = np.mean(y)
x_mean
# +
# numerator
num = 0.0
# denominator
d = 0.0
# -
for x_i, y_i in zip(x, y):
    num += (x_i - x_mean) * (y_i - y_mean)
    d += (x_i - x_mean) ** 2
a = num / d
b = y_mean - a * x_mean
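# The loop above can be vectorized with NumPy dot products — the same least-squares formula, and the idea behind the vectorized `SimpleLinearRegression2` used below:

```python
import numpy as np

x = np.array([1., 2., 3., 4., 5.])
y = np.array([1., 3., 2., 3., 5.])

# centered vectors replace the explicit Python loop
xc, yc = x - x.mean(), y - y.mean()
a_vec = xc.dot(yc) / xc.dot(xc)
b_vec = y.mean() - a_vec * x.mean()
```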
a
b
y_hat = a * x + b
plt.scatter(x, y)
plt.plot(x, y_hat, color='r')
plt.axis([0, 6, 0, 6])
x_predict = 6
y_predict = a * x_predict + b
y_predict
# # Using Our Own SimpleLinearRegression
from SimpleLinearRegression import SimpleLinearRegression1
reg1 = SimpleLinearRegression1()
reg1.fit(x, y)
reg1.predict(np.array([x_predict]))
reg1.a_
reg1.b_
y_hat1 = reg1.predict(np.array(x))
plt.scatter(x, y)
plt.plot(x, y_hat1, color='r')
plt.axis([0, 6, 0, 6])
# # Vectorized SimpleLinearRegression
from SimpleLinearRegression import SimpleLinearRegression2
reg2 = SimpleLinearRegression2()
reg2.fit(x, y)
reg2.a_
reg2.b_
# +
y_hat2 = reg2.predict(np.array(x))
plt.scatter(x, y)
plt.plot(x, y_hat2, color='r')
plt.axis([0, 6, 0, 6])
# -
# # Performance Test of the Vectorized Implementation
m = 1000000
big_x = np.random.random(size=m)
big_y = big_x * 2.0 + 3.0 + np.random.normal(size=m)
# %timeit reg1.fit(big_x, big_y)
# %timeit reg2.fit(big_x, big_y)
reg1.a_
reg1.b_
reg2.a_
reg2.b_
| 03Linear-Regression/01Simple-Linear-Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys, io, os
import numpy as np
from tqdm import trange
from matplotlib import pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import pickle as pk
from joblib import dump
X = np.load('elmoised.npy')
# If I downloaded more lyrics (>1024), then svd_solver='mle' would be applicable here, but I'd expect it to be too slow.
# Also, on my computer just computing ELMo for 730 songs takes ~30 minutes.
# +
red_dim = 16
svd = PCA(n_components=red_dim, svd_solver='randomized')
X2 = svd.fit_transform(X)
# -
variances = []
for i in trange(16):
    variances += [sum(svd.explained_variance_ratio_[:i + 1])]
plt.plot(variances)
plt.show()
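# The running sum above is just a cumulative sum; `np.cumsum` gives it directly and makes it easy to pick a component count — a sketch with hypothetical explained-variance ratios:

```python
import numpy as np

# hypothetical explained-variance ratios for 6 components
ratios = np.array([0.5, 0.2, 0.1, 0.08, 0.07, 0.05])
cumulative = np.cumsum(ratios)
# smallest number of components explaining at least 90% of the variance
n_keep = int(np.searchsorted(cumulative, 0.9) + 1)
```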
svd = PCA(n_components=8, svd_solver='randomized')
X2 = svd.fit_transform(X)
np.save('findata', X2)  # np.save takes the filename first
dump(svd, 'svd.joblib')
# svd = load('svd.joblib')  # to restore later (from joblib import load)
| .ipynb_checkpoints/MakePCA-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (greenbuildings)
# language: python
# name: greenbuildings
# ---
# ## GreenBuildings3: Build & Deploy Models With MLflow & Docker
#
# ------
#
# ## Contents
#
# __[1. Introduction](#intro)__
#
# __[2. Intro To MLflow](#mlflow-one)__
#
# __[3. Linear Regression & Logging A Simple Run](#mlflow-two)__
#
# __[4. XGBoost & Logging Nested Runs for GridSearchCV](#mlflow-three)__
#
# __[5. MLflow Models: Model Serving With REST APIs](#mlflow-four)__
#
# __[6. Deploying to Google App Engine with Docker](#mlflow-five)__
#
# __[7. Conclusions](#fifth-bullet)__
#
#
# --------------
#
# ## Introduction <a class="anchor" id="intro"></a>
# -------------
#
#
# This is the third and final post in a series of blog posts about energy usage and green house gas emissions of buildings in New York City. In the [first post](http://michael-harmon.com/blog/GreenBuildings1.html) I covered exploratory data analysis and outlier removal. In the [second post](http://michael-harmon.com/blog/GreenBuildings2.html) I covered imputing missing values. These topics make up the majority of what is called "data cleaning". This last post will deal with model building and model deployment. Specifically I will build a model of New York City building green house gas emissions based on the building energy usage metrics. After I build a sufficiently accurate model I will convert it to a [REST API](https://restfulapi.net/) for serving and then deploy the REST API to the cloud.
#
# The processes of model development and deployment are made a lot easier with the [MLflow](https://mlflow.org/) library. Specifically, I will cover using the [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html) framework to log all the different models I developed as well as their performance. MLflow tracking acts as a great way to memorialize and document the model development process. I will then use [MLflow Models](https://www.mlflow.org/docs/latest/models.html) to convert the top model into a [REST API](https://restfulapi.net/) for model serving. I will go over two ways MLflow Models creates REST APIs, including the newly added method that uses [Docker](https://www.docker.com/). Finally I will show how to simply deploy the "Dockerized" API to the cloud through [Google App Engine](https://cloud.google.com/appengine).
#
# Note the MLflow library is still *relatively new* and the API may change; for this reason I should remark that I am working with MLflow version 1.8.0. I should also point out that model serving through Docker was [experimental](https://www.mlflow.org/docs/latest/cli.html#mlflow-models-build-docker) in MLflow 1.8.0 and may have changed since I finished this project.
# ## Working With MLflow <a class="anchor" id="mlflow-one"></a>
# -----------------
#
# [MLflow](https://mlflow.org/) is an open source tool to make machine learning easier and more reproducible and was created by [Databricks](https://databricks.com/) (the same people who created [Apache Spark](https://spark.apache.org/)). There are many components to MLflow, but the two I will be looking at are,
#
# - [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html) : A tool for logging modeling experiments
# - [MLflow Models](https://www.mlflow.org/docs/latest/models.html) : A tool for serving models as REST APIs
#
# I will stick to using MLflow locally instead of a production set up. You can start the Web UI with the command:
#
# mlflow ui --host=0.0.0.0 --port=5050
#
# Then go to http://0.0.0.0:5050 in your web browser, where we will see the following:
#
#
# 
#
#
# We can see the generic MLflow website without any modeling experiment data. This will change soon enough. We can collect modeling information into "*experiments*" that will contain "*runs*". Each run could be one model or a series of different models each trained with different parameter values. In this way MLflow tracking is great for organizing and maintaining as much information about the development process as you like.
#
# Locally, MLflow will create a file directory called,
#
# mlruns
#
# that will be housed in the same path that the `mlflow ui` was run in. Let's import the library along with some other basic libraries:
# +
import mlflow
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
sns.set()
# -
# We can check the MLflow tracking URI to see where all the data for MLflow will be stored (i.e. where the `mlruns` directory is):
mlflow.get_tracking_uri()
# Now let's create an experiment for this project and get started!
try:
    mlflow.create_experiment("greenbuildings")
    experiment = mlflow.get_experiment_by_name("greenbuildings")
except:
    experiment = mlflow.get_experiment_by_name("greenbuildings")
# We can see that we get an "experiment" that has a number of attributes:
print(experiment)
# These attributes include:
#
# - **artifact_location** (where the metadata + models will be stored)
# - **experiment_id** (id to help us track the experiment)
# - **lifecycle_stage** (whether it's active, deleted, etc.)
# - **name** (experiment name)
# - **tag**
#
#
# The `experiment_id` is an important attribute and will be used quite frequently to know where to log and organize all the modeling information. Let's set that number as a variable to use later:
exp_id = experiment.experiment_id
# Let's move on to building our first model for predicting green house gas emission of buildings.
# ## Linear Regression & Logging A Simple Run <a class="anchor" id="mlflow-two"></a>
#
# Let's build a predictive model for green house gas emissions by multifamily homes and offices in New York City. We'll do this at first using a simple linear regression model. While not the best in terms of predictive performance, it is often a great first step since it allows us to interpret the effect each feature has on the predicted green house gas emissions. We'll discuss this more later, but for now let's import our data from [Google BigQuery](https://cloud.google.com/bigquery) using the set up from the [previous posts](http://michael-harmon.com/blog/GreenBuildings1.html):
# +
from google.oauth2 import service_account
from google.cloud import bigquery
import json
import pandas_gbq
credentials = service_account.Credentials\
.from_service_account_file('./derby.json')
pandas_gbq.context.credentials = credentials
pandas_gbq.context.project = credentials.project_id
# -
df = pandas_gbq.read_gbq("""
SELECT
CAST(Energy_Star AS INT64) AS Energy_Star,
Site_EUI,
NGI,
EI,
GHGI,
CAST(Residential AS INT64) AS Residential,
FROM
db_gb.clean_data
""")
# And get the target variable and features:
X = df.drop("GHGI",axis=1)
Y = df["GHGI"]
# Let's remind ourselves what the distribution of the target variable and predictors look like using the pairplot shown in the [last post](http://michael-harmon.com/blog/GreenBuildings2.html):
sns.pairplot(df,
vars=["Energy_Star","Site_EUI","NGI","EI","GHGI"],
height=2,
hue='Residential')
# **We can see from the last row of this graph that the relationship between `GHGI` and `Site_EUI`, `NGI`, as well as `EI` is somewhat linear, but the relationship of `GHGI` and `Energy_Star` is less well defined.**
#
# Let's create our train and test set as well as fix our random state (to have repeatable datasets):
# +
from sklearn.model_selection import train_test_split
random_state = 93
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=random_state)
# -
# As we stated earlier we'll start out with a <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html">Linear Regression model</a> since it is simple and interpretable. We can easily implement a least squares regression model using Scikit-learn:
# +
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.compose import ColumnTransformer
cat_feats = ['Residential']
num_feats = ['Energy_Star','Site_EUI','NGI','EI']
scaler = ColumnTransformer(transformers=[('num_transform', StandardScaler(), num_feats)],
remainder='passthrough')
pipe = Pipeline([('preprocessor', scaler),
                 ('reg', LinearRegression())])
model = pipe.fit(X_train, y_train)
# -
# Notice we use a [ColumnTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html) to only normalize the numerical features and not `Residential` which is categorical. We can then evaluate the model performance ($R^{2}$-score) on the test set to *see how much variance in the model we are able to explain* as well as the mean square error (MSE):
# +
y_pred = model.predict(X_test)
print("R2 score: {}".format(r2_score(y_test, y_pred)))
print("MSE: {}".format(mean_squared_error(y_test, y_pred)))
# -
# We can explain 64.76% of the variance, which is pretty good but definitely leaves room for improvement. Let's take a look at the coefficients of the linear regression model.
# +
# get the last stage of the pipeline which is the model
reg = pipe.steps[1][1]
# print the coefficients
print(pd.Series(reg.coef_,index=X_train.columns))
# -
# The model coefficients can be interpreted as follows: for a continuous feature, a one-unit increase yields a change in green house gas emissions equal to its coefficient. For example, increasing `Site_EUI` by 1 unit increases `GHGI` by 0.00164 units. We can see that increasing the electricity, energy, and natural gas intensities increases green house gas emissions, which makes sense. Increasing the Energy Star rating of the building tends to decrease the greenhouse gas emissions, which also makes sense. It also seems that residential buildings tend to emit more green house gases than office space buildings, albeit weakly.
#
# We can measure the p-values for the coefficients by using Scikit-Learn's <a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_regression.html">f_regression</a> function.
# +
from sklearn.feature_selection import f_regression
f_stats, pvals = f_regression(scaler.fit_transform(X_train), y_train)
print("P-Values:")
print(pd.Series(pvals,index=X_train.columns))
# -
# **We see that even though the coefficients of the regression model are rather small, their small p-values show that they are still significant and should be included in our model.** Overfitting a linear model can be quite obvious from the coefficients when one of the features has a large absolute value. In our model this does not seem to be the case, so we don't have to consider overfitting or regularization further.
# Let's add a run to the MLflow experiment that corresponds to this model. We use the `start_run` function and pass the experiment id along with the run name "Linear Regression":
run = mlflow.start_run(experiment_id=exp_id, run_name="Linear Regression")
# We can see that we have an active run that is a [RunInfo](https://www.mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.RunInfo) entity that maintains information about the run:
run.info
# We can add the metrics for our model using the `log_metric` function:
mlflow.log_metric("r2" ,r2_score(y_test, y_pred))
mlflow.log_metric("mse", mean_squared_error(y_test, y_pred))
# Let's look at some of the residuals in the continuous features to see if we can find any non-linear patterns that might signal ways to improve the model.
# +
from utilities.PlottingFunctions import plot_residuals
f = plot_residuals(X_test = X_test,
y_test = y_test,
y_pred = y_pred)
# -
# There are no obvious patterns in the residuals, but at the same time they **do not appear to be normally distributed as the theory says they should be.** This tells me that we might be able to use a more flexible model to capture the nonlinearities in the relationships.
#
# We can log this image as well using the `log_artifact` method:
f.savefig("resid.png")
mlflow.log_artifact("resid.png")
# For the time being let's log the model using the so called [scikit-learn flavor](https://mlflow.org/docs/latest/models.html#scikit-learn-sklearn) and end the run:
import mlflow.sklearn
mlflow.sklearn.log_model(model, "LinearModel")
mlflow.end_run()
# We can go to the MLflow UI to see that the run has been added with its metrics:
# 
# Clicking on the run we can see the model performance metrics, logged model and artifacts:
#
# 
#
# The `LinearModel` folder under the artifacts tab contains the [conda environment](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) (`conda.yaml`), the pickled model (`model.pkl`) and associated metadata (`MLmodel`). We should note that the `conda.yaml` file is used to package all the necessary libraries for serving the model `model.pkl`.
# ## XGBoost & Logging Nested Runs for GridSearchCV <a class="anchor" id="mlflow-three"></a>
#
# Let's try another model to see if we can't improve the $R^2$ score and MSE. One algorithm that performs quite well is [XGBoost](https://xgboost.readthedocs.io/en/latest/). XGBoost is based on gradient boosted decision trees and is one of the best performing machine learning models available. It builds the model in a stage-wise fashion like other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function. You can read more about gradient boosting [here](https://machinelearningmastery.com/gentle-introduction-gradient-boosting-algorithm-machine-learning/).
#
#
# Let's import the XGBoost Regressor and then run a small grid search using cross-validation to find the optimal parameter values:
# +
from xgboost import XGBRegressor
from sklearn.model_selection import GridSearchCV
# define the parameter values
parameters = {"n_estimators": [10, 15, 25, 50, 100, 150],
              "max_depth": [3, 5, 7, 10],
              "loss": ["ls", "lad"]
              }
# define the grid search and optimization metric
grid = GridSearchCV(estimator=XGBRegressor(),
                    param_grid=parameters,
                    scoring="r2",
                    cv=5,
                    n_jobs=-1)
# perform the grid search
xgb_model = grid.fit(X_train, y_train)
# -
# **NOTE: Scaling is unnecessary for tree based methods**.
#
# Now that we have the best model from our grid search over the training set, let's see how it performs on the test set:
# +
y_pred = xgb_model.predict(X_test)
print("R^2 score: {}".format(r2_score(y_test, y_pred)))
print("MSE {}".format(mean_squared_error(y_test, y_pred)))
# -
# A definite improvement in the $R^2$ score and MSE! Let's take a look at the residuals:
# +
f = plot_residuals(X_test = X_test,
y_test = y_test,
y_pred = y_pred)
f.savefig("resid.png")
# -
# Not terribly different than the linear regression model, but we see fewer outliers in the relationship between `NGI` and the residuals.
#
#
# While we have improved our prediction capabilities, one drawback to more complex models like XGBoost is that they are less interpretable. Despite this drawback, XGBoost still allows us to find the relative importance of the features:
# +
model = grid.best_estimator_
for coef in zip(X_train.columns, model.feature_importances_):
    print(coef)
# -
# While we know the importance of the features, we don't know how they affect the `GHGI`, i.e. if the `NGI` increases, does the `GHGI` go up or down? By how much? Given just the above, these are not questions the model can answer in general terms. This is what we mean by model interpretability!
#
#
# Now let's log all the information from the grid search and each model's performance using nested runs inside a "with" statement:
with mlflow.start_run(
        experiment_id=exp_id,
        run_name="XGBoostRegressor",
        nested=True
):
    # Get the grid cell results
    cv_results = grid.cv_results_
    # loop over each of the parameters and log them along
    # with the metric and rank of the model
    for params, metric, rank in zip(cv_results['params'],
                                    cv_results['mean_test_score'],
                                    cv_results["rank_test_score"]):
        with mlflow.start_run(experiment_id=exp_id,
                              nested=True):
            # log the parameters
            mlflow.log_params(params)
            # log the R2 score
            mlflow.log_metric("r2", metric)
            # set the rank
            mlflow.set_tag("rank", rank)
    # For the best estimator (xgb_model)
    # let's log the parameters for the best model and
    # its metric artifacts
    mlflow.log_params(grid.best_params_)
    mlflow.log_metrics({"r2": r2_score(y_test, y_pred),
                        "mse": mean_squared_error(y_test, y_pred)})
    mlflow.log_artifact("resid.png")
    mlflow.sklearn.log_model(xgb_model, "XGBoost")
# Let's take a look at the MLflow UI again:
#
# 
#
# We can see the + symbol on the left side of the `Run Name` corresponding to the run "XGBoostRegressor". Clicking on the symbol shows the rest of the results from the grid search as a dropdown:
# 
# We can see the different parameters used in each of the runs, the $R^2$ value for that model, as well as the ranking of that model. Notice that the first model and the last of the models with rank 1 are the same. However, their $R^2$ values are different. This is because one is the model performance on the test set, while the other is the average of the model performances in the 5-fold cross validation.
#
# Another nice feature of MLflow is the ability to compare model runs, to see the effect of, say, a hyper-parameter on model performance. You can select the model runs from the drop down and click the "compare" button displayed in the pictures above. I took a sample from the grid search above to see the effect of `max_depth` on the model with 100 estimators and `ls` for its loss function:
#
#
# 
#
#
# We can see that the $R^{2}$ decreases as the `max_depth` increases. This makes sense as taking larger values of the `max_depth` generally leads to overfitting.
# ## MLflow Models: Serving With REST APIs & Docker<a class="anchor" id="mlflow-four"></a>
#
# Now that we have built a good model for green house gas emissions, let's deploy it. One popular mechanism for deploying (or serving) a model is a [REST API](https://restfulapi.net/). Deploying a model as an API means that we create a [webserver](https://en.wikipedia.org/wiki/Web_server) with a [url](https://en.wikipedia.org/wiki/URL) that accepts requests. End users or "clients" make requests to the API and pass a list of data points containing features (usually as [json](https://www.json.org/json-en.html)). This list of features is fed into the model and the model spits out a list of predictions corresponding to those features. This list of predictions is sent back to the client (again usually as json).
#
# The first step in the process is to save the model using the [XGBoost](https://www.mlflow.org/docs/latest/python_api/mlflow.xgboost.html) module:
# +
import mlflow
import mlflow.xgboost

# point MLflow at the local SQLite tracking database before touching experiments
mlflow.set_tracking_uri("sqlite:////Users/mukeharmon/Documents/DS_Projects/NYCEnergyUsage/notebook/mlflow.db")

# reuse the experiment if it already exists, otherwise create it
experiment = mlflow.get_experiment_by_name("greenbuildings")
experiment_id = (experiment.experiment_id if experiment is not None
                 else mlflow.create_experiment("greenbuildings"))

# load the trained model from disk and log it (along with a metric) under a new run
model = mlflow.xgboost.load_model("/Users/mukeharmon/Documents/DS_Projects/NYCEnergyUsage/notebook/model/XGBModel")

with mlflow.start_run(experiment_id=experiment_id):
    mlflow.log_metric("mse", 2)
    mlflow.xgboost.log_model(model, "XGBoost")
# -
# This creates a folder similar to the `LinearModel` one we discussed above. The next step is to use the MLflow command line to serve the model. From the directory where we saved the above model we can use the command:
#
# mlflow models serve -m XGBModel
#
# This will initially show the following:
#
# 
# If everything builds properly we will then see the following:
#
# 
#
# Notice the "Listening at: http://127.0.0.1:5000"; this is the url for our webserver. We will make requests to get model predictions at that url. We have built a REST API that uses [flask](https://flask.palletsprojects.com/en/1.1.x/) and [gunicorn](https://gunicorn.org/) with a one-line command using MLflow! To see how difficult this would be to do by hand, see my other [github repo](https://github.com/mdh266/DockerMLRestAPI).
#
# Let's get some test data to try out our model's REST API:
test_df = X_test.head(2)
test_df
# Let's get the predictions that the XGBoost model gives us directly, to compare with what the REST API returns:
xgb_model.predict(test_df)
# Now let's convert the test data from a [Pandas](https://pandas.pydata.org/) dataframe to json:
test_json = test_df.to_json(orient='split')
test_json
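# For reference, here is what the `split` orientation produces on a tiny, hypothetical frame — the `columns`/`index`/`data` layout the scoring server expects:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
print(df.to_json(orient="split"))
# {"columns":["a","b"],"index":[0,1],"data":[[1,3],[2,4]]}
```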
# We can submit the json as a request for predictions to the REST API:
# +
import requests
result = requests.post(url="http://127.0.0.1:5000/invocations",
data=test_json,
headers={'Content-Type':'application/json'})
# -
result.json()
# The results are the same! Now let's take this one step further and use MLflow to build a [Docker](https://www.docker.com/) image so that we can deploy our REST API as a container. This, again, is only one line, and that command is:
#
# mlflow models build-docker -m XGBModel -n xgbmodel
#
# Where the `xgbmodel` is the tag for the Docker image and `XGBModel` is the folder we saved our model as. If the image is built properly we can see the following:
#
# 
# The image is `fafed3745d54` and the tag is `xgbmodel:latest`. We can start up our containerized REST API using the command:
#
# docker run -ip 8000:8080 fafed3745d54
#
# The `-p 8000:8080` is for [port forwarding](https://runnable.com/docker/binding-docker-ports). Notice that we have to use port 8000 because the results show:
#
# 
#
# We can then make a request to that url and port:
# +
result = requests.post(url="http://127.0.0.1:8000/invocations",
data=test_json,
headers={'Content-Type':'application/json'})
result.json()
# -
# The results are the same, as expected!
#
# There is one downside to using MLflow to build a Docker image: the resulting image turns out to be quite large. We can see this with the command:
#
# docker images
#
# which shows,
#
# 
#
# Our model takes up 2.72 GB!
# ## 6. Deploying to Google App Engine with Docker <a class="anchor" id="mlflow-five"></a>
# We have come to the last topic of this post, which is deploying our model as a REST API to the cloud. We can easily deploy the "Dockerized" model API to [Google App Engine](https://cloud.google.com/appengine) using the Docker image we created.
#
# The first step is to follow the instructions [here](https://cloud.google.com/container-registry/docs/quickstart) for copying the local Docker image to [Google Cloud Registry (GCR)](https://cloud.google.com/container-registry). For me the command was:
#
# 
#
# Once that is done we can check GCR to make sure the image has been pushed; you can see the results below:
#
# 
#
# Next I built the `app.yaml` for a custom runtime using the flexible environment, as described [here](https://cloud.google.com/appengine/docs/flexible/custom-runtimes/build). The contents of my `app.yaml` are:
#
# runtime: custom
# env: flex
# service: xgbmodel
# env_variables:
# DISABLE_NGINX: "true"
#
# It's important to note that running the container will start [nginx](https://www.nginx.com/) and [gunicorn](https://gunicorn.org/) processes, which we DO NOT want, and we therefore set `DISABLE_NGINX: "true"` as discussed [here](https://www.mlflow.org/docs/latest/cli.html#mlflow-models-build-docker).
#
#
# I then ran the command to deploy the app (`gcloud app deploy`) using `--image-url` with the address for my image in GCR:
#
# 
#
#
# Once everything is completed I can check that my app was created in the App Engine tab:
#
# 
#
# Now I can use the `target url` as pictured above to run a request against, as shown below:
# +
target_url = "https://xgbmodel-dot-advance-sonar-232016.uc.r.appspot.com/invocations"
result = requests.post(url = target_url,
data = test_json,
headers = {'Content-Type':'application/json'})
result.json()
# -
# It worked! To see how difficult this would be to do by hand, see my other [github repo](https://github.com/mdh266/DockerMLRestAPI). That's enough for this post!
#
# --------------
# ## Conclusions <a class="anchor" id="fifth-bullet"></a>
# --------------
#
# This blog post ends the series that started with a real-life dataset on building energy usage and greenhouse gas emissions. In previous posts we covered cleaning the dataset; in this post we used the cleaned dataset to build a model with $R^2 \, = \, 0.776$ and deployed it to Google App Engine as an API using Docker. This process was made significantly easier by the MLflow library for model development and serving. This project was a lot of fun and I learned a ton working on it. I hope you found this useful!
| notebooks/GreenBuildings3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # EXPLORING FURTHER RELATIONS OF METRICAL STRENGTH AND PITCH RELATIONS IN TURKISH MAKAM MUSIC 2019
# ## <NAME>, <NAME>
#
# This research aims to apply a three-fold approach to further explore relations of metrical strength in Turkish Makam music: (1) assess the extent to which the findings and conclusions drawn in Holzapfel et al.'s work [1] are reproducible on a different dataset, (2) extend Holzapfel et al.'s methodology by exploring correlation relationships across different makams, to observe whether particular makams also act to reinforce or diminish metrical strength, (3) perform a pitch correlation analysis to observe whether the makam structure reinforces or diminishes the usul metrical structure.
# The dataset used for this work is **SymbTr**, a collection of machine-readable symbolic scores aimed at computational studies of Turkish Makam music.
#
# [1] http://mtg.upf.edu/node/3886
# You can download the full pdf score dataset here: https://github.com/MTG/SymbTr-pdf \
# You can download the full xml files dataset here: https://github.com/MTG/SymbTr/releases/tag/v2.4.3
from music21 import *
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
# TODO: change paths to your own dirs
xml_path = 'path to MusicXML dir'
pdf_path = 'path to SymbTr-pdf dir'
#musescore_path = 'path to musescore'
# +
# Config Muse Score
#us = environment.UserSettings()
#us.create()
#us['musicxmlPath'] = musescore_path # Path to MuseScore3.exe
#us['ipythonShowFormat'] = 'musicxml'
# -
# ## Pre-processing and Balancing of Data
#
# We will not work on the full dataset, in order to have balanced data.
# For this reason, we will select only a subset of the available usuls and makams. In particular, we will select the most popular usuls based on value counts.
# Once we have decided which usuls we want to work on, we will select only makams that share all of the selected usuls.
#getting all the list of the different makam and usul used for each pdf file
usul = []
makam = []
for makam_file in os.listdir(pdf_path):
if makam_file[-3:] == 'pdf':
usul.append(makam_file.split('--')[2])
makam.append(makam_file.split('--')[0])
#value count of different usul
pd.Series(usul).value_counts()
# As we can see, the most popular usuls in the dataset are:
# * aksak
# * sofyan
# * duyek
# * aksaksemai
# * curcuna
#
# However, since we were not able to retrieve the weights of düm, tek and ke for the aksaksemai usul, we decided to discard it and focus the research on aksak, sofyan, duyek and curcuna.
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
a = pd.Series(usul).value_counts()
# +
# Output makam / usul distribution for all usuls which have > 5 occurrences in a makam
makams = pd.DataFrame(np.array([usul, makam]).T, columns=['usul', 'makam'])
makamsdf = makams.groupby(['makam', 'usul']).filter(lambda x: len(x) > 5)
makamsdf = makamsdf.groupby(['makam', 'usul'])['makam'].count()
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
print(makamsdf)
# -
# In the next step, only the makams that share all four usuls will be selected from the dataset, and the research will focus on those makams.
# +
high_pop_usuls = ['aksak', 'sofyan', 'duyek', 'curcuna'] # select the 4 most popular usuls for meaningful analysis
filtered_makams = makams.loc[makams['usul'].isin(high_pop_usuls)] # select makams which contain those usuls
filtered_makams = filtered_makams.groupby(['makam', 'usul']).filter(lambda x: len(x) > 3) # remove usul occurrences with <= 3
df = pd.DataFrame({'count' : filtered_makams.groupby( [ "makam", "usul"] ).size()}).reset_index()
# remove all makams which do not appear in all 4 usuls
vc = df['makam'].value_counts()
vc = vc[vc == 4].index
df = df.loc[df['makam'].isin(vc)]
# output chosen makams and usul counts
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
print(df)
# -
# We consider only makams that share the same four selected usuls.
#makams that will be considered for this research
makam_considered = df.makam.unique()
print("Makam considered for the research: {}".format(makam_considered))
# ## Hard Coded Usul Weights
#
# All weights for the usuls we consider are already defined in the paper we take as a baseline [1]. In that paper, the weighted metrical distributions are taken from the Mus2okr software [2].
# We use the same weights for our research.
#
# [1] http://mtg.upf.edu/node/3886
# [2] http://www.musiki.org/
# +
# weights dictionary
weights = {
"aksak": [3, 0, 0, 0, 2, 0, 1, 0, 3, 0, 0, 0, 2, 0, 0, 0, 1, 0],
"sofyan": [3, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 1, 0, 0, 0],
"duyek": [2, 0, 3, 0, 0, 0, 2, 0, 3, 0, 0, 0, 1, 0, 0, 0],
"curcuna": [3, 0, 0, 0, 1, 0, 2, 0, 0, 0, 3, 0, 0, 0, 2, 0, 0, 0, 1, 0]
}
# meter for usul
usul_tempo = {
"aksak": [9, 8],
"sofyan": [4, 4],
"duyek": [8, 8],
"curcuna": [10, 8]
}
#creating a dictionary to save bins needed to plot the histogram for every usul
usul_bins = {}
for usul in usul_tempo:
y_bins = (16/usul_tempo[usul][1]) * usul_tempo[usul][0]
usul_bins[usul] = int(y_bins)
# -
# Let's plot and see what the weight distribution looks like for each usul.
# +
# plotting weights for each usul
plt.style.use('ggplot')
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2)
fig.set_figheight(10)
fig.set_figwidth(15)
#create plot
# usul aksak
ax1.bar(np.arange(1, usul_bins["aksak"]+1, dtype=int), weights["aksak"])
ax1.set_xlabel("Weight location (1/16)")
ax1.set_xlim(0.5, usul_bins["aksak"])
ax1.set_ylabel("Weight")
ax1.set_title('Aksak meter weights')
#usul sofyan
ax2.bar(np.arange(1, usul_bins["sofyan"]+1, dtype=int), weights["sofyan"])
ax2.set_xlabel("Weight location (1/16)")
ax2.set_xlim(0.5, usul_bins["sofyan"])
ax2.set_ylabel("Weight")
ax2.set_title('Sofyan meter weights')
#usul duyek
ax3.bar(np.arange(1, usul_bins["duyek"]+1, dtype=int), weights["duyek"])
ax3.set_xlabel("Weight location (1/16)")
ax3.set_xlim(0.5, usul_bins["duyek"])
ax3.set_ylabel("Weight")
ax3.set_title('Duyek meter weights')
#usul curcuna
ax4.bar(np.arange(1, usul_bins["curcuna"]+1, dtype=int), weights["curcuna"])
ax4.set_xlabel("Weight location (1/16)")
ax4.set_xlim(0.5, usul_bins["curcuna"])
ax4.set_ylabel("Weight")
ax4.set_title('Curcuna meter weights')
#Create names on the x-axis
#plt.xticks(y_pos, bars)
#Show graphic
plt.tight_layout()
plt.show()
# -
# ## Working with makam scores
# Most makam scores have non-standard key signatures, and this might cause problems. We will show an example of this problem and a workaround that we will apply to every XML file we process, in order to be able to work with MusicXML files.
# Music21 was not developed with makam music in mind, and many of the accidentals used in makam music are not recognized by music21.
# path to test xml file
makamScore = xml_path + 'acemasiran--nakis--yuruksemai--ne_hevayi--dede_efendi.xml'
# +
# these are the names of all the accidentals used in makam scores, as contained in the MusicXML files
makamAccidentals = ['double-slash-flat', 'flat', 'slash-flat', 'quarter-flat', 'quarter-sharp', 'sharp', 'slash-quarter-sharp', 'slash-sharp']
# create a stream to contain altered notes
makamNotes = stream.Stream()
for i in range(len(makamAccidentals)): # create a note per accidental
try:
n = note.Note()
n.pitch.accidental = makamAccidentals[i] # add one accidental from the list
n.addLyric(makamAccidentals[i], applyRaw=True) # add the name of the accidental as lyric
n.addLyric(n.pitch.accidental.name, applyRaw=True) # add the name used by music21 as lyric
n.addLyric(n.pitch.accidental.alter) # add the number of semitones of the accidental as lyric
makamNotes.append(n)
except:
print("music21 doesn't accept {} as accidental".format(makamAccidentals[i]))
print('done')
makamNotes.show()
# -
# Since the problem when loading the score with music21 is the non-standard key signature, one solution is to manipulate the MusicXML file to get rid of the key signature.
# <br/>
# We use ElementTree to find it.
# +
import xml.etree.ElementTree as ET
tree = ET.parse(makamScore)
root = tree.getroot()
notes = []
accidentals = []
alter = []
for k in root.iter('key'):
for ks in k.findall('key-step'):
notes.append(ks.text)
for ka in k.findall('key-accidental'):
accidentals.append(ka.text)
for kalt in k.findall('key-alter'):
alter.append(kalt.text)
print('The key signature of this score has:')
for i in range(len(notes)):
print('-', notes[i], accidentals[i])
# -
# Now we can remove it from the MusicXML file and create a new file without a key signature
# +
for k in root.iter('key'):
print(k)
for ks in k.findall('key-step'):
k.remove(ks)
for ka in k.findall('key-accidental'):
k.remove(ka)
for kalt in k.findall('key-alter'):
k.remove(kalt)
newMakamScore = makamScore[:-4] + '-withoutKeySignature.xml'
print(newMakamScore)
tree.write(newMakamScore)
# -
# And now, music21 will load the score
s = converter.parse(newMakamScore)
s.show()
# ## Pre-processing all the files
#
# Now that we have seen how this works for one score, we will go through all the scores, applying the workaround we have just seen (removing the key signature) to each of them, so that we can work with MusicXML.
# We will also remove scores which contain more than one time signature, and scores whose time signature differs from the time signature of the usuls considered in this research.
#
def check_time_signature(usul, new_path):
'''
The function checks if the time signature of the current score is inline with the time signature declared
for the usul considered.
Input:
--------
:usul: usul considered
:new_path = path of the score considered
Output:
--------
    return 0: the file can be kept
    return 1: the file needs to be removed, either for a different time signature or for multiple time signatures
'''
s = converter.parse(new_path)
    p = s.parts # the parts of the score, which contain the time signature information
tS = p.flat.getElementsByClass(meter.TimeSignature).stream()
    # keep only scores with exactly one time signature
if len(tS) == 1:
score_time = [tS[0].numerator, tS[0].denominator]
if score_time != usul_tempo[usul]:
#different meter
return 1
else:
        # more than one time signature in the score
return 1
return 0
# +
makam_out_folder_path = 'path to processed score dir (where you want to save the processed scores)'
if not os.path.exists(makam_out_folder_path):
os.makedirs(makam_out_folder_path)
def remove_accidentals(makam_score):
'''
The function removes all the accidentals from the score.
Input:
-------
:makam_score: path of the makam score
'''
tree = ET.parse(makam_score)
root = tree.getroot()
for k in root.iter('key'):
print(k)
for ks in k.findall('key-step'):
k.remove(ks)
for ka in k.findall('key-accidental'):
k.remove(ka)
for kalt in k.findall('key-alter'):
k.remove(kalt)
makam_score = makam_score.split('/')[-1]
new_Makam_Score = makam_out_folder_path + makam_score[:-4] + '-withoutKeySignature.xml'
tree.write(new_Makam_Score)
return new_Makam_Score
# +
usul_considered = high_pop_usuls
makam_init_folder_path = xml_path
makam_count_before = 0
makam_count_noaccidentals = 0
makam_count_different_time = 0
counter = 0
#loop through the makam dataset
for makam_file in os.listdir(makam_init_folder_path):
#for mac and .DS_store file
if not makam_file.startswith('.'):
usul = makam_file.split('--')[2]
makam = makam_file.split('--')[0]
counter = counter + 1
        # if the score uses one of the considered usuls and one of the considered makams
if usul in usul_considered and makam in makam_considered:
makam_count_before = makam_count_before + 1
#remove accidentals
path_score = makam_init_folder_path + makam_file
new_path = remove_accidentals(path_score)
#check time signature for the current xml
different_time = check_time_signature(usul, new_path)
if different_time:
print("The file {} will be removed for different time signature".format(new_path))
os.remove(new_path)
makam_count_different_time = makam_count_different_time + 1
# -
print("Total number of makams in the dataset {}".format(counter))
# We only analyzed makams between ['hicaz' 'hicazkar' 'huseyni' 'huzzam' 'mahur' 'muhayyer' 'nihavent'
# 'rast' 'segah' 'ussak'] and usul ['aksak', 'sofyan', 'duyek', 'curcuna'].
print("We analyzed {} makams but kept only {} after the time signature check".format(makam_count_before, (makam_count_before-makam_count_different_time)))
# ### Scaling to k
# Definition of the functions we will need to scale bins according to the different usul meters.
# scale note distribution to k (number of bins of the usul)
def scale_to_k(OldMin, OldMax, OldValue, k):
    '''
    The function scales the note distribution to k (number of bins of the usul considered)
    Input:
    -------
    :OldMin: previous minimum for old value
    :OldMax: previous maximum for old value
    :OldValue: previous value to scale to new value
    :k: usul bins
    Return:
    --------
    :NewValue: scaled value
    '''
NewMin = 0
NewMax = k
NewValue = (((OldValue - OldMin) * (NewMax - NewMin)) / (OldMax - OldMin)) + NewMin
return NewValue
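# As a quick sanity check of this scaling (a self-contained restatement of the formula above, with lowercase argument names): a 9/8 bar spans 4.5 quarter-note beats and 18 sixteenth-note bins, so an offset of 2.25 beats lands exactly halfway through the bar, i.e. in bin 9.

```python
import math

def rescale(old_min, old_max, old_value, k):
    # same min-max rescaling as scale_to_k above, with NewMin = 0 and NewMax = k
    return (old_value - old_min) * k / (old_max - old_min)

scaled = rescale(0, 4.5, 2.25, 18)
print(math.floor(scaled))  # 9
```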
# function to calculate measure separation, discarding measures which are glissandos
def calc_measure_length(m1, m2, measures):
    '''
    The function calculates measure separation, discarding measures which are glissandos
Input:
-------
:m1: first measure
:m2: second measure
:measures: measures
Return:
--------
distance between measures
'''
if isinstance(measures[m1], m21.spanner.Glissando):
# shift m1 and m2
m1+=1
m2+=1
return calc_measure_length(m1,m2,measures)
if isinstance(measures[m2], m21.spanner.Glissando):
m2+=1
return calc_measure_length(m1,m2,measures)
else: return measures[m2].offset - measures[m1].offset
# ## Analysis of dataset: note distribution and pitch correlation
#
# We will first declare all the functions and variables we need to analyze the note distribution and pitch correlation. Then we will perform the analysis of the dataset.
# +
# counting how many beats there are per usul, and note definitions
from collections import defaultdict
# notes considered for the pitch correlation task, considering only the tone (pitch class)
note_considered = {
'C': 0,
'D': 1,
'E': 2,
'F': 3,
'G': 4,
'A': 5,
'B': 6
}
octaves = np.arange(7)
usul_weights_bins = defaultdict(dict)
for usul in usul_bins.keys():
for o in octaves:
usul_weights_bins[usul][o] = np.zeros(shape = (len(weights[usul]), len(note_considered)), dtype=int)
# -
# check if a bin is a weighted bin in the usul considered
def check_bin(usul, bin_value):
    '''
    The function checks if a bin is a weighted bin in the usul considered
    Input:
    -------
    :usul: usul considered
    :bin_value: bin to check
    Return:
    --------
    return 1: the bin is a weighted bin
    return 0: the bin is not a weighted bin
    '''
    return weights[usul][bin_value] != 0
# +
import music21 as m21
df_makam_bin_vals = pd.DataFrame(columns=['usul', 'makam', 'bin_vals', 'file_path'])
def analyse_note_dist(k, makam, usul, scoreParts, makam_score):
    '''
    The function analyses the note distribution of a score across the usul bins
    Input:
    -------
    :k: usul bins
    :makam: makam considered
    :usul: usul considered
    :scoreParts: parts of the score
    :makam_score: makam score path
    '''
note_offsets = np.array([])
meter_bins = np.zeros(k)
measures0 = scoreParts[0].elements[1:]
beats_in_bar = calc_measure_length(1,2,measures0)
for m in measures0:
if isinstance(m, m21.spanner.Glissando) == False:
for n1 in m.elements:
# only consider notes, not rests, time sigs ect.
if isinstance(n1, m21.note.Note):
#offset of the note
note_offset = n1.offset
note_offsets = np.append(note_offsets, note_offset)
#scaling to bins distribution of usul
scaled_offset = scale_to_k(0, beats_in_bar, note_offset, k)
bin_val = math.floor(scaled_offset)
if check_bin(usul, bin_val):
                        # it is a weighted bin, so it will be considered for pitch correlation
pitch = note_considered[n1.pitch.step]
octave = n1.octave
usul_weights_bins[usul][octave][bin_val][pitch] += 1
meter_bins[bin_val] += 1
else:
print(makam_score)
print('glissando found:', m)
# add row to df
df_makam_bin_vals.loc[len(df_makam_bin_vals)] = [usul, makam, meter_bins, makam_score]
# +
counter = 0
for makam_score in os.listdir(makam_out_folder_path):
usul = makam_score.split('--')[2]
makam = makam_score.split('--')[0]
k = usul_bins[usul] #value to scale to
makam_path = makam_out_folder_path + makam_score
counter = counter + 1
s = converter.parse(makam_path)
scoreParts = s.parts.stream()
analyse_note_dist(k, makam, usul, scoreParts, makam_score)
# -
# ## Pitch correlation
# In the next step, we will gather information on which notes and octaves have been played in each weighted bin.
# +
from operator import itemgetter
from heapq import nlargest
# Analyse the top 3 pitches for each bin (across all octaves)
notes = ['C', 'D', 'E', 'F', 'G', 'A', 'B']
usul_pitch_bins_dfs = {} # collection of all df for pitch/bin info for all usuls
for usul in usul_considered:
print('---------------------------')
print("Usul:", usul)
df_data = {'bin' : [], 'note' : [], 'octave' : [], 'count' : []} # reset df
# for every bin...
for i in range(len(weights[usul])):
# init new variable to append max vals to
max_vals_list = []
print('\nBIN:', i)
non_zero = False
max_val = 0
max_octave = 0
for o in octaves:
bin_array = usul_weights_bins[usul][o][i]
print('octave {}: {}'.format(o, bin_array))
if np.sum(bin_array) != 0:
largest3 = nlargest(3, enumerate(bin_array), key=lambda x: x[1])
largest3 = [x[::-1] for x in largest3] #reverse list to get value first
# append octave also to tuple
max_vals = [list(tup)+[o] for tup in largest3]
max_vals_list += max_vals
index_max = np.where(bin_array == np.max(bin_array))
max_val = np.max(bin_array)
non_zero = True # flag to consider if at least one octave has value counts in
if non_zero:
for max_note in nlargest(3,max_vals_list):
# returns [note_counts, index, octave] for top 3 note counts
df_data['bin'].append(i)
df_data['note'].append(notes[max_note[1]])
df_data['octave'].append(max_note[2])
df_data['count'].append(max_note[0])
usul_pitch_bins_dfs[usul] = pd.DataFrame.from_dict(df_data)
# -
# Now that we know which note and which octave has been played in each weighted bin, we can look at which notes are the most played in each weighted bin. We will consider only the 3 most popular ones.
#pitch correlation for usul aksak
usul_pitch_bins_dfs['aksak']
#pitch correlation for usul sofyan
usul_pitch_bins_dfs['sofyan']
#pitch correlation for usul duyek
usul_pitch_bins_dfs['duyek']
#pitch correlation for usul curcuna
usul_pitch_bins_dfs['curcuna']
print('Files that have been considered: {}'.format(counter))
# ## Metrical strength correlation
# Next, we will have a look at the bin values for each score. The bin_vals represent how many notes have been played on each bin.
df_makam_bin_vals
#normalization to 3
def scale_to_3(x):
'''
The function scales the input value to between 0 and 3
Input:
-------
:x: value to scale
Return:
--------
:x: value scaled
'''
NewMin = 0
NewMax = 3
OldMin = x.min()
OldMax = x.max()
x = (((x - OldMin) * (NewMax - NewMin)) / (OldMax - OldMin)) + NewMin
#print(x)
return x
# normalise bin_vals to be 0 - 3
df_makam_bin_vals['bin_vals'] = df_makam_bin_vals['bin_vals'].apply(scale_to_3)
# Bin values have been normalized between 0 and 3, so we can calculate the correlation between usul and makam patterns.
df_makam_bin_vals
# ## Correlation evaluation
#
# We will first plot the note distribution for each makam (only the first four will be plotted, but all the plots will be saved in an external folder; the user can define the path in the variable `plot_folder_path`).
#
# Then, the Spearman correlation between each makam score's note distribution and the corresponding usul will be calculated.
#
# The same will be done per makam, to see whether any makam contributes most to the correlation between note distribution and usul pattern.
#
# The correlation is first evaluated between each score and its usul.
# The final correlation for each usul is the average of the per-score correlations.
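# As a small, self-contained sketch of this evaluation (the aksak weights are the ones defined above, but the two per-score histograms are hypothetical): each score's binned note counts are correlated against the usul weights with Spearman's rho, and the per-usul correlation is the mean over scores.

```python
import numpy as np
import scipy.stats as sc

aksak_weights = [3, 0, 0, 0, 2, 0, 1, 0, 3, 0, 0, 0, 2, 0, 0, 0, 1, 0]

# two hypothetical per-score note histograms, already scaled to 0-3
histograms = [
    np.array([3, 0, 1, 0, 2, 0, 1, 0, 3, 0, 0, 0, 2, 0, 1, 0, 1, 0]),
    np.array([2, 0, 0, 1, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 0, 0, 1, 0]),
]

rhos = [sc.spearmanr(h, aksak_weights)[0] for h in histograms]
avg_rho = sum(rhos) / len(rhos)  # the per-usul correlation reported below
print(avg_rho)
```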
# +
#plot folder
plot_folder_path = 'path to dir where to save plots (the dir will be created next)'
if not os.path.exists(plot_folder_path):
os.makedirs(plot_folder_path)
#plot function
def plot_hist(x, makam, usul, i, file_path):
# PLOT
plt.style.use('ggplot')
fig = plt.figure()
bars = np.arange(1, len(x) + 1)
y_pos = np.arange(len(bars))
# Create bars
plt.bar(y_pos, x)
plt.xlabel("Location (1/16)")
plt.ylabel("Normalized count")
plt.title('Metrical strength for {}'.format(file_path))
# Create names on the x-axis
plt.xticks(y_pos, bars)
#save plot inside the folder
fig.savefig(plot_folder_path + '/' + file_path + '.png', dpi=200)
    # plotting just 4 to give an idea; all the plots will be saved into the defined folder
if i < 4: plt.show()
plt.close()
# +
import scipy.stats as sc
#correlations dictionary
correlation = {
"aksak": [0, 0, 0],
"sofyan": [0, 0, 0],
"duyek": [0, 0, 0],
"curcuna": [0, 0, 0]
}
# first key = makam, second key = usul
correlation_makam = {
"hicaz": {"aksak": [0, 0, 0], "sofyan": [0, 0, 0], "duyek": [0, 0, 0], "curcuna": [0, 0, 0]},
"hicazkar": {"aksak": [0, 0, 0], "sofyan": [0, 0, 0], "duyek": [0, 0, 0], "curcuna": [0, 0, 0]},
"huseyni": {"aksak": [0, 0, 0], "sofyan": [0, 0, 0], "duyek": [0, 0, 0], "curcuna": [0, 0, 0]},
"huzzam": {"aksak": [0, 0, 0], "sofyan": [0, 0, 0], "duyek": [0, 0, 0], "curcuna": [0, 0, 0]},
"mahur": {"aksak": [0, 0, 0], "sofyan": [0, 0, 0], "duyek": [0, 0, 0], "curcuna": [0, 0, 0]},
"muhayyer": {"aksak": [0, 0, 0], "sofyan": [0, 0, 0], "duyek": [0, 0, 0], "curcuna": [0, 0, 0]},
"nihavent": {"aksak": [0, 0, 0], "sofyan": [0, 0, 0], "duyek": [0, 0, 0], "curcuna": [0, 0, 0]},
"rast": {"aksak": [0, 0, 0], "sofyan": [0, 0, 0], "duyek": [0, 0, 0], "curcuna": [0, 0, 0]},
"segah": {"aksak": [0, 0, 0], "sofyan": [0, 0, 0], "duyek": [0, 0, 0], "curcuna": [0, 0, 0]},
"ussak": {"aksak": [0, 0, 0], "sofyan": [0, 0, 0], "duyek": [0, 0, 0], "curcuna": [0, 0, 0]}
}
for i in df_makam_bin_vals.index:
sample = df_makam_bin_vals.loc[i]
usul = sample['usul']
makam = sample['makam']
x = sample['bin_vals']
file_path = sample['file_path'].split('-withoutKeySignature.xml')[0]
print(file_path)
plot_hist(x, makam, usul, i, file_path)
#correlation and p-value for each usul
correlation[usul][0] = int(correlation[usul][0]) + 1
correlation[usul][1] = correlation[usul][1] + sc.spearmanr(x, weights[usul])[0]
correlation[usul][2] = correlation[usul][2] + sc.spearmanr(x, weights[usul])[1]
#correlation and p-value for each usul considering the makam as well
correlation_makam[makam][usul][0] += 1
correlation_makam[makam][usul][1] += sc.spearmanr(x, weights[usul])[0]
correlation_makam[makam][usul][2] += sc.spearmanr(x, weights[usul])[1]
print("Total number of makam processed: {}".format(len(df_makam_bin_vals)))
# +
# Create DataFrame
correlation_df = pd.DataFrame(correlation, index =['Total makam', 'correlation', 'p-value'])
correlation_df.iloc[0].apply(int)
correlation_df
# -
for usul in usul_considered:
#average correlations per usul
print("Correlation for {} usul: {}, p-value: {}".format(usul, correlation[usul][1]/correlation[usul][0],
correlation[usul][2]/correlation[usul][0]))
# +
#todo: make a data frame as well
correlation_makam_avg = {
"hicaz": {"aksak": [0, 0], "sofyan": [0, 0], "duyek": [0, 0], "curcuna": [0, 0]},
"hicazkar": {"aksak": [0, 0], "sofyan": [0, 0], "duyek": [0, 0], "curcuna": [0, 0]},
"huseyni": {"aksak": [0, 0], "sofyan": [0, 0], "duyek": [0, 0], "curcuna": [0, 0]},
"huzzam": {"aksak": [0, 0], "sofyan": [0, 0], "duyek": [0, 0], "curcuna": [0, 0]},
"mahur": {"aksak": [0, 0], "sofyan": [0, 0], "duyek": [0, 0], "curcuna": [0, 0]},
"muhayyer": {"aksak": [0, 0], "sofyan": [0, 0], "duyek": [0, 0], "curcuna": [0, 0]},
"nihavent": {"aksak": [0, 0], "sofyan": [0, 0], "duyek": [0, 0], "curcuna": [0, 0]},
"rast": {"aksak": [0, 0], "sofyan": [0, 0], "duyek": [0, 0], "curcuna": [0, 0]},
"segah": {"aksak": [0, 0], "sofyan": [0, 0], "duyek": [0, 0], "curcuna": [0, 0]},
"ussak": {"aksak": [0, 0], "sofyan": [0, 0], "duyek": [0, 0], "curcuna": [0, 0]}
}
#change name of this
total = 0
for makam, value in correlation_makam.items():
for usul, value1 in value.items():
avg_correlation = value1[1]/value1[0]
avg_pval = value1[2]/value1[0]
# update our dict
correlation_makam_avg[makam][usul][0] = avg_correlation
correlation_makam_avg[makam][usul][1] = avg_pval
total += value1[0]
# assert we have considered all makams
print("Total number of makam processed: {}".format(total))
# -
# Create DataFrame
correlation_df_makams = pd.DataFrame(correlation_makam_avg)
for makam in makam_considered:
print('Makam:', makam)
total_correlation = 0
total_pval = 0
usul_count = 0
for usul in usul_considered:
correlation_val = correlation_df_makams[makam][usul][0]
p_val = correlation_df_makams[makam][usul][1]
total_correlation += correlation_val
total_pval += p_val
usul_count += 1
# output stats
print("Correlation for {} usul: {}, p-value: {}".format(usul, correlation_val, p_val))
print('Average correlation for {} makam: {}, p-value: {}\n'.format(makam, total_correlation/usul_count,
total_pval/usul_count))
| MeterExploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from math import pi
from torch import nn
import seaborn as sns
from tqdm.notebook import tqdm
import torch.nn.functional as F
import matplotlib.pyplot as plt
def data(batch_size, noise=.05):
    # sample a standardized two-moons batch as the target distribution
    X, y = datasets.make_moons(n_samples=batch_size, noise=noise)
    X = StandardScaler().fit_transform(X)
    return torch.tensor(X).float()
# +
class Flow(nn.Module):
def __init__(self, seq_len, hidden, flip):
super().__init__()
half_seq_len = seq_len // 2
self.log_s = nn.Sequential(
nn.Linear(half_seq_len, hidden),
nn.ReLU(True),
nn.Linear(hidden, half_seq_len)
)
self.b = nn.Sequential(
nn.Linear(half_seq_len, hidden),
nn.ReLU(True),
nn.Linear(hidden, half_seq_len)
)
self.flip = flip
def forward(self, z):
za, zb = z.chunk(2, dim=1)
if self.flip:
za, zb = zb, za
log_s, b = self.log_s(za), self.b(za)
ya = za
yb = (zb - b) / log_s.exp()
if self.flip:
ya, yb = yb, ya
y = torch.cat((ya, yb), dim=1)
return y
def reverse(self, y):
ya, yb = y.chunk(2, dim=1)
if self.flip:
ya, yb = yb, ya
log_s, b = self.log_s(ya), self.b(ya)
za = ya
zb = yb*log_s.exp() + b
if self.flip:
za, zb = zb, za
z = torch.cat((za, zb), dim=1)
return z, log_s
class Model(nn.Module):
def __init__(self, d, hidden, num_flows):
super().__init__()
self.flows = nn.ModuleList([
Flow(d, hidden, i%2==0)
for i in range(num_flows)
])
self.distribution = torch.distributions.MultivariateNormal(torch.zeros(d), torch.eye(d))
def forward(self, batch_size):
y = self.distribution.rsample(sample_shape=(batch_size,))
for flow in self.flows:
y = flow(y)
return y
def reverse(self, y):
"""This is used for training in this system"""
z = y
log_jacob = 0
for flow in reversed(self.flows):
z, ls = flow.reverse(z)
log_jacob += ls.sum(-1)
log_pz = self.distribution.log_prob(z)
return z, log_pz, log_jacob
# +
num_flows = 5
batch_size = 512
epochs = 1000
display_div = 99
model = Model(2, 1024, num_flows)
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in tqdm(range(epochs), "Fitting"):
optim.zero_grad()
y = data(batch_size)
z, log_pz, log_jacob = model.reverse(y)
loss = (-log_pz - log_jacob).mean()
loss.backward()
optim.step()
with torch.no_grad():
if step % display_div == 0:
print(f"step {step:04d}:: {loss.item()} [{log_pz.mean().item(), log_jacob.mean().item()}]")
y2 = model.forward(batch_size)
fig, ax = plt.subplots(1,2, figsize=(25,5), constrained_layout=True)
ax[0].scatter(y[:, 0], y[:, 1])
ax[1].scatter(y2[:, 0], y2[:, 1])
fig.show()
| real-nvp-moons.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Deep Q-Network
# Acknowledgement: https://medium.com/@awjuliani/simple-reinforcement-learning-with-tensorflow-part-4-deep-q-networks-and-beyond-8438a3e2b8df
#
# 17/02/2017
#
# From Q-network to deep Q-network.
# Further improvements have been made to the deep Q-network: double DQN and dueling DQN improve performance and stability and shorten training time.
#
# Revision: convolutional layers consider a region of the image instead of looking at each pixel independently.
#
#
# Technical:
# Convolution layer premade function in TF: _tf.contrib.layers.convolution2d_
# ```
# convolution_layer = tf.contrib.layers.convolution2d(inputs,num_outputs,kernel_size,stride,padding)
# ```
# - num_outputs - the number of filters to apply over the previous layer
# - kernel_size - the size of the sliding window to move over the previous layer
# - stride - how many pixels to skip when sliding the window across the layer
# - padding - whether to pad the window out to the same dimension as the previous layer
#
#
#
# Experience replay - stores agent experiences and then randomly drawing batches of them to train the network.
#
# Experiences are stored as a _tuple_.
# ``` <state, action, reward, next state>```
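# A minimal replay buffer along these lines (class and method names are illustrative, not from the article) could be:

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores <state, action, reward, next state> tuples and samples random batches."""
    def __init__(self, capacity=50000):
        # deque drops the oldest experience automatically once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # random draws break the correlation between consecutive experiences
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=3)
for t in range(5):
    buf.add(t, 0, 1.0, t + 1)
print(len(buf.buffer))  # 3: only the newest experiences are kept
```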
#
# __DDQN__
# - Instead of taking the max over Q-values when computing the target Q-value:
# - Use the primary network to choose an action
# - Use the target network to generate the target Q-value for that action
#
# __Why?__
# - Because suboptimal actions regularly receive higher Q-values than optimal actions
#
# DDQN equation for updating the target value:
#
# _```Q-Target = r + γQ(s’,argmax(Q(s’,a,ϴ),ϴ’))```_
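# A plain-numpy sketch of this target computation (array names are illustrative): the primary network's Q-values choose the action, and the target network's Q-values evaluate it.

```python
import numpy as np

def ddqn_target(rewards, q_primary_next, q_target_next, gamma=0.99):
    # primary network chooses the greedy action for each next state
    best_actions = np.argmax(q_primary_next, axis=1)
    # target network evaluates the chosen actions
    evaluated = q_target_next[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * evaluated

rewards = np.array([1.0, 0.5])
q_primary = np.array([[0.2, 0.9], [0.7, 0.1]])  # greedy actions: [1, 0]
q_target = np.array([[0.3, 0.6], [0.4, 0.8]])   # values used: [0.6, 0.4]
print(ddqn_target(rewards, q_primary, q_target, gamma=0.9))  # [1.54 0.86]
```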
# __Dueling DQN__
# - Goal of Dueling DQN is to have a network that separately computes the advantage and value function, and combines them back into a single Q-function only at the final layer.
#
# __```Q(s,a)```__ correspond to - how good it is to take _certain action_ given a _certain state_.
# This notion can be broken down into: ```V(s) and A(a)```
# - Value function V(s) - the value of the given state
# - Advantage function A(a) - how much better taking a certain action would be compared to others
#
# Therefore:
#
# _```Q(s,a) = V(s) + A(a)```_
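# A small numpy sketch of the combining step (in the dueling-DQN paper the advantage stream is centred by subtracting its mean so that V and A are identifiable; names here are illustrative):

```python
import numpy as np

def dueling_q(value, advantage):
    # V(s) has shape (batch, 1); A(s, a) has shape (batch, n_actions)
    return value + (advantage - advantage.mean(axis=1, keepdims=True))

value = np.array([[2.0]])
advantage = np.array([[1.0, 3.0]])  # mean advantage is 2.0
print(dueling_q(value, advantage))  # [[1. 3.]]
```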
# _Notes: I'm not going to try the code since my computer is not powerful and I cannot install Tensorflow on the lab comps. I am going to swap over to the lab computer now and try pytorch._
| tensorflow/Part 4 - Deep Q-Networks and Beyond.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import geopandas as gpd
facilities_df = pd.read_csv('/opt/src/data/who-cds-gmp-2019-01-eng-facilities.csv')
abbrv_df = pd.read_csv('/opt/src/data/who-cds-gmp-2019-01-eng-abbrv.csv')
facilities_df[facilities_df['Facility type'].isnull()]
abbrv = {}
for _, row in abbrv_df.iterrows():
abbrv[row['CBO']] = row['Community Based Organization']
abbrv
facilities_df['Ownership_full'] = facilities_df.apply(
lambda row: abbrv.get(row['Ownership'], row['Ownership']),
axis=1
)
facilities_df
facilities_gdf = gpd.GeoDataFrame(
facilities_df,
geometry=gpd.points_from_xy(facilities_df.Long, facilities_df.Lat)
)
facilities_gdf.to_file(
'/opt/src/data/published/africa_health_facilities.geojson',
encoding='utf-8',
driver='GeoJSON'
)
| data-processing/notebooks/capacity_tab-africa_health_facilities.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
# Init seaborn
sns.set()
# Read in data
data = pd.read_csv('tracks.csv')
data.head()
# Get metrics on the data
data.info()
# Test for empty values
data.isnull().sum()
# Clean the data
data['name'].fillna('Unknown Title', inplace=True)
# Test for empty values again
data.isnull().sum()
# Extract only the features
features = data.drop(columns=
['id', 'name', 'artists', 'id_artists', 'release_date']
)
# Get any correlations
features.corr()
# Normalize the numeric columns (from https://thecleverprogrammer.com/2021/03/03/spotify-recommendation-system-with-machine-learning/)
from sklearn.preprocessing import MinMaxScaler
datatypes = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
numeric = data.select_dtypes(include=datatypes)
normalized = MinMaxScaler().fit_transform(numeric)
# Cluster on the normalized features to semi-predict the genre (from https://thecleverprogrammer.com/2021/03/03/spotify-recommendation-system-with-machine-learning/)
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=10)
data['features'] = kmeans.fit_predict(normalized)
class Recommender():
def __init__(self, data):
self.data = data
    def recommend(self, song_name, amount=1):
        distance = []
        # take the first track whose title matches, then compare against all others
        song = self.data[(self.data.name.str.lower() == song_name.lower())].head(1).values[0]
        rec = self.data[self.data.name.str.lower() != song_name.lower()].copy()
        for row in tqdm(rec.values):
            d = 0
            for column in np.arange(len(rec.columns)):
                # skip the non-numeric columns (id, name, artists, id_artists, release_date)
                if column not in [0, 1, 5, 6, 7]:
                    d = d + np.absolute(float(song[column]) - float(row[column]))
            distance.append(d)
        rec['distance'] = distance
        rec = rec.sort_values('distance')
        columns = ['artists', 'name']
        return rec[columns][:amount]
recommendations = Recommender(data)
recommendations.recommend("Runaway", 20)
| tests/Spotify.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Keyboard shortcuts
#
# In this notebook, you'll get some practice using keyboard shortcuts. These are key to becoming proficient at using notebooks and will greatly increase your work speed.
#
# First up, switching between edit mode and command mode. Edit mode allows you to type into cells while command mode will use key presses to execute commands such as creating new cells and opening the command palette. When you select a cell, you can tell which mode you're currently working in by the color of the box around the cell. In edit mode, the box and thick left border are colored green. In command mode, they are colored blue. Also in edit mode, you should see a cursor in the cell itself.
#
# By default, when you create a new cell or move to the next one, you'll be in command mode. To enter edit mode, press Enter/Return. To go back from edit mode to command mode, press Escape.
#
# > **Exercise:** Click on this cell, then press Shift + Enter to get to the next cell. Switch between edit and command mode a few times.
# +
# mode practice
# -
# ## Help with commands
#
# If you ever need to look up a command, you can bring up the list of shortcuts by pressing `H` in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now.
# ## Creating new cells
#
# One of the most common commands is creating new cells. You can create a cell above the current cell by pressing `A` in command mode. Pressing `B` will create a cell below the currently selected cell.
# > **Exercise:** Create a cell above this cell using the keyboard command.
# > **Exercise:** Create a cell below this cell using the keyboard command.
# ## Switching between Markdown and code
#
# With keyboard shortcuts, it is quick and simple to switch between Markdown and code cells. To change from Markdown to code, press `Y`. To switch from code to Markdown, press `M`.
#
# > **Exercise:** Switch the cell below between Markdown and code cells.
# +
## Practice here
def fibo(n): # Recursive Fibonacci sequence!
if n == 0:
return 0
elif n == 1:
return 1
return fibo(n-1) + fibo(n-2)
# -
# ## Line numbers
#
# A lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing `L` (in command mode of course) on a code cell.
#
# > **Exercise:** Turn line numbers on and off in the above code cell.
# ## Deleting cells
#
# Deleting cells is done by pressing `D` twice in a row, so `D`, `D`. To prevent accidental deletions, you have to press the key twice!
#
# > **Exercise:** Delete the cell below.
# +
# DELETE ME
# -
# ## Saving the notebook
#
# Notebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the book, press `S`. So easy!
# ## The Command Palette
#
# You can easily access the command palette by pressing Shift + Control/Command + `P`.
#
# > **Note:** This won't work in Firefox and Internet Explorer unfortunately. There is already a keyboard shortcut assigned to those keys in those browsers. However, it does work in Chrome and Safari.
#
# This will bring up the command palette where you can search for commands that aren't available through the keyboard shortcuts. For instance, there are buttons on the toolbar that move cells up and down (the up and down arrows), but there aren't corresponding keyboard shortcuts. To move a cell down, you can open up the command palette and type in "move" which will bring up the move commands.
#
# > **Exercise:** Use the command palette to move the cell below down one position.
# +
# Move this cell down
# +
# below this cell
# -
# ## Finishing up
#
# There is plenty more you can do such as copying, cutting, and pasting cells. I suggest getting used to using the keyboard shortcuts, you’ll be much quicker at working in notebooks. When you become proficient with them, you'll rarely need to move your hands away from the keyboard, greatly speeding up your work.
#
# Remember, if you ever need to see the shortcuts, just press `H` in command mode.
#
| jupyter-notebook/keyboard-shortcuts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.12 64-bit (''env0'': conda)'
# name: python3
# ---
# +
from newspaper import Article
# import nltk
# nltk.download('punkt')
# -
def summarize(url):
article = Article(url)
article.download()
article.parse()
article.nlp()
print('\nTitle\n', article.title)
print('\nSUMMARY\n', article.summary)
print('\nKEYWORDS\n', article.keywords)
return article.summary
a = summarize('https://www.cbc.ca/news/politics/vote-counting-elections-canada-2021-1.6179742')
# +
def complexity(summary, n_sentences):
    # keep only the first n_sentences sentences of the summary
    sens = summary.split('. ')
    full = []
    for i in sens[0:n_sentences]:
        print(i, end='')
        full.append(i)
    final = ''
    for i in full:
        final += i + '.'
    return final
complexity(a, 2)
# -
res = complexity(a, 2)
print(res)
a.split('.\n')
| notebooks/summarizing_articles.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # About this book
#
# The github repository for this book is [https://github.com/ststewart/synestiabook2](https://github.com/ststewart/synestiabook2).<p>
#
# This work is distributed under the MIT license:
#
# ```
# MIT License
#
# Copyright (c) 2020 <NAME>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ```
#
#
# This book was compiled using the following python ecosystem:
# + tags=["hide-input"]
# Record the verions information for this book for posterity
import platform
print('python version: ',platform.python_version())
del platform
import matplotlib
print('matplotlib version: ', matplotlib.__version__)
del matplotlib
import numpy
print('numpy version: ', numpy.__version__)
del numpy
import scipy
print('scipy version: ', scipy.__version__)
del scipy
import rebound
print('rebound version: ', rebound.__version__)
del rebound
import ipywidgets
print('ipywidgets version: ', ipywidgets.__version__)
del ipywidgets
# !jupyter-book --version
# -
# Change log:<br>
# 10/17/2020: Removed dependence on synfits module in 02Physical_Properties_of_Synestias<br>
#
# This book was updated on
# + tags=["hide-input"]
# !date
| synestia-book/docs/.ipynb_checkpoints/AboutThisBook-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/slightperturbation/ml_examples/blob/master/ML_Example_Trivial_Linear_Regression_from_In_memory_Array.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="wcPT4oqNJBKS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="8f923743-df2a-4299-9996-2afb13d61876"
import tensorflow as tf
import numpy as np
x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [2.*xi + 1.2 for xi in x]
print("Learning to map ", x, " into ", y)
model = tf.keras.Sequential(
[
tf.keras.layers.Dense(1, activation='linear', input_shape=[1])
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01), loss='mse')
history = model.fit(x, y, epochs=500, verbose=0)
print("Result: ", float(model.weights[0]), "x + ", float(model.weights[1]))
| ML_Example_Trivial_Linear_Regression_from_In_memory_Array.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## The Sinking of the Titanic
#
# This is a classification task whose features include both discrete and continuous variables; the data is here: [Kaggle page](https://www.kaggle.com/c/titanic/data). The goal is to predict from the data features whether a person survived the sinking of the Titanic. The data format is explained below:
#
# ```
# survival    target column, whether the passenger survived, 1 means survived (0 = No; 1 = Yes)
# pclass      cabin class (1 = 1st; 2 = 2nd; 3 = 3rd)
# name        passenger name
# sex         sex
# age         age
# sibsp       number of siblings aboard
# parch       number of parents aboard
# ticket      ticket number
# fare        ticket fare
# cabin       cabin
# embarked    port of embarkation
#             (C = Cherbourg; Q = Queenstown; S = Southampton)
# ```
# ## Load and Analyze the Data
# +
# -*- coding: UTF-8 -*-
# %matplotlib inline
import pandas as pd
import string
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing
# +
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
def substrings_in_string(big_string, substrings):
for substring in substrings:
if string.find(big_string, substring) != -1:
return substring
return np.nan
def replace_titles(x):
title=x['Title']
if title in ['Mr','Don', 'Major', 'Capt', 'Jonkheer', 'Rev', 'Col']:
return 'Mr'
elif title in ['Master']:
return 'Master'
elif title in ['Countess', 'Mme','Mrs']:
return 'Mrs'
elif title in ['Mlle', 'Ms','Miss']:
return 'Miss'
    elif title =='Dr':
        # note: the Sex column in the Kaggle data is lowercase ('male'/'female')
        if x['Sex']=='male':
            return 'Mr'
        else:
            return 'Mrs'
    elif title =='':
        if x['Sex']=='male':
            return 'Master'
        else:
            return 'Miss'
else:
return title
title_list = ['Mrs', 'Mr', 'Master', 'Miss', 'Major', 'Rev',
'Dr', 'Ms', 'Mlle','Col', 'Capt', 'Mme', 'Countess',
'Don', 'Jonkheer']
# -
label = train['Survived'] # target column
# ### Preview of the discrete features Pclass, Sex and Embarked
# Name, Ticket and Cabin are also discrete features, but we will not use them for now; intuitively, a passenger's name should have little to do with surviving the accident.
# Next, let's analyze each feature:
train.groupby(['Pclass'])['PassengerId'].count().plot(kind='bar')
train.groupby(['SibSp'])['PassengerId'].count().plot(kind='bar')
train.groupby(['Parch'])['PassengerId'].count().plot(kind='bar')
train.groupby(['Embarked'])['PassengerId'].count().plot(kind='bar')
train.groupby(['Sex'])['PassengerId'].count().plot(kind='bar')
# ### Handling the Continuous Features
# Age and Fare are continuous features. We inspect their distributions for missing and abnormal values; Age contains missing values, which we fill with the column mean.
print 'Check for missing values:'
print train[train['Age'].isnull()]['Age'].head()
print train[train['Fare'].isnull()]['Fare'].head()
print train[train['SibSp'].isnull()]['SibSp'].head()
print train[train['Parch'].isnull()]['Parch'].head()
train['Age'] = train['Age'].fillna(train['Age'].mean())
print 'Check again after filling:'
print train[train['Age'].isnull()]['Age'].head()
print train[train['Fare'].isnull()]['Fare'].head()
print 'Check the test set for missing values:'
print test[test['Age'].isnull()]['Age'].head()
print test[test['Fare'].isnull()]['Fare'].head()
print test[test['SibSp'].isnull()]['SibSp'].head()
print test[test['Parch'].isnull()]['Parch'].head()
test['Age'] = test['Age'].fillna(test['Age'].mean())
test['Fare'] = test['Fare'].fillna(test['Fare'].mean())
print 'Check again after filling:'
print test[test['Age'].isnull()]['Age'].head()
print test[test['Fare'].isnull()]['Fare'].head()
# +
# Build the Title feature
train['Title'] = train['Name'].map(lambda x: substrings_in_string(x, title_list))
test['Title'] = test['Name'].map(lambda x: substrings_in_string(x, title_list))
train['Title'] = train.apply(replace_titles, axis=1)
test['Title'] = test.apply(replace_titles, axis=1)
# Family features
train['Family_Size'] = train['SibSp'] + train['Parch']
train['Family'] = train['SibSp'] * train['Parch']
test['Family_Size'] = test['SibSp'] + test['Parch']
test['Family'] = test['SibSp'] * test['Parch']
# +
train['AgeFill'] = train['Age']
mean_ages = np.zeros(4)
mean_ages[0] = np.average(train[train['Title'] == 'Miss']['Age'].dropna())
mean_ages[1] = np.average(train[train['Title'] == 'Mrs']['Age'].dropna())
mean_ages[2] = np.average(train[train['Title'] == 'Mr']['Age'].dropna())
mean_ages[3] = np.average(train[train['Title'] == 'Master']['Age'].dropna())
train.loc[ (train.Age.isnull()) & (train.Title == 'Miss') ,'AgeFill'] = mean_ages[0]
train.loc[ (train.Age.isnull()) & (train.Title == 'Mrs') ,'AgeFill'] = mean_ages[1]
train.loc[ (train.Age.isnull()) & (train.Title == 'Mr') ,'AgeFill'] = mean_ages[2]
train.loc[ (train.Age.isnull()) & (train.Title == 'Master') ,'AgeFill'] = mean_ages[3]
train['AgeCat'] = train['AgeFill']
train.loc[ (train.AgeFill<=10), 'AgeCat'] = 'child'
train.loc[ (train.AgeFill>60), 'AgeCat'] = 'aged'
train.loc[ (train.AgeFill>10) & (train.AgeFill <=30) ,'AgeCat'] = 'adult'
train.loc[ (train.AgeFill>30) & (train.AgeFill <=60) ,'AgeCat'] = 'senior'
train['Fare_Per_Person'] = train['Fare'] / (train['Family_Size'] + 1)
# +
test['AgeFill'] = test['Age']
mean_ages = np.zeros(4)
mean_ages[0] = np.average(test[test['Title'] == 'Miss']['Age'].dropna())
mean_ages[1] = np.average(test[test['Title'] == 'Mrs']['Age'].dropna())
mean_ages[2] = np.average(test[test['Title'] == 'Mr']['Age'].dropna())
mean_ages[3] = np.average(test[test['Title'] == 'Master']['Age'].dropna())
test.loc[ (test.Age.isnull()) & (test.Title == 'Miss') ,'AgeFill'] = mean_ages[0]
test.loc[ (test.Age.isnull()) & (test.Title == 'Mrs') ,'AgeFill'] = mean_ages[1]
test.loc[ (test.Age.isnull()) & (test.Title == 'Mr') ,'AgeFill'] = mean_ages[2]
test.loc[ (test.Age.isnull()) & (test.Title == 'Master') ,'AgeFill'] = mean_ages[3]
test['AgeCat'] = test['AgeFill']
test.loc[ (test.AgeFill<=10), 'AgeCat'] = 'child'
test.loc[ (test.AgeFill>60), 'AgeCat'] = 'aged'
test.loc[ (test.AgeFill>10) & (test.AgeFill <=30) ,'AgeCat'] = 'adult'
test.loc[ (test.AgeFill>30) & (test.AgeFill <=60) ,'AgeCat'] = 'senior'
test['Fare_Per_Person'] = test['Fare'] / (test['Family_Size'] + 1)
# +
train.Embarked = train.Embarked.fillna('S')
test.Embarked = test.Embarked.fillna('S')
train.loc[ train.Cabin.isnull() == True, 'Cabin'] = 0.2
train.loc[ train.Cabin.isnull() == False, 'Cabin'] = 1
test.loc[ test.Cabin.isnull() == True, 'Cabin'] = 0.2
test.loc[ test.Cabin.isnull() == False, 'Cabin'] = 1
# +
#Age times class
train['AgeClass'] = train['AgeFill'] * train['Pclass']
train['ClassFare'] = train['Pclass'] * train['Fare_Per_Person']
train['HighLow'] = train['Pclass']
train.loc[ (train.Fare_Per_Person < 8) ,'HighLow'] = 'Low'
train.loc[ (train.Fare_Per_Person >= 8) ,'HighLow'] = 'High'
#Age times class
test['AgeClass'] = test['AgeFill'] * test['Pclass']
test['ClassFare'] = test['Pclass'] * test['Fare_Per_Person']
test['HighLow'] = test['Pclass']
test.loc[ (test.Fare_Per_Person < 8) ,'HighLow'] = 'Low'
test.loc[ (test.Fare_Per_Person >= 8) ,'HighLow'] = 'High'
# -
print train.head(1)
# print test.head()
# ## Feature Engineering
# +
# Process the training set
Pclass = pd.get_dummies(train.Pclass)
Sex = pd.get_dummies(train.Sex)
Embarked = pd.get_dummies(train.Embarked)
Title = pd.get_dummies(train.Title)
AgeCat = pd.get_dummies(train.AgeCat)
HighLow = pd.get_dummies(train.HighLow)
train_data = pd.concat([Pclass, Sex, Embarked, Title, AgeCat, HighLow], axis=1)
train_data['Age'] = train['Age']
train_data['Fare'] = train['Fare']
train_data['SibSp'] = train['SibSp']
train_data['Parch'] = train['Parch']
train_data['Family_Size'] = train['Family_Size']
train_data['Family'] = train['Family']
train_data['AgeFill'] = train['AgeFill']
train_data['Fare_Per_Person'] = train['Fare_Per_Person']
train_data['Cabin'] = train['Cabin']
train_data['AgeClass'] = train['AgeClass']
train_data['ClassFare'] = train['ClassFare']
cols = ['Age', 'Fare', 'SibSp', 'Parch', 'Family_Size', 'Family', 'AgeFill', 'Fare_Per_Person', 'AgeClass', 'ClassFare']
train_data[cols] = train_data[cols].apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
print train_data.head()
# Process the test set
Pclass = pd.get_dummies(test.Pclass)
Sex = pd.get_dummies(test.Sex)
Embarked = pd.get_dummies(test.Embarked)
Title = pd.get_dummies(test.Title)
AgeCat = pd.get_dummies(test.AgeCat)
HighLow = pd.get_dummies(test.HighLow)
test_data = pd.concat([Pclass, Sex, Embarked, Title, AgeCat, HighLow], axis=1)
test_data['Age'] = test['Age']
test_data['Fare'] = test['Fare']
test_data['SibSp'] = test['SibSp']
test_data['Parch'] = test['Parch']
test_data['Family_Size'] = test['Family_Size']
test_data['Family'] = test['Family']
test_data['AgeFill'] = test['AgeFill']
test_data['Fare_Per_Person'] = test['Fare_Per_Person']
test_data['Cabin'] = test['Cabin']
test_data['AgeClass'] = test['AgeClass']
test_data['ClassFare'] = test['ClassFare']
test_data[cols] = test_data[cols].apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
print test_data.head()
# -
# ## Model Training
# +
from sklearn.linear_model import LogisticRegression as LR
from sklearn.cross_validation import cross_val_score
from sklearn.naive_bayes import GaussianNB as GNB
from sklearn.ensemble import RandomForestClassifier
import numpy as np
# -
# ### Logistic Regression
# +
model_lr = LR(penalty = 'l2', dual = True, random_state = 0)
model_lr.fit(train_data, label)
print "逻辑回归10折交叉验证得分: ", np.mean(cross_val_score(model_lr, train_data, label, cv=10, scoring='roc_auc'))
result = model_lr.predict( test_data )
output = pd.DataFrame( data={"PassengerId":test["PassengerId"], "Survived":result} )
output.to_csv( "lr.csv", index=False, quoting=3 )
# -
# #### Accuracy after submitting to Kaggle: 0.78469
# ### Gaussian Naive Bayes
# +
model_GNB = GNB()
model_GNB.fit(train_data, label)
print "高斯贝叶斯分类器10折交叉验证得分: ", np.mean(cross_val_score(model_GNB, train_data, label, cv=10, scoring='roc_auc'))
result = model_GNB.predict( test_data )
output = pd.DataFrame( data={"PassengerId":test["PassengerId"], "Survived":result} )
output.to_csv( "gnb.csv", index=False, quoting=3 )
# -
# #### Accuracy after submitting to Kaggle: 0.74163
# ### Random Forest
# +
forest = RandomForestClassifier( n_estimators=500, criterion='entropy', max_depth=5, min_samples_split=1,
min_samples_leaf=1, max_features='auto', bootstrap=False, oob_score=False, n_jobs=4,
verbose=0)
# %time forest = forest.fit( train_data, label )
print "随机森林分类器10折交叉验证得分: ", np.mean(cross_val_score(forest, train_data, label, cv=10, scoring='roc_auc'))
result = forest.predict( test_data )
output = pd.DataFrame( data={"PassengerId":test["PassengerId"], "Survived":result} )
output.to_csv( "rf.csv", index=False, quoting=3 )
# -
# #### Accuracy after submitting to Kaggle: 0.76555
# --------
#
# ## Finding the Best Parameters
# +
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import train_test_split,StratifiedShuffleSplit,StratifiedKFold
param_grid = dict( )
pipeline=Pipeline([ ('clf', forest) ])
grid_search = GridSearchCV(pipeline, param_grid=param_grid, verbose=3, scoring='accuracy',
cv=StratifiedShuffleSplit(label, n_iter=10, test_size=0.2, train_size=None)).fit(train_data, label)
print("Best score: %0.3f" % grid_search.best_score_)
| competitions/Titanic/titanic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from scipy import signal
from scipy import fftpack
import sys
# insert at 1, 0 is the script path (or '' in REPL)
sys.path.insert(1, '../dependencies/')
from plotting import *
# -
z = np.arange(-5,5,0.01)
N = 1 / np.sqrt(2 * np.pi) * np.exp(-0.5 * (z)**2)
generate_plot(z,N,
[''],'$Z$-Score','Probability',showplot=True,
template='wide',
ymax=0.01,
save_plot=True,
transparent=True,
num_col=2,
folder='figures/Control_Charts',
filename='Normal_Distribution',
tick_increment=1,
file_type='png')
| Dissertation-Notebooks/Chapter-2/Control Chart.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _uuid="e34ef6128dcfff76f57c9b7d456b3f5ec1627a51"
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
sns.set_style('darkgrid')
sns.set(font_scale=1.6)
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
# + _uuid="3b097d479b9d7c8c6c842f208452cbfb0a8df973"
from kaggle.competitions import twosigmanews
env = twosigmanews.make_env()
(market_train_df, news_train_df) = env.get_training_data()
news_train_df['subjects'] = news_train_df['subjects'].str.findall(r"'([\w\./]+)'")
news_train_df['audiences'] = news_train_df['audiences'].str.findall(r"'([\w\./]+)'")
news_train_df['assetCodes'] = news_train_df['assetCodes'].str.findall(r"'([\w\./]+)'")
(market_train_df.shape, news_train_df.shape)
# + _uuid="773d2c262e09f4941205404530836d53de8ba2b2"
# We will reduce the number of samples for memory reasons
toy = False
if toy:
market_train_df = market_train_df.tail(100_000)
news_train_df = news_train_df.tail(300_000)
else:
market_train_df = market_train_df.tail(3_000_000)
news_train_df = news_train_df.tail(6_000_000)
(market_train_df.shape, news_train_df.shape)
# + [markdown] _uuid="7a14ab84bb84884c6fc2360c89991e137ce443c6"
# # Data Generator
# + _uuid="5ea137f942ac9eea666458fa5703445e1df1fe79"
def generator(data, lookback, min_index, max_index,
shuffle=False, batch_size=128, step=1):
if max_index is None:
max_index = data.shape[0] - 1
i = min_index + lookback
while True:
# Select the rows that will be used for the batch
if shuffle:
            rows = np.random.randint(min_index + lookback, max_index, size=batch_size)
else:
rows = np.arange(i, min(i + batch_size, max_index))
i += rows.shape[0]
if i + batch_size >= max_index:
i = min_index + lookback
samples = np.zeros((len(rows),
lookback // step,
data.shape[-1]))
targets = np.zeros((len(rows),))
for j, row in enumerate(rows):
indices = range(rows[j] - lookback, rows[j], step)
samples[j] = data[indices]
targets[j] = data[rows[j]][1]
yield samples, targets
# + [markdown] _uuid="efe68f0b5c3eaafd7def4815eb96451e6bb9976a"
# # Learning the Models
# + [markdown] _uuid="d980ff39c1d44b544ce427877bed73676c51fe01"
# Each model is trained on a particular asset. The time series data for that particular asset needs to be normalized.
# + _uuid="03f5cfb992a821a3e53e7aaa4139caf50f776b30"
import warnings
from keras.callbacks import Callback
class EarlyStoppingByLossVal(Callback):
    def __init__(self, monitor='val_loss', value=0.00001, verbose=0):
        super(EarlyStoppingByLossVal, self).__init__()
        self.monitor = monitor
        self.value = value
        self.verbose = verbose
    def on_epoch_end(self, epoch, logs={}):
        current = logs.get(self.monitor)
        if current is None:
            warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
            return
        if current < self.value:
            if self.verbose > 0:
                print("Epoch %05d: early stopping THR" % epoch)
            self.model.stop_training = True
# + _uuid="1fc9313345a93e94a9b5df6f04425e8550276cd4"
from tqdm import tqdm
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop, Adagrad
lookback = 30
batch_size = 1024
steps_per_epoch = 100
epochs = 10
data_split = 0.8
step = 1
def generators(market_data_float):
l = market_data_float.shape[0]
train_gen = generator(market_data_float,
min_index=0,
max_index=int(data_split * l),
batch_size=batch_size,
lookback=lookback,
step=step)
val_gen = generator(market_data_float,
min_index=int(data_split * l),
max_index=None,
batch_size=batch_size,
lookback=lookback,
step=step)
return (train_gen, val_gen)
def learn_model(market_data_float):
(train_gen, val_gen) = generators(market_data_float)
input_shape = (None, market_data_float.shape[-1])
model = Sequential()
model.add(layers.GRU(4, input_shape=input_shape))
model.add(layers.Dense(1))
model.compile(optimizer=Adagrad(), loss='mae')
callbacks = [EarlyStoppingByLossVal(monitor='val_loss', value=0.00001, verbose=1)]
    history = model.fit_generator(train_gen,
                                  steps_per_epoch=steps_per_epoch,
                                  epochs=epochs,
                                  validation_data=val_gen,
                                  validation_steps=100,
                                  callbacks=callbacks)
return (model, history)
def learn_models(market_train_df, Histories):
for asset_code, market_data in tqdm(market_train_df.groupby('assetCode')):
# drop the non-numeric columns and handle the nans.
market_float_data = market_data.drop(['assetCode', 'assetName', 'time'], axis=1).fillna(0)
# normalize the data
scaler = StandardScaler().fit(market_float_data)
# learn a model using the normalized data
(model, history) = learn_model(scaler.transform(market_float_data))
# save the history
Histories[asset_code] = history
yield asset_code, (scaler, model)
# + [markdown] _uuid="dc7064ad2ce5a095983d037b2a5bc603cd7752b2"
# # Random Sample the Assets
# + _uuid="f148143adef195eab68b76ea083f84bbb7c426e4"
n = 5
n_random_assets = np.random.choice(market_train_df.assetCode.unique(), n, replace=False)
market_train_sampled_df = market_train_df[market_train_df.assetCode.isin(n_random_assets)]
# + _uuid="9beb8beacfd4fafb2d3dfa2d1df070dc7f95c04a"
(fig, ax) = plt.subplots(figsize=(15, 8))
market_train_sampled_df.groupby('assetCode').plot(x='time', y='close', ax=ax)
ax.legend(n_random_assets)
plt.xlabel('time')
plt.ylabel('close')
plt.title('Closing Price of %i random assets' % n)
plt.show()
# + _uuid="cb8d9bf794909134085839b7c841fcbe0d14350e"
Histories = {}
Models = dict(learn_models(market_train_sampled_df, Histories))
# + [markdown] _uuid="8584410369c7f6a541bd2238604bb14cf27a8eea"
# # Unscaled loss plots
# + _uuid="88d1d7e11be1b141fa6a7e41629a0078608e5be2"
for asset, history in Histories.items():
(fig, ax) = plt.subplots(figsize=(15, 8))
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title(asset)
plt.legend()
plt.show()
# Source notebook: 3 mu sigma using news to predict news/lstm-baseline.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Use greenflow to Build cuStreamz Examples
#
# The stream processing market is experiencing exponential growth, with businesses relying heavily on real-time analytics, inferencing, monitoring, and more. [cuStreamz](https://medium.com/rapids-ai/gpu-accelerated-stream-processing-with-rapids-f2b725696a61) is the first GPU-accelerated streaming data processing library. Written in Python, it is built on top of RAPIDS, the suite of GPU-accelerated data science libraries. cuStreamz accelerates [Streamz](https://streamz.readthedocs.io/en/latest/) by leveraging RAPIDS cuDF under the hood to perform computations on streaming data on GPUs. The Python library Streamz helps you build pipelines to manage continuous streams of data.
#
# It is easy to wrap cuStreamz streaming pipelines into greenflow nodes and visualize them. In this notebook, we will build a streaming pipeline example in greenflow step by step. You will learn how to write a basic greenflow node, register it for the greenflow Jupyterlab plugin, and use the GPU to accelerate the computations.
#
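# The push-based model that Streamz implements can be sketched in a few lines of plain Python. The `ToyStream` class below is a hand-rolled stand-in, not the streamz API itself: each `emit` pushes a value through the registered `map`/`sink` callbacks, just as the real library does.

```python
class ToyStream:
    """A tiny push-based stream: values flow downstream on emit()."""
    def __init__(self):
        self.downstream = []

    def map(self, fn):
        # create a child stream and forward transformed values to it
        child = ToyStream()
        self.downstream.append(lambda x: child.emit(fn(x)))
        return child

    def sink(self, fn):
        # call fn for every value that reaches this stream
        self.downstream.append(fn)
        return self

    def emit(self, x):
        for f in self.downstream:
            f(x)

s = ToyStream()
out = []
s.map(lambda x: x * 2).sink(out.append)
for i in range(3):
    s.emit(i)
print(out)  # [0, 2, 4]
```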
# Let's first import the streamz and greenflow libraries.
import streamz
import streamz.dataframe
import cudf
from greenflow.dataframe_flow import Node
from greenflow.dataframe_flow import NodePorts, PortsSpecSchema, TaskSpecSchema
from greenflow.dataframe_flow import ConfSchema
from greenflow.dataframe_flow import TaskGraph
from greenflow.dataframe_flow import MetaData
# ## Double the Streaming Numbers by Streamz
#
# A greenflow node is defined by inheriting from the `Node` class and overriding the methods `init`, `meta_setup`, `ports_setup`, `conf_schema`, and `process`.
#
# The `ports_setup` must return an instance of `NodePorts` which encapsulates the ports specs. Ports specs are dictionaries with port attributes/options per `PortsSpecSchema`.
#
# The `process` method receives an input dictionary where keys are input ports and values are input data. It returns a dictionary where the keys correspond to the output ports.
#
# The `meta_setup` is used to compute the output meta information. It returns a dictionary where keys correspond to the output ports.
#
# The `conf_schema` is used to define the Node configuration [JSON schema](https://json-schema.org/). The greenflowlab UI uses the [RJSF](https://github.com/rjsf-team/react-jsonschema-form) project to generate HTML form elements based on the JSON schema. The [RJSF playground](https://rjsf-team.github.io/react-jsonschema-form/) is a good place to learn how to write JSON schema and visualize it. `conf_schema` returns a `ConfSchema`, which encapsulates the JSON schema and UI schema together.
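# As a hedged sketch, the payload for a node with one integer option might look like the following (the property names here are illustrative, not from the greenflow codebase):

```python
# JSON schema describing the node's configuration form
json_schema = {
    "title": "Example node configure",
    "type": "object",
    "properties": {
        "window": {"type": "integer", "title": "Window size"}
    },
    "required": ["window"],
}

# RJSF uiSchema hint controlling how the field is rendered
ui_schema = {"window": {"ui:widget": "updown"}}
```

# In a node, these two dictionaries would be passed to `ConfSchema(json=json_schema, ui=ui_schema)`.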
#
# The column and port-type information is sometimes determined dynamically. greenflow provides a few utility functions to help get dynamic graph information. `self.get_connected_inports()` returns a dictionary where keys are connected inport names and values are inport types.
# `self.get_input_meta()` returns a dictionary where keys are connected inport names and values are column name/type pairs from the parent node. The `self.outport_connected(port_name)` method returns a boolean indicating whether the output port `port_name` is connected.
#
# As a `hello-world` example for streamz, we will build a pipeline that doubles a continuous stream of numbers and prints the results. We need three nodes: a source, a map stream, and a sink.
#
# ### Define the Source Stream Node
class StreamNode(Node):
def ports_setup(self):
input_ports = {}
output_ports = {
'stream_out': {
PortsSpecSchema.port_type: streamz.Stream
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def conf_schema(self):
json = {
"title": "Source node configure",
"type": "object",
"properties": {}
}
ui = {}
return ConfSchema(json=json, ui=ui)
def init(self):
pass
def meta_setup(self):
columns_out = {
'stream_out': {'element': 'number'}
}
return MetaData(inports={}, outports=columns_out)
def process(self, inputs):
output = {'stream_out': streamz.Stream()}
return output
# As shown above, this source node has one output port, which outputs a `streamz.Stream` object. It has no configuration, so we leave the schema empty. In the `meta_setup` method, we specify that the element type in the stream is a number. A downstream node can connect to it only if its `MetaData.inports` has the same key-value pair. A `Stream` can carry many types of data, so knowing that something is a `streamz.Stream` is not enough to know what is actually being streamed. The columns setup lets us enforce meta-typechecking: above, we declare that the stream output contains `{'element': 'number'}`, which is just a custom type specification. When the outputs are dataframes (pandas, cuDF, Dask dataframes), the columns have a concrete meaning, i.e. which columns are present in the dataframes and what their types are.
#
# In the `process` method, `StreamNode` outputs the `stream.Stream()` as the source end of the pipeline. Later, we will use the `emit` method to add numbers to it.
# ### Define the Double Element Node
class TransformNode(Node):
def ports_setup(self):
input_ports = {
'stream_in': {
PortsSpecSchema.port_type: streamz.Stream
}
}
output_ports = {
'stream_out': {
PortsSpecSchema.port_type: streamz.Stream
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def conf_schema(self):
json = {
"title": "Transform Node configure",
"type": "object",
"properties": {}
}
ui = {}
return ConfSchema(json=json, ui=ui)
def init(self):
pass
def meta_setup(self):
required = {
'stream_in': {'element': 'number'}
}
columns_out = {
'stream_out': {'element': 'number'}
}
return MetaData(inports=required, outports=columns_out)
def process(self, inputs):
in_stream = inputs['stream_in']
def double(ele):
return ele*2
output = {'stream_out': in_stream.map(double)}
return output
# The `TransformNode` definition is similar to `StreamNode`, but it has an input port `stream_in`. It defines the key-value pair `element->number` in the `MetaData.inports` dictionary, so it is compatible with the source node.
#
# In the `process` method, it maps each element of the stream through the `double` function.
# ### Define the Print Elements Node
class SinkNode(Node):
def ports_setup(self):
input_ports = {
'stream_in': {
PortsSpecSchema.port_type: streamz.Stream
}
}
output_ports = {
'stream_out': {
PortsSpecSchema.port_type: streamz.Stream
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def conf_schema(self):
json = {
"title": "Simple SinkNode configure",
"type": "object",
"properties": {}
}
ui = {}
return ConfSchema(json=json, ui=ui)
def init(self):
pass
def meta_setup(self):
icols = self.get_input_meta()
try:
icoltype = icols['stream_in']['element']
except KeyError:
icoltype = None
required = {'stream_in': {}}
if icoltype in ('number', 'numbers'):
required['stream_in']['element'] = icoltype
elif icoltype is not None:
required['stream_in']['element'] = 'number'
columns_out = {
'stream_out': {'element': 'number'}
}
return MetaData(inports=required, outports=columns_out)
def process(self, inputs):
in_stream = inputs['stream_in']
def frame_counter(frame_counter, element):
return (frame_counter + 1, (frame_counter, element))
def print_sink(counter_element):
counter = counter_element[0]
element = counter_element[1]
print('Frame {}: {}'.format(counter, element))
stream_out = in_stream\
.accumulate(frame_counter, returns_state=True, start=0)\
.sink(print_sink)
output = {'stream_out': stream_out}
# output = {'stream_out': in_stream.sink(print)}
return output
# The `SinkNode` `process` method is relatively simple: an accumulate is used to keep a frame counter, and the sink prints each element with its corresponding frame counter.
#
# The `meta_setup` is slightly more involved. `SinkNode` below also supports streaming `numbers`, i.e. a set of numbers, as well as one number at a time, and it can do this dynamically. Initially its required setup is `{'stream_in': {}}`, so it can connect to anything that produces a `streamz.Stream`. Once it is connected, however, the required columns setting is enforced to be either `'number'` or `'numbers'`, matched to the connected port. If neither `'number'` nor `'numbers'` matches the connected port, then `'number'` is set. Unless the connected port specifies `'number'` or `'numbers'` as its output meta, this node will raise a `LookupError` for the unexpected columns type. This validation is done when the taskgraph run is invoked, during an intermediate stage after the taskgraph build but prior to calling the process computations of the nodes within the taskgraph.
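# The accumulate-with-state pattern used in `process` can be sketched in plain Python (a hand-rolled illustration, independent of streamz): the accumulator returns a pair of (new state, emitted value).

```python
def frame_counter(state, element):
    # state is the running frame count; emit (frame, element) pairs
    return state + 1, (state, element)

state = 0
emitted = []
for element in ['a', 'b', 'c']:
    state, out = frame_counter(state, element)
    emitted.append(out)

print(emitted)  # [(0, 'a'), (1, 'b'), (2, 'c')]
```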
# ### Define the Graph Programmatically
#
# Having these three nodes, we can construct a simple task graph to connect them. We add an `Output_Collector` node so that we can get a handle to both the source and the sink.
# +
module_name = 'streamz'
source_spec = {
TaskSpecSchema.task_id: 'source',
TaskSpecSchema.node_type: StreamNode,
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {},
TaskSpecSchema.module: module_name
}
task_spec = {
TaskSpecSchema.task_id: 'double',
TaskSpecSchema.node_type: TransformNode,
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {
'stream_in': 'source.stream_out'
},
TaskSpecSchema.module: module_name
}
sink_spec = {
TaskSpecSchema.task_id: 'sink',
TaskSpecSchema.node_type: SinkNode,
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {
'stream_in': 'double.stream_out'
},
TaskSpecSchema.module: module_name
}
out_spec = {
TaskSpecSchema.task_id: '',
TaskSpecSchema.node_type: "Output_Collector",
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {
'in0': 'source.stream_out',
'in1': 'sink.stream_out',
}
}
task_list = [source_spec, task_spec, sink_spec, out_spec]
# -
# ### Dynamically Register the Greenflow Nodes
#
# The task graph defined above is ready to be evaluated programmatically by
task_graph = TaskGraph(task_list)
task_graph.run()
# The result has both the source and sink nodes stored as a dictionary.
#
# To use the task graph in the greenflow Jupyterlab plugin, we can either put the nodes in a Python file and edit the `greenflowrc` file to register them (you can find an example in the `05_customize_nodes_with_ports` notebook), or dynamically register them with the `register_lab_node` method.
TaskGraph.register_lab_node(module_name, StreamNode)
TaskGraph.register_lab_node(module_name, TransformNode)
TaskGraph.register_lab_node(module_name, SinkNode)
# We can visualize this simple task graph:
task_graph = TaskGraph(task_list)
task_graph.draw()
# Let's evaluate it and get the source node. We will stream numbers by calling the `emit` method on the `source` stream.
r = task_graph.run()
# The result is saved to variable `r`. It can be used as a dictionary. Let's find the keys:
r.get_keys()
# We can emit some numbers to the source and see that the stream processing is in effect.
for i in range(10):
r['source.stream_out'].emit(i)
# ## Show the Doubling Results in a Sliding Window
#
# Imagine you want to build a monitor that watches the latest 50 numbers in the stream. The Streamz library provides a few stream processing nodes that make this easy. Let's add a sliding window node that groups the numbers into a sliding window of a defined length.
# ### Define the Sliding Window Node
class SlideWindowNode(Node):
def ports_setup(self):
input_ports = {
'stream_in': {
PortsSpecSchema.port_type: streamz.Stream
}
}
output_ports = {
'stream_out': {
PortsSpecSchema.port_type: streamz.Stream
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def conf_schema(self):
json = {
"title": "SlideWindow Node configure",
"type": "object",
"properties": {
"window": {
"type": "integer"
}
}
}
ui = {}
return ConfSchema(json=json, ui=ui)
def init(self):
pass
def meta_setup(self):
required = {
'stream_in': {'element': 'number'}
}
columns_out = {
'stream_out': {'element': 'numbers'}
}
return MetaData(inports=required, outports=columns_out)
def process(self, inputs):
in_stream = inputs['stream_in']
stream_out = in_stream.sliding_window(self.conf['window'],
return_partial=False)
output = {'stream_out': stream_out}
return output
# It is similar to the `TransformNode`. In the `process` method, it uses the `in_stream.sliding_window` method to group the elements in a sliding window. In `conf_schema`, we define a `window` field of type `integer`, which sets the sliding window size. Because it buffers the numbers in a window, `columns_out` has element type `numbers` instead of `number`. Note that the key-value strings can be arbitrary as long as compatible input/output ports share the same key-value pairs.
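# The sliding-window grouping itself is easy to sketch in plain Python (a hand-rolled generator, not the streamz implementation): a window is emitted only once it is full, mirroring `return_partial=False`.

```python
from collections import deque

def sliding_windows(iterable, window):
    # keep at most `window` recent elements; emit only full windows
    buf = deque(maxlen=window)
    for x in iterable:
        buf.append(x)
        if len(buf) == window:
            yield tuple(buf)

print(list(sliding_windows(range(5), 3)))  # [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
```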
# ### Define the Plot Node
#
# Define a node to visualize the numbers online
# +
import bqplot
import numpy as np
import bqplot.pyplot as plt
class PlotSinkNode(Node):
def ports_setup(self):
input_ports = {
'stream_in': {
PortsSpecSchema.port_type: streamz.Stream
}
}
output_ports = {
'fig_out': {
PortsSpecSchema.port_type: bqplot.Figure
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def conf_schema(self):
json = {
"title": "Plot configure",
"type": "object",
"properties": {}
}
ui = {}
return ConfSchema(json=json, ui=ui)
def init(self):
pass
def meta_setup(self):
required = {
'stream_in': {'element': 'numbers'}
}
columns_out = {'fig_out': {}}
return MetaData(inports=required, outports=columns_out)
def process(self, inputs):
in_stream = inputs['stream_in']
axes_options = {'x': {'label': 'x'}, 'y': {'label': 'y'}}
x = []
y = []
fig = plt.figure(animation_duration=10)
lines = plt.plot(x=x, y=y, colors=['red', 'green'], axes_options=axes_options)
output = {}
def update(numbers):
y = np.array(numbers)
elements = y.shape[-1]
x = np.arange(elements)
lines.x = x
lines.y = y
in_stream.sink(update)
output.update({'fig_out': fig})
return output
# -
# This `PlotSinkNode` is a sink of the stream that uses the `update` function to plot the numbers in the sliding window, so we see an animation of the numbers changing as new ones are added.
#
# Register these two new nodes:
TaskGraph.register_lab_node(module_name, PlotSinkNode)
TaskGraph.register_lab_node(module_name, SlideWindowNode)
# If you draw a taskgraph you can right-click and select "Create Taskgraph from this Cell" so that you can just load the taskgraph next time. We already drew the graph and saved it in the `plot.gq.yaml` file. Load it from the disk:
task_graph = TaskGraph.load_taskgraph('../taskgraphs/streamz/plot.gq.yaml')
task_graph.draw()
# It applies the sliding window after doubling the numbers, then feeds them to the `plot` sink node.
#
# Run it to get the source and sink
r = task_graph.run()
# Show the plot:
r['plot.fig_out']
# Let's add `sine` wave numbers to the stream:
for i in range(1000):
r['source.stream_out'].emit(np.sin(i/3.14))
# You can see the moving sine waves in the plot. Try changing the sliding window size and play with it!
# ## Process Two Branches of the Stream
#
# The Streamz library can handle complicated stream topologies in the pipeline. For illustration purposes, we add a `ZipNode` that merges two streams together, so that elements from the two streams are zipped into tuples.
#
# ### Define the Zip Node to Combine Branches
class ZipNode(Node):
def ports_setup(self):
input_ports = {
'stream1_in': {
PortsSpecSchema.port_type: streamz.Stream
},
'stream2_in': {
PortsSpecSchema.port_type: streamz.Stream
}
}
output_ports = {
'stream_out': {
PortsSpecSchema.port_type: streamz.Stream
}
}
# dynamic ports
# add one additional port
iports_conn = self.get_connected_inports()
nports = len(iports_conn)
if nports >= 2:
input_ports = {'stream{}_in'.format(ii+1): {
PortsSpecSchema.port_type: streamz.Stream
} for ii in range(nports+1)}
return NodePorts(inports=input_ports, outports=output_ports)
def conf_schema(self):
json = {
"title": "Zip Node configure",
"type": "object",
"properties": {}
}
ui = {}
return ConfSchema(json=json, ui=ui)
def init(self):
pass
def meta_setup(self):
required = {
'stream1_in': {'element': 'numbers'},
'stream2_in': {'element': 'numbers'}
}
columns_out = {
'stream_out': {'element': 'numbers'}
}
return MetaData(inports=required, outports=columns_out)
def process(self, inputs):
streams_list = [istream for istream in inputs.values() if istream]
zip_stream = streamz.zip(*streams_list)
output = {'stream_out': zip_stream}
return output
# `ZipNode` defines two required input ports: `stream1_in` and `stream2_in`. Once both ports are connected, another port is dynamically added, and so on. In the `process` method, all the connected streams are zipped together.
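# `streamz.zip` pairs up one element from each stream, analogous to Python's built-in `zip` over finite sequences (shown here as a plain-Python analogy, not the streamz API):

```python
# two "streams" of values that have already been doubled and quadrupled
stream1 = [0, 2, 4]
stream2 = [0, 4, 8]

zipped = list(zip(stream1, stream2))
print(zipped)  # [(0, 0), (2, 4), (4, 8)]
```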
#
# Register this node:
TaskGraph.register_lab_node(module_name, ZipNode)
# We wired the graph in the `two_branches.gq.yaml` file. Load it from the disk:
task_graph = TaskGraph.load_taskgraph('../taskgraphs/streamz/two_branches.gq.yaml')
task_graph.draw()
# It first doubles the elements, then branches off. In one branch, it doubles the elements again. After merging these two branches with `ZipNode`, it sends the stream of tuples to the `plot` sink node.
#
# Let's run this graph:
r = task_graph.run()
# Show the plot:
r['plot.fig_out']
# Let's add sine wave numbers to the stream
for i in range(1000):
r['source.stream_out'].emit(np.sin(i/3.14))
# It shows two sine waves: one is doubled and the other is quadrupled.
# ## Use GPU to Accelerate
#
# Stream processing can be accelerated on the GPU by the RAPIDS cuDF dataframe. Streamz can work with both pandas and cuDF dataframes to provide sensible streaming operations on continuous tabular data.
#
# We will take the above streaming pipeline and convert it to be accelerated on the GPU.
#
# ### Define the Node to Convert a Tuple of Numbers to a cuDF Dataframe
#
# The sliding window node aggregates the numbers into a tuple. We can define a node that converts the tuple of numbers to a cuDF dataframe.
class TupleToCudf(Node):
def ports_setup(self):
input_ports = {
'stream_in': {
PortsSpecSchema.port_type: streamz.Stream
}
}
output_ports = {
'stream_out': {
PortsSpecSchema.port_type: streamz.Stream
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def conf_schema(self):
json = {
"title": "Tuple of data to Cudf dataframe Node configure",
"type": "object",
"properties": {}
}
ui = {}
return ConfSchema(json=json, ui=ui)
def init(self):
pass
def meta_setup(self):
required = {
'stream_in': {'element': 'numbers'}
}
columns_out = {
'stream_out': {'element': 'cudf'}
}
return MetaData(inports=required, outports=columns_out)
def process(self, inputs):
in_stream = inputs['stream_in']
def convert(ele):
df = cudf.DataFrame({'x': ele})
return df
output = {'stream_out': in_stream.map(convert)}
return output
# It maps tuples of numbers to cuDF dataframe elements with the `in_stream.map` method.
#
# ### Define the Node to Convert the cuDF Stream to a streamz.DataFrame
#
# The `streamz.dataframe` module provides a streaming dataframe object that implements many of the usual dataframe methods, giving a pandas-like interface on streaming data. Let's write a `ToDataFrame` node that converts the cuDF dataframe stream into a streaming dataframe.
class ToDataFrame(Node):
def ports_setup(self):
input_ports = {
'stream_in': {
PortsSpecSchema.port_type: streamz.Stream
}
}
output_ports = {
'df_out': {
PortsSpecSchema.port_type: streamz.dataframe.DataFrame
},
}
return NodePorts(inports=input_ports, outports=output_ports)
def conf_schema(self):
json = {
"title": "streamz Dataframe Node configure",
"type": "object",
"properties": {}
}
ui = {}
return ConfSchema(json=json, ui=ui)
def init(self):
pass
def meta_setup(self):
required = {
'stream_in': {'element': 'cudf'}
}
columns_out = {
'df_out': {'x': 'float64'}
}
return MetaData(inports=required, outports=columns_out)
def process(self, inputs):
in_stream = inputs['stream_in']
sdf = streamz.dataframe.DataFrame(
in_stream,
example=cudf.DataFrame({'x':[]})
)
output = {'df_out': sdf}
return output
# In the `process` method, it calls the `streamz.dataframe.DataFrame` constructor to convert a stream of cuDF dataframes into a streamz dataframe.
# ### Define the Node to Double the Numbers via the streamz.DataFrame API
#
# It is straightforward to write a node that doubles one column of a dataframe.
class GPUDouble(Node):
def ports_setup(self):
input_ports = {
'df_in': {
PortsSpecSchema.port_type: streamz.dataframe.DataFrame
}
}
output_ports = {
'df_out': {
PortsSpecSchema.port_type: streamz.dataframe.DataFrame
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def conf_schema(self):
json = {
"title": "GPU double Node configure",
"type": "object",
"properties": {}
}
ui = {}
return ConfSchema(json=json, ui=ui)
def init(self):
pass
def meta_setup(self):
required = {
'df_in': {'x': 'float64'}
}
columns_out = {
'df_out': {'x': 'float64'}
}
return MetaData(inports=required, outports=columns_out)
def process(self, inputs):
in_df = inputs['df_in']
in_df['x'] = in_df['x'] * 2
output = {'df_out': in_df}
return output
# ### Define the Node to Convert the streamz.DataFrame to a cudf.Series Stream
#
# Once the streamz dataframe has finished its computation, we can convert it back to a normal stream for display.
class ToStream(Node):
def ports_setup(self):
input_ports = {
'df_in': {
PortsSpecSchema.port_type: streamz.dataframe.DataFrame
}
}
output_ports = {
'stream_out': {
PortsSpecSchema.port_type: streamz.Stream
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def conf_schema(self):
json = {
"title": "df to stream Node configure",
"type": "object",
"properties": {}
}
ui = {}
return ConfSchema(json=json, ui=ui)
def init(self):
pass
def meta_setup(self):
required = {
'df_in': {'x': 'float64'}
}
columns_out = {
'stream_out': {'element': 'numbers'}
}
return MetaData(inports=required, outports=columns_out)
def process(self, inputs):
in_df = inputs['df_in']
def toarray(df_series):
return df_series.to_array()
outstream = in_df.stream.pluck('x').map(toarray)
output = {'stream_out': outstream}
return output
# The underlying stream can be accessed through the `stream` property of the streamz dataframe. We call the `pluck` method to select the `x` column and map the resulting series to an array (`to_array` was the cuDF Series API at the time; newer cuDF versions use `to_numpy`). This array satisfies the `{'element': 'numbers'}` meta-typecheck, so it can be consumed by the plot and print sink nodes.
#
# Register the newly added Nodes:
TaskGraph.register_lab_node(module_name, TupleToCudf)
TaskGraph.register_lab_node(module_name, ToDataFrame)
TaskGraph.register_lab_node(module_name, GPUDouble)
TaskGraph.register_lab_node(module_name, ToStream)
# We already wired the graph in the `gpu_double_two_branches.gq.yaml` file. Load it from the disk:
task_graph = TaskGraph.load_taskgraph('../taskgraphs/streamz/gpu_double_two_branches.gq.yaml')
task_graph.draw()
# It first aggregates the numbers into tuples with the sliding window node, then converts each tuple of numbers into a stream of cuDF dataframes. After converting to a streamz.DataFrame, we can use the normal dataframe API to do transformations on the data. Finally, it converts back to a stream of arrays of numbers for plotting and printing.
#
# Let's run the TaskGraph:
r = task_graph.run()
# Show the plot:
r['plot.fig_out']
# Let's add `sine` wave numbers to the stream:
for i in range(200):
r['source.stream_out'].emit(np.sin(i/3.14))
# ## Clean up
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
# Source notebook: gQuant/plugins/gquant_plugin/notebooks/10_streamz.ipynb.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Imports
# +
import pandas as pd
import numpy as np
import spacy
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from tqdm import tqdm
from itertools import product
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, ConfusionMatrixDisplay, precision_score, f1_score, recall_score
import matplotlib.pyplot as plt
# -
# # Functions
def metrics_model(list_estimators):
results = []
for n_est in list_estimators:
model = RandomForestClassifier(n_estimators=n_est, random_state=42)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
f1 = f1_score(y_test, y_pred, average='weighted', zero_division=0)
ps = precision_score(y_test, y_pred, average='weighted', zero_division=0)
recall = recall_score(y_test, y_pred, average='weighted', zero_division=0)
results.append([n_est, f1, ps, recall])
return pd.DataFrame(results, columns=['n_estimators', 'f1', 'precision', 'recall'])
def search_cross_valid(search_space, X_train, y_train, X_test, y_test, SEED=42):
f1_scores, precision_s, recall, params = [], [], [], []
    for n_est, crt, max_f in tqdm(search_space, desc="Training: "):
model = RandomForestClassifier(n_estimators=n_est, criterion=crt, max_features=max_f, random_state=SEED)
cv = StratifiedKFold(n_splits=10, random_state=SEED, shuffle=True)
steps = [('model', model)]
pipeline = Pipeline(steps=steps)
scores = cross_val_score(pipeline, X_train, y_train, scoring='f1_micro', cv=cv, n_jobs=-1)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
f1 = f1_score(y_test, y_pred, average='weighted', zero_division=0)
ps = precision_score(y_test, y_pred, average='weighted', zero_division=0)
r = recall_score(y_test, y_pred, average='weighted', zero_division=0)
print('Mean f1: %.3f' % np.mean(scores))
print('F1-score test: %.3f' % f1)
print('Precision score: %.3f' % ps)
print('Recall test: %.3f' % r)
parm = dict(n_estimators=n_est, criterion=crt, max_features=max_f)
print("Parameters: ", parm)
print("Score of validation: ", np.mean(scores))
f1_scores.append(f1)
precision_s.append(ps)
recall.append(r)
params.append(parm)
zipped_results = zip(params, f1_scores, precision_s, recall)
best_result = max(zipped_results, key = lambda res: res[1])
best_params, best_f1_score, best_precision_s, best_recall = best_result
results = pd.DataFrame(zip(['Random Forest'] * len(f1_scores), params, f1_scores, precision_s, recall),
columns=['model', 'parameters', 'f1', 'precision', 'recall'])
print('------------------------------')
print("Best parameters: ", best_params)
print("Best f1-score score: ", best_f1_score)
print("Best precision score: ", best_precision_s)
print("Best recall score: ", best_recall)
return best_params, results
def plot_results(results):
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.plot(results['n_estimators'], results[['f1', 'precision', 'recall']])
plt.xlabel("N_estimators")
plt.ylabel('Results')
plt.legend(['f1', 'precision', 'recall'])
# # Loading the dataset
df = pd.read_csv("data/reviews_final.csv")
df.head()
df.dropna(inplace=True)
df['label'] = df['label'].replace('Neutral', 'Bad')
df['label'].value_counts()
df.shape
# # Bag of words
nlp = spacy.load('pt_core_news_sm')
stopwords = nlp.Defaults.stop_words
len(stopwords)
vectorizer = CountVectorizer(lowercase=True, stop_words=stopwords)
# # Models
# ## Parameters
n_estimators = list(range(10, 200, 20))
criterions = ["gini", "entropy"]
max_features = ["sqrt", "auto", "log2"]
# +
search_space = tuple(product(n_estimators, criterions, max_features))
print("Number of parameter combinations that will be validated: ", len(search_space))
# -
# ## Model without stemming
# ### Splitting data into training and testing
# +
X = vectorizer.fit_transform(df['review']).toarray()
y = df['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, random_state=42)
# -
len(X[0])
# ### Search better n_estimators
results_no_stemmer = metrics_model(list(range(10, 200, 20)))
results_no_stemmer
plot_results(results_no_stemmer)
# ### Instantiating model with best n_estimators
model = RandomForestClassifier(n_estimators=10, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
ConfusionMatrixDisplay.from_estimator(model, X_test, y_test, cmap='Blues', values_format='d')
# ## Finding better parameters
best_params, results = search_cross_valid(search_space, X_train, y_train, X_test, y_test)
results
# +
# results['model'] = 'Random Forest'
# results.to_csv('results/random_forest.csv')
# -
# ### Instantiating model with best parameters
model = RandomForestClassifier(**best_params, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
ConfusionMatrixDisplay.from_estimator(model, X_test, y_test, cmap='Blues', values_format='d')
# # Stemming
# +
reviews = df['review'].apply(lambda x: x.lower())
reviews_stemmer = []
# Note: PorterStemmer targets English; for Portuguese text, NLTK's
# RSLPStemmer would be the language-specific choice.
stemmer = PorterStemmer()
for r in reviews:
    doc = nlp(r)
    # remove stop words and punctuation, then stem each remaining token,
    # producing exactly one processed string per review so that
    # reviews_stemmer stays aligned with the labels in y
    review_processed = [stemmer.stem(str(tkn)) for tkn in doc
                        if not tkn.is_punct and not tkn.is_stop]
    reviews_stemmer.append(" ".join(review_processed))
# -
reviews[0]
reviews_stemmer[0]
# # Model with stemming
# ## Splitting data into training and testing
# +
X_s = vectorizer.fit_transform(reviews_stemmer).toarray()
X_train, X_test, y_train, y_test = train_test_split(X_s, y, test_size=.3, random_state=42)
# -
len(X_s[0])
results_stemmer = metrics_model(list(range(10, 200, 20)))
results_stemmer
plot_results(results_stemmer)
model_stemmer = RandomForestClassifier(n_estimators=70, random_state=42)
model_stemmer.fit(X_train, y_train)
print(classification_report(y_test, model_stemmer.predict(X_test)))
ConfusionMatrixDisplay.from_estimator(model_stemmer, X_test, y_test, cmap='Blues', values_format='d')
# ## Searching for the best parameters
best_params, results = search_cross_valid(search_space, X_train, y_train, X_test, y_test)
results
# ### Saving the results
results['model'] = 'Random Forest'
results.to_csv('results/random_forest.csv', index=False)
# ## Instantiating model with best parameters
model = RandomForestClassifier(**best_params, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
ConfusionMatrixDisplay.from_estimator(model, X_test, y_test, cmap='Blues', values_format='d')
| random_forest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %config IPCompleter.greedy=True
import numpy as np
import gym
import random
environment = gym.make("Taxi-v2")
environment.render()
# +
# Number of possible states
state_size = environment.observation_space.n
# Number of possible actions per state
action_size = environment.action_space.n
# -
qtable = np.zeros((state_size, action_size))
print(qtable)
total_episodes = 15000
learning_rate = 0.8
max_steps = 99
gamma = 0.95
epsilon = 1.0
max_epsilon = 1.0
min_epsilon = 0.01
decay_rate = 0.005
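# As a quick sanity check on the epsilon-greedy schedule, here is a minimal sketch of the decay formula applied at the end of each episode below, using the same constants (not part of the original training code):

```python
import math

# Epsilon decay schedule: starts at max_epsilon and decays towards min_epsilon.
min_epsilon, max_epsilon, decay_rate = 0.01, 1.0, 0.005

def epsilon_at(episode):
    return min_epsilon + (max_epsilon - min_epsilon) * math.exp(-decay_rate * episode)

print(epsilon_at(0))     # 1.0: fully exploratory at the start
print(epsilon_at(1000))  # close to min_epsilon: mostly exploiting the Q-table
```

So after roughly 1000 of the 15000 episodes the agent is already acting greedily most of the time.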
# +
rewards = []
for episode in range(total_episodes):
state = environment.reset()
step = 0
done = False
total_rewards = 0
for step in range(max_steps):
exploration_exploitation_tradeoff = random.uniform(0, 1)
if exploration_exploitation_tradeoff > epsilon:
action = np.argmax(qtable[state,:])
else:
action = environment.action_space.sample()
new_state, reward, done, info = environment.step(action)
qtable[state, action] = qtable[state, action] + learning_rate * (reward + gamma * \
np.max(qtable[new_state, :]) - qtable[state, action])
total_rewards += reward
state = new_state
        if done:
break
epsilon = min_epsilon + (max_epsilon - min_epsilon)*np.exp(-decay_rate*episode)
rewards.append(total_rewards)
print(f"Score over time: {sum(rewards) / total_episodes}")
print(qtable)
# -
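# For clarity, here is one step of the tabular Q-learning update used in the training loop above, with the same learning_rate and gamma and toy values for the old Q-value, reward, and best next-state value:

```python
# Q(s,a) <- Q(s,a) + lr * (reward + gamma * max_a' Q(s',a') - Q(s,a))
learning_rate, gamma = 0.8, 0.95
q_old, reward, best_next = 0.0, -1.0, 0.5
q_new = q_old + learning_rate * (reward + gamma * best_next - q_old)
print(q_new)  # -0.42: q_old moved towards reward + gamma * best_next
```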
environment.reset()
print(environment.step(0))
# +
environment.reset()
for episode in range(5):
state = environment.reset()
step = 0
done = False
print("----------------------------------------------------")
print("EPISODE ", episode)
for step in range(max_steps):
action = np.argmax(qtable[state,:])
new_state, reward, done, info = environment.step(action)
if done:
environment.render()
print("Number of steps", step)
break
state = new_state
environment.close()
# -
| .ipynb_checkpoints/Q-Learning in Taxi-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exploratory data analysis with labeled data
# Now that we have the labels for our data, we can do some initial EDA to see if there is something different between the hackers and the valid users.
#
# ## Setup
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sqlite3
with sqlite3.connect('logs/logs.db') as conn:
logs_2018 = pd.read_sql(
'SELECT * FROM logs WHERE datetime BETWEEN "2018-01-01" AND "2019-01-01";',
conn, parse_dates=['datetime'], index_col='datetime'
)
hackers_2018 = pd.read_sql(
'SELECT * FROM attacks WHERE start BETWEEN "2018-01-01" AND "2019-01-01";',
conn, parse_dates=['start', 'end']
).assign(
duration=lambda x: x.end - x.start,
start_floor=lambda x: x.start.dt.floor('min'),
end_ceil=lambda x: x.end.dt.ceil('min')
)
hackers_2018.head()
# -
# This function will tell us if the datetimes had hacker activity:
def check_if_hacker(datetimes, hackers, resolution='1min'):
"""
Check whether a hacker attempted a log in during that time.
Parameters:
- datetimes: The datetimes to check for hackers
- hackers: The dataframe indicating when the attacks started and stopped
- resolution: The granularity of the datetime. Default is 1 minute.
Returns:
A pandas Series of booleans.
"""
date_ranges = hackers.apply(
lambda x: pd.date_range(x.start_floor, x.end_ceil, freq=resolution),
axis=1
)
dates = pd.Series()
for date_range in date_ranges:
dates = pd.concat([dates, date_range.to_series()])
return datetimes.isin(dates)
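# The core of `check_if_hacker` is just a `date_range` membership test. A toy illustration with hypothetical timestamps:

```python
import pandas as pd

# A 4-minute attack window and 10 minute-level observations; flags are True
# exactly for the minutes that fall inside the window.
attack_window = pd.date_range('2018-01-01 00:05', '2018-01-01 00:08', freq='1min')
datetimes = pd.Series(pd.date_range('2018-01-01 00:00', periods=10, freq='1min'))
flags = datetimes.isin(attack_window)
```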
# Let's label our data for Q1 so we can look for a separation boundary:
users_with_failures = logs_2018['2018-Q1'].assign(
failures=lambda x: 1 - x.success
).query('failures > 0').resample('1min').agg(
{'username':'nunique', 'failures': 'sum'}
).dropna().rename(
columns={'username':'usernames_with_failures'}
)
labels = check_if_hacker(users_with_failures.reset_index().datetime, hackers_2018)
users_with_failures['flag'] = labels[:users_with_failures.shape[0]].values
users_with_failures.head()
# Since we have the labels, we can draw a sample boundary that would separate most of the hackers from the valid users. Notice there is still at least one hacker in the valid users section of our separation:
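# The dashed boundary plotted below is a hand-drawn rule of thumb, not a fitted model: it runs from (0, 12) to (8, -4), so its slope is (-4 - 12) / (8 - 0) = -2 and its intercept is 12.

```python
# Sample boundary: a point above this line would be flagged as an attack.
def boundary(usernames_with_failures):
    return 12 - 2 * usernames_with_failures

print(boundary(0), boundary(8))  # 12 -4
```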
ax = sns.scatterplot(
x=users_with_failures.usernames_with_failures,
y=users_with_failures.failures,
alpha=0.25,
hue=users_with_failures.flag
)
ax.plot([0, 8], [12, -4], 'r--', label='sample boundary')
plt.ylim(-4, None)
plt.legend()
plt.title('Usernames with failures on minute resolution')
| ch_11/3-EDA_labeled_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Lab 10: Births
#
# Please complete this lab by providing answers in cells after the question. Use **Code** cells to write and run any code you need to answer the question and **Markdown** cells to write out answers in words. After you are finished with the assignment, remember to download it as an **HTML file** and submit it in **ELMS**.
#
# This assignment is due by **11:59pm on Thursday, April 21**.
# +
import numpy as np
from datascience import *
# These lines do some fancy plotting magic.
import matplotlib
# %matplotlib inline
import matplotlib.pyplot as plt
# This is for linear regression
from sklearn.linear_model import LinearRegression
# -
# ### North Carolina Births Data
#
# In this lab, we will work with a dataset of births in North Carolina. A series of variables were collected, including characteristics about the mother as well as the birthweight of the baby. We are interested in what factors are associated with birthweight. We'll first look at predicting `weight` and different models we can build to predict it, then use bootstrapping to do inference with the regression models.
#
# Let's start by exploring the dataset a little bit to see what it looks like. We'll take a look at the relationship between the number of weeks of gestation and the birthweight of the baby.
ncbirths = Table.read_table('ncbirths.csv')
ncbirths.show(5)
ncbirths.scatter('weeks', 'weight')
plt.title('North Carolina Births Data')
# Now, let's try doing an initial regression model. We want to fit a line that models the relationship between `weeks` and `weight`.
#
# <font color = 'red'>**Question 1. Set up the model object as `ols` and the `predictor` and `outcome` objects to run the linear regression, using `weeks` as the predictor and `weight` as the outcome. Then, fit the model and print out the slope and intercept.**</font>
# +
ols = ...
predictor = ...
outcome = ...
...
# -
# We can plot the line on top of the scatterplot to see what it looks like.
ncbirths.scatter('weeks', 'weight')
plt.title('North Carolina Births Data')
plt.plot(predictor, ols.predict(predictor), lw=4, color='gold')
# What if we wanted to try to predict the weight using the mother's `smoker` status? Since `smoker` is a categorical variable, we'd need to think about our interpretation of the slope differently, but we can still fit a regression model using this variable. However, we need to work with the data a little bit to be able to fit a model using categorical data. Essentially, what we do is treat each of the categories as a 0/1 variable. The observation has a value of 1 if it is in that category, and 0 if it is not.
#
# The `sklearn` package only allows for categorical variables that have been changed into dummy variables (that is, 0/1 variables). So, we're going to need to create new variables that contain the same information, except as numbers. Luckily, True/False maps onto 1/0, so we can just do comparisons. We'll use a lot of these variables later, so let's do some cleaning now.
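# A minimal illustration of the dummy-variable idea with a hypothetical column: comparing to a category gives booleans, and True/False map onto 1/0.

```python
import numpy as np

# Hypothetical 'habit' values; the comparison yields a 0/1 dummy column.
habit = np.array(['smoker', 'nonsmoker', 'smoker'])
smoker_dummy = (habit == 'smoker').astype(int)
print(smoker_dummy)  # [1 0 1]
```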
# +
ncbirths_dummy = ncbirths.with_columns('premature', ncbirths.column('premie') == 'premie', # True if premature, False if not
'female', ncbirths.column('gender') == 'female', # True if female, False if not
'smoker', ncbirths.column('habit') == 'smoker', # True if smoker, False if not
'label', ncbirths.column('lowbirthweight') == 'low') # Our outcome. True if low birthweight, False if not
ncbirths_dummy.show(5)
# -
# Drop redundant columns now
ncbirths_clean = ncbirths_dummy.drop('premie', 'gender', 'habit','lowbirthweight')
ncbirths_clean.show(5)
# Now that we've cleaned up our dataset, let's try doing a linear regression with a categorical predictor.
# <font color = 'red'>**Question 2. Set up the predictor and outcome variables to run the linear regression, using `smoker` as the predictor and `weight` as the outcome.**</font>
#
# **Hint:** This is done the same way as the linear regression above, except using `smoker` when defining `predictor`, which happens to be a categorical variable.
# +
ols = LinearRegression()
predictor = ...
outcome = ...
ols.fit(X = predictor, y = outcome)
print(ols.coef_)
print(ols.intercept_)
# -
# ### Multiple Regression
#
# You can also add additional predictor variables to the linear regression to try to better predict the outcome. This is done by simply adding more variables to the `select` statement when defining the predictor. You can think about this as just adding additional terms to the equation of the line. Before, we had one intercept and one slope. Now, we'll still only have one intercept, but we'll also add in other "slopes", or coefficients for additional variables. This way, we are using multiple variables to try to predict our outcome.
# +
multiple_ols = LinearRegression()
predictor = ncbirths_clean.select('mage', 'weeks', 'female', 'smoker').rows
outcome = ncbirths_clean.column('weight')
multiple_ols.fit(X = predictor, y = outcome)
print(multiple_ols.coef_)
print(multiple_ols.intercept_)
# -
# <font color = 'red'>**Question 2. Write out the form of the equation in the model that we just ran above.**</font>
# *Your answer here*
# ## Inference for Regression
#
# Now that we've constructed a model to predict a baby's birthweight, we might want to do some inference on some characteristics. Consider, for example, the coefficient for `weeks` in the model above. How certain are we that there is actually a relationship between `weeks` and `weight`? Maybe we got the coefficient that we did due to chance rather than a real association between the two variables.
#
# We can answer this question using bootstrapping and confidence intervals. The process is the same as before:
# - Take a bootstrap sample of the original data.
# - Fit a new line using the bootstrap sample.
# - Repeat this process many times.
# - Find the confidence interval using the bootstrap results.
#
# Let's see what this looks like with one bootstrap sample first.
# +
bootstrap_births = ncbirths_clean.sample()
bootstrap_ols = LinearRegression()
predictor = bootstrap_births.select('mage', 'weeks', 'female', 'smoker').rows
outcome = bootstrap_births.column('weight')
bootstrap_ols.fit(X = predictor, y = outcome)
print(bootstrap_ols.coef_)
print(bootstrap_ols.intercept_)
# -
# <font color = 'red'>**Question 3. Define a function called `bootstrap_slope` that has one argument `births` representing the original dataset. This function should take a bootstrap sample of `births`, fit a linear regression model with the four predictors (`mage`, `weeks`, `female`, and `smoker`), and return an array of the coefficients for the four variables.**</font>
# +
def bootstrap_slope(births):
...
return ...
bootstrap_slope(ncbirths_clean)
# -
# <font color = 'red'>**Question 4. Use a loop to take 500 bootstrap samples and store the coefficient for each of the predictor variables within arrays.**</font>
#
# *Hint:* Keep in mind the order in which you selected the variables to include in the model. `bootstrap_slope` should return an array with four coefficients. How would you get the individual coefficients?
# +
mage_coefs = make_array()
weeks_coefs = make_array()
female_coefs = make_array()
smoker_coefs = make_array()
for i in np.arange(500):
coefs = bootstrap_slope(ncbirths_clean)
mage_coefs = ...
weeks_coefs = ...
female_coefs = ...
smoker_coefs = ...
# -
# We should now have four different arrays of 500 bootstrap values, one for each of our coefficients. We want to derive confidence intervals for each of these. To do this, we can define a function that gives us a confidence interval and use it to find the confidence interval for the four arrays of bootstrap coefficients.
#
# The `confidence_interval` function has as its inputs an array of bootstrapped coefficients as well as a confidence level. It will output the left and right ends of the confidence interval as an array.
# +
def confidence_interval(coefficients, confidence_level):
left = percentile((100-confidence_level)/2, coefficients)
right = percentile(100 - (100 - confidence_level)/2, coefficients)
return make_array(left, right)
# 95% confidence interval for mother's age coefficient
confidence_interval(mage_coefs, 95)
# -
# <font color = 'red'>**Question 5. What are the 95% confidence intervals for each of the four coefficients? What would be your conclusion based on these confidence intervals?**</font>
# ## Prediction and Prediction Inference
#
# We can also build confidence intervals for the prediction that we make. The process for doing this is very similar to finding the confidence interval for a coefficient:
# - Take a bootstrap sample of the original data.
# - Fit a new line using the bootstrap sample and calculate the prediction using that line.
# - Repeat this process many times.
# - Find the confidence interval using the bootstrap results.
# <font color = 'red'>**Question 6. What would be the predicted weight of a baby according to our model if the mother's age was 30, the pregnancy lasted 36 weeks, the mother was not a smoker, and the baby was female?**</font>
new_table = Table().with_columns('mage', ...,
'weeks', ...,
'smoker', ...,
'female', ...).rows
...
# What would be the prediction interval for the prediction you found above? Let's go through the steps of generating the bootstrap prediction interval.
#
# <font color = 'red'>**Question 7. First, define a function called `bootstrap_prediction` that takes in the original data and new x-values that you want to make predictions for. This function should take a bootstrap sample, fit a linear regression model using the same predictors as before, and return one number that represents the predicted birth weight.**</font>
#
# *Hint:* There hasn't been much code provided here, but look back at what we've done before and think about what you would need to do. This function should be similar to the `bootstrap_slope` function, but the output will be different.
# +
def bootstrap_prediction(births, new_x):
...
return ...
bootstrap_prediction(ncbirths_clean, new_table)
# -
# <font color = 'red'>**Question 8. Find the 95% prediction interval. Use 500 iterations in the loop, and assign the 500 bootstrap predictions to `predictions`. Assign the left endpoint of the confidence interval to `left` and the right endpoint to `right`.**</font>
#
# *Hint:* Again, there hasn't been much code provided here, but look back at what we've done before and think about what you would need to do. You can feel free to reuse the `confidence_interval` function to calculate the endpoints of the interval.
# +
predictions = ...
...
left = ...
right = ...
print('The prediction interval is:', left, ',', right)
# -
# We can graph the bootstrap predictions and look at the prediction interval as we've done with bootstrap values in the past.
Table().with_columns('Prediction', predictions).hist()
left = percentile(2.5, predictions)
right = percentile(97.5, predictions)
plt.plot([left, right], [0, 0], color='yellow', lw=10, zorder=1)
| labs/Lab-10-RegressionInference/lab10.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="kjwyqkvwwoZw" colab_type="code" colab={}
# Imports
import pandas as pd
import numpy as np
from sklearn.metrics import mean_absolute_error
# + id="SLU7MkAzwuxz" colab_type="code" colab={}
# Load the raw data
w1_results_df = pd.read_csv('https://raw.githubusercontent.com/Lambda-School-Labs/nfl-fantasy-ds/master/data/combined-predictions/predictions-week1.csv')
# + id="RgOXqOBKx4MC" colab_type="code" outputId="696443e9-8bb0-4cd2-ccae-54c709050cc5" colab={"base_uri": "https://localhost:8080/", "height": 204}
#### The week 1 predictions
# week1-cur = 2018 total points
# week1-pred = predicted points for the season
# week1-act = actual points for the season
# weekn-cur = week (n-1) actual points
# weekn-pred = predicted points for the rest of the season (n-17)
# weekn-act = actual points for the rest of the season (n-17)
w1_results_df.head()
# + id="BHPmWvto0XSY" colab_type="code" outputId="1eb4bce0-0379-4e39-d707-5e1aad7e93f3" colab={"base_uri": "https://localhost:8080/", "height": 884}
# Calculate the MAE for predicted points vs. actual points
# Calculate the MAE for current points using the average of previous weeks
for i in range(1, 18):
filename = 'https://raw.githubusercontent.com/Lambda-School-Labs/nfl-fantasy-ds/master/data/combined-predictions/predictions-week' + str(i) + '.csv'
# Column names
week_cur = 'week' + str(i) + '-cur'
week_pred = 'week' + str(i) + '-pred'
week_act = 'week' + str(i) + '-act'
# Weekly predictions
results_df = pd.read_csv(filename)
# Create the current points list using 2018 points in week 1 and average points going forward
if i == 1:
week_current = results_df['week1-cur'].tolist()
else:
# for each player (element) calculate the average points (element/(i-1)) and multiply by remaining games (17-(i-1))
# the 17th week is 0 and represents the bye week (17 weeks and 16 games)
week_list = results_df[week_cur].tolist()
week_current = [((element / (i -1)) * (17 - (i -1))) for element in week_list]
    # Create the prediction and actual lists
week_pred = results_df[week_pred].tolist()
week_act = results_df[week_act].tolist()
# Calculate the MAE for predicted vs. actual
week_pa_mae = mean_absolute_error(week_act, week_pred)
print('MAE predicted vs actual week {0:2d} {1:3.2f}'.format(i, week_pa_mae))
# Calculate the MAE for current vs. actual
week_ca_mae = mean_absolute_error(week_act, week_current)
print('MAE current vs actual week {0:2d} {1:3.2f}'.format(i, week_ca_mae), '\n')
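# A worked example of the remaining-season projection used above, with a hypothetical player: 40 points entering week 3 means 2 games played at 20 points per game, and 17 - 2 = 15 "weeks" remain (one of them the bye).

```python
# (element / (i - 1)) * (17 - (i - 1)) projects average points over the
# remaining weeks of the season.
i, element = 3, 40
projected = (element / (i - 1)) * (17 - (i - 1))
print(projected)  # 300.0
```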
| metrics/Metrics4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.7 64-bit
# language: python
# name: python3
# ---
# +
# %load_ext autoreload
# %autoreload 2
import os
import sys
sys.path.append('../../src')
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from utils.functions import load_datasets_from_csv
from plot_variation import plot_t_h_curves_info
from paths import ROOT_DIR, FLLD_DB_DIR
IMG_PATH = ROOT_DIR+'/img/temp_hum_variation'
if not os.path.exists(IMG_PATH):
os.mkdir(IMG_PATH)
# -
sns.set_style({'font.family':'sans', 'font.serif':'Helvetica'})
sns.set_context(rc={"font.size":11,"axes.titlesize":14,"axes.labelsize":12})
dts = load_datasets_from_csv(FLLD_DB_DIR).copy()
plot_t_h_curves_info(dts['air'], "2021-03-01 22:00:00", "2021-03-02 16:00:00")
plt.savefig(IMG_PATH+'/temp_hum_variation_a.svg', dpi=600, pad_inches=0.05, bbox_inches='tight')
plt.savefig(IMG_PATH+'/temp_hum_variation_a.jpeg', dpi=600, pad_inches=0.05, bbox_inches='tight')
plot_t_h_curves_info(dts['air'], "2021-03-04 22:00:00", "2021-03-05 16:00:00")
plt.savefig(IMG_PATH+'/temp_hum_variation_b.svg', dpi=600, pad_inches=0.05, bbox_inches='tight')
plt.savefig(IMG_PATH+'/temp_hum_variation_b.jpeg', dpi=600, pad_inches=0.05, bbox_inches='tight')
plot_t_h_curves_info(dts['air'], "2021-03-08 22:00:00", "2021-03-09 16:00:00")
plt.savefig(IMG_PATH+'/temp_hum_variation_c.svg', dpi=600, pad_inches=0.05, bbox_inches='tight')
plt.savefig(IMG_PATH+'/temp_hum_variation_c.jpeg', dpi=600, pad_inches=0.05, bbox_inches='tight')
plot_t_h_curves_info(dts['air'], "2021-02-25 22:00:00", "2021-02-26 16:00:00")
plt.savefig(IMG_PATH+'/temp_hum_variation_d.svg', dpi=600, pad_inches=0.05, bbox_inches='tight')
plt.savefig(IMG_PATH+'/temp_hum_variation_d.jpeg', dpi=600, pad_inches=0.05, bbox_inches='tight')
| data_analysis/DDA/temp_hum_variation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:py36_gis2]
# language: python
# name: conda-env-py36_gis2-py
# ---
import os
import mypackages.myrasters as mr
import netCDF4
import numpy as np
import geopandas as gpd
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
# %matplotlib inline
# Specify the data directories where input data is stored and where output data will be stored:
data_dir = os.path.join('..', 'data')
out_dir = os.path.join('..', 'output/impact')
# The data is stored as a GeoTIFF raster, so we first open it and then access its array:
# +
file_name = 'FAO/gaez/res03_ehb22020h_sihr0_wpo.tif'
area_raster = mr.MyRaster(os.path.join(data_dir, file_name))
# -
area = area_raster.get_array().astype('int16')
area[area == 0] = -1
area[area == 9] = -1
area[area == 8] = 0
area[(area > 0) & (area < 6)] = 2
area[(area == 6) | (area == 7)] = 1
area = mr.cut_array_yboundaries(array=area, pixelHeight=area_raster.pixelHeight,
y_min_old=-90, y_min_new=-56, y_max_old=90, y_max_new=84)
# +
fig = plt.figure(figsize=(16,9))
plt.imshow(area);
# -
out_filename = 'suitable_area'
mr.array2geotiff(area, os.path.join(out_dir, out_filename),
-1, area.shape[1], area.shape[0],
-180, 84, area_raster.pixelWidth, area_raster.pixelHeight)
nogo = np.zeros(area.shape)
nogo[area == 0] = 1
marginal = np.zeros(area.shape)
marginal[area == 1] = 1
# +
file_name = 'impact_potato_pot_area.tif'
impact_raster = mr.MyRaster(os.path.join(out_dir, file_name))
# -
impact = impact_raster.get_array()
shapefile_name = 'shapefiles/countries.shp'
shapefile = os.path.join(data_dir, shapefile_name)
gdf = gpd.read_file(shapefile)
filename = 'crop_area.shp'
gdf2 = gpd.read_file(os.path.join(out_dir, filename))
cdict1 = {'red': ((0.0, 0.0, 0.0),
(1.0, 1.0, 1.0)),
'green': ((0.0, 0.0, 0.0),
(1.0, 0.0, 0.0)),
'blue': ((0.0, 0.0, 0.0),
(1.0, 0.0, 0.0)),
'alpha': ((0.0, 0.0, 0.0),
(1.0, 1.0, 1.0))}
plt.register_cmap(name='Red', data=cdict1)
cdict2 = {'red': ((0.0, 0.0, 0.0),
(1.0, 0.4, 0.4)),
'green': ((0.0, 0.0, 0.0),
(1.0, 0.8, 0.8)),
'blue': ((0.0, 0.0, 0.0),
(1.0, 0.0, 0.0)),
'alpha': ((0.0, 0.0, 0.0),
(1.0, 1.0, 1.0))}
plt.register_cmap(name='Green', data=cdict2)
np.percentile(impact[impact >= 0], 0.98)
vmin = 0
vmax = 0.5
# +
# %%time
fig = plt.figure(figsize=(16, 9))
ax = fig.add_subplot(111)
ax.set_xlim([-180, 180])
ax.set_ylim([-56, 84])
ax.set_xticks([])
ax.set_yticks([])
ax.set_frame_on(False)
gdf.plot(ax=ax, facecolor='none', edgecolor='#000000', linewidths=0.3)
gdf2.plot(ax=ax, facecolor='none', edgecolor='#000000', linewidths=0, hatch='////')
ax.imshow(impact, extent=[-180, 180, -56, 84], cmap='Greys', vmin=vmin, vmax=vmax)
ax.imshow(nogo, extent=[-180, 180, -56, 84], cmap='Red', alpha=0.15)
ax.imshow(marginal, extent=[-180, 180, -56, 84], cmap='Green', alpha=0.25)
norm = matplotlib.colors.Normalize(vmin, vmax, clip = False)
cax1 = fig.add_axes([0.2, 0.25, 0.01, 0.3]) # xmin, ymin, dx, dy
matplotlib.colorbar.ColorbarBase(cax1, cmap='Greys', norm=norm)
cax1.yaxis.set_ticks_position('left')
plt.savefig(os.path.join(out_dir, 'impact_potato_pot_area.png'),
dpi=300, bbox_inches='tight', pad_inches=0.1)
# -
| python/6Ac_impact_potato_non-crop-area.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import math
import itertools
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Flatten
from keras.layers import TimeDistributed
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
import statsmodels.api as sm
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# -
import warnings
warnings.filterwarnings("ignore")
# # Load and process the data
df = pd.read_csv('AirPassengers.csv')
df['Month'] = pd.to_datetime(df['Month'])
df.head()
y = pd.Series(data=df['Passengers'].values, index=df['Month'])
y.head()
y.plot(figsize=(14, 6))
plt.show()
data = y.values.reshape(y.size,1)
# # LSTM Forecast Model
# ### LSTM Data Preparation
'MinMaxScaler'
scaler = MinMaxScaler(feature_range=(0, 1))
data = scaler.fit_transform(data)
train_size = int(len(data) * 0.7)
test_size = len(data) - train_size
train, test = data[0:train_size,:], data[train_size:len(data),:]
print('data train size :',train.shape[0])
print('data test size :',test.shape[0])
data.shape
# +
'function to reshape data according to the number of lags'
def reshape_data (data, look_back,time_steps):
sub_seqs = int(look_back/time_steps)
dataX, dataY = [], []
for i in range(len(data)-look_back-1):
a = data[i:(i+look_back), 0]
dataX.append(a)
dataY.append(data[i + look_back, 0])
dataX = np.array(dataX)
dataY = np.array(dataY)
dataX = np.reshape(dataX,(dataX.shape[0],sub_seqs,time_steps,np.size(data,1)))
return dataX, dataY
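# A standalone sketch of the sliding-window reshape above, using a toy sequence instead of the passenger data: 10 samples with look_back=2 and time_steps=1 yield 7 windows of shape (sub_seqs, time_steps, n_features).

```python
import numpy as np

# Build (X, y) pairs with a sliding window, then add the sub-sequence axis
# expected by the TimeDistributed CNN front-end.
data = np.arange(10, dtype=float).reshape(-1, 1)
look_back, time_steps = 2, 1
n = len(data) - look_back - 1
X = np.array([data[i:i + look_back, 0] for i in range(n)])
y = np.array([data[i + look_back, 0] for i in range(n)])
X = X.reshape(n, look_back // time_steps, time_steps, data.shape[1])
print(X.shape)  # (7, 2, 1, 1)
```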
# +
look_back = 2
time_steps = 1
trainX, trainY = reshape_data(train, look_back,time_steps)
testX, testY = reshape_data(test, look_back,time_steps)
print('train shape :',trainX.shape)
print('test shape :',testX.shape)
# -
# ### Define and Fit the Model
# +
model = Sequential()
model.add(TimeDistributed(Conv1D(filters=64, kernel_size=1, activation='relu'), input_shape=(None, time_steps, 1)))
model.add(TimeDistributed(MaxPooling1D(pool_size=1)))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(50, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
history = model.fit(trainX, trainY, epochs=300, validation_data=(testX, testY), verbose=0)
# -
'plot history'
plt.figure(figsize=(12,5))
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.legend()
plt.show()
'make predictions'
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
'invert predictions'
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
'calculate root mean squared error'
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
# +
'shift train predictions for plotting'
trainPredictPlot = np.empty_like(data)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict
'shift test predictions for plotting'
testPredictPlot = np.empty_like(data)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(data)-1, :] = testPredict
# -
'Make as pandas series to plot'
data_series = pd.Series(scaler.inverse_transform(data).ravel(), index=df['Month'])
trainPredict_series = pd.Series(trainPredictPlot.ravel(), index=df['Month'])
testPredict_series = pd.Series(testPredictPlot.ravel(), index=df['Month'])
'plot baseline and predictions'
plt.figure(figsize=(15,6))
plt.plot(data_series,label = 'real')
plt.plot(trainPredict_series,label = 'predict_train')
plt.plot(testPredict_series,label = 'predict_test')
plt.legend()
plt.show()
# # SARIMA Model
y_train = y[:train_size]
y_test = y[train_size:]
# ## Grid search the p, d, q parameters
# +
'Define the p, d and q parameters to take any value from 0 to 2'
p = d = q = range(0, 3)
'Generate all different combinations of p, d and q triplets'
pdq = list(itertools.product(p, d, q))
# Generate all different combinations of seasonal p, d and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
# -
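# To see the size of this search space: with p, d, q each ranging over 0..2 there are 3**3 = 27 non-seasonal triplets and 27 seasonal triplets, so the grid search below fits 27 * 27 = 729 SARIMA models.

```python
import itertools

# Every (p, d, q) combination with each parameter in {0, 1, 2}.
combos = list(itertools.product(range(3), repeat=3))
print(len(combos))  # 27
```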
warnings.filterwarnings("ignore") # specify to ignore warning messages
best_result = [0, 0, 10000000]
for param in pdq:
for param_seasonal in seasonal_pdq:
mod = sm.tsa.statespace.SARIMAX(y_train,order=param,
seasonal_order=param_seasonal,
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
print('ARIMA{} x {} - AIC: {}'.format(param, param_seasonal, results.aic))
if results.aic < best_result[2]:
best_result = [param, param_seasonal, results.aic]
print('\nBest Result:', best_result)
# ## Plot model diagnostics
# +
mod = sm.tsa.statespace.SARIMAX(y_train,
order=(best_result[0][0], best_result[0][1], best_result[0][2]),
seasonal_order=(best_result[1][0], best_result[1][1], best_result[1][2], best_result[1][3]),
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
print(results.summary())
# -
results.plot_diagnostics(figsize=(15, 12))
plt.show()
'make predictions'
pred = results.get_prediction(start=pd.to_datetime('1949-02-01'), dynamic=False,full_results=True)
pred_ci = pred.conf_int()
# +
ax = y_train.plot(label='Observed', figsize=(15, 6))
pred.predicted_mean.plot(ax=ax, label='predicted', alpha=.7)
ax.set_xlabel('Date')
ax.set_ylabel('Airline Passengers')
plt.legend()
plt.show()
# -
pred_uc = results.get_forecast(steps=44)
# +
ax = y_train.plot(label='train', figsize=(15,6))
pred_uc.predicted_mean.plot(ax=ax, label='Forecast')
y_test.plot(ax=ax, label='test')
ax.set_xlabel('Date')
ax.set_ylabel('Airline Passengers')
plt.legend()
plt.show()
# +
trainScore = math.sqrt(mean_squared_error(y_train[1:], pred.predicted_mean))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(y_test, pred_uc.predicted_mean))
print('Test Score: %.2f RMSE' % (testScore))
| time series data/1. univarite time series/air passengers/air_passengers_cnn-lstm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.4 64-bit (''base'': conda)'
# name: python364jvsc74a57bd0a3b5de61d878f304406f236b937d31a1186f9847b1a9314718805f467cc31b17
# ---
# !pip3 install tensorflow keras nltk
# !pip3 install ChatterBot
# !pip3 install Flask
# !pip install spacy
from chatterbot import ChatBot
bot = ChatBot("Candice")
# !pip3 install opencv-python
| lp_setup.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: TF2
# language: python
# name: tf2
# ---
# + active=""
# Write a function that reverses a string. The input string is given as an array of characters char[].
# Do not allocate extra space for another array; you must do this by modifying the input array in-place with O(1) extra memory.
# You may assume all the characters consist of printable ASCII characters.
#
# Example 1:
#
# Input: ["h","e","l","l","o"]
# Output: ["o","l","l","e","h"]
#
# Example 2:
# Input: ["H","a","n","n","a","h"]
# Output: ["h","a","n","n","a","H"]
# -
class Solution:
def reverseString(self, s) -> None:
"""
Do not return anything, modify s in-place instead.
"""
left = 0
right = len(s) - 1
while left <= right:
s[left], s[right] = s[right], s[left]
left += 1
right -= 1
print(s)
s_ = ["h","e","l","l"]
solution = Solution()
solution.reverseString(s_)
| String/0821/344. Reverse String.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chapter 2 – End-to-end Machine Learning project
# ## The Main Steps of a Machine Learning Project
# 1. Frame the problem and look at the big picture
# 2. Get the data.
# 3. Explore the data to gain insights
# 4. Prepare the data to better expose the underlying data patterns to Machine Learning algorithms
# 5. Explore many different models and short-list the best ones
# 6. Fine-tune your models and combine them into a great solution
# 7. Present your solution
# 8. Launch, monitor, and maintain your system
#
# ### Frame the problem and look at the big picture
# To tackle a machine learning problem, first be clear about what the problem really is: building a model is not the end goal in itself. It is well worth spending time understanding how the business will use the model and benefit from it, because this determines how the problem is framed, which algorithm to pick, which performance measure to use to evaluate the model, and how much effort to put into tuning it.
#
# A machine learning system processes and transforms a lot of data, often in sequence; such a sequence of data-processing components is called a data pipeline.
# The components of a pipeline run fairly independently and are usually self-contained, exchanging data through data stores: the output of one serves as the input of the next. Clearly separated components can also be handed to different teams. We need to know whether the problem we are solving is one step of a larger pipeline or a small project in its own right.
#
# Once the problem is understood, think it through further: is it supervised, unsupervised, or reinforcement learning? Classification or regression? Batch learning or online/incremental learning?
#
# The example in this chapter is predicting California housing prices. This is clearly a supervised (linear) regression problem, and the dataset is small enough to fit entirely in memory, so batch learning is the appropriate choice.
#
# To judge how good the model is, we also need a performance measure. For regression problems, a common choice is the Root Mean Square Error (RMSE).
#
# RMSE: the square root of the sum of squared deviations between predictions and true values divided by the number of observations m.
# SD: the standard deviation, the square root of the variance, measures how dispersed a set of values is around its own mean (sigma).
#
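# A quick illustration of the two measures on a tiny set of made-up values (the numbers below are purely hypothetical, just to show the arithmetic):

```python
import numpy as np

# Hypothetical true values and model predictions, for illustration only.
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

# RMSE: square root of the mean squared deviation between predictions and truth.
rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))

# SD: dispersion of a single set of values around its own mean.
sd = np.std(y_true)

print(rmse, sd)
```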
# Problem-framing checklist:
# 1. Define the objective in business terms
# 2. How will the solution be used? What is the current solution?
# 3. Frame the problem (supervised/unsupervised, online/offline)
# 4. Choose a performance measure
# 5.
# ### Get the data
# The dataset is the foundation of model training; without data, even the best algorithm yields nothing. In real industrial settings the data you obtain is not necessarily structured, and it may need ETL transformations to become tidy. After that comes feature engineering: identifying patterns in the data and constructing new features when necessary.
#
# ### Explore the data to get an overview of each feature
#
# ## Getting the data
# First, the shared code for saving figures
# +
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "1.end_to_end_project"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
# -
# Load the data into housing, then do some basic exploration below.
# +
HOUSING_PATH = os.path.join("datasets", "housing")
import pandas as pd
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
housing = load_housing_data()
housing.head()
# -
housing.info()
housing.describe()
# %matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20,15))
save_fig("attribute_histogram_plots")
plt.show()
# Split the dataset. You can write your own splitting strategy, or simply use sklearn's train_test_split function. Below we split with train_test_split and then look at histograms of the data distributions of train_set and test_set.
# +
from sklearn.model_selection import train_test_split
train_set,test_set = train_test_split(housing,test_size=0.2,random_state=42)
train_set.hist(bins=50, figsize=(20,15))
test_set.hist(bins=50, figsize=(20,15))
# -
# Look specifically at the histogram of median_income: most values fall between 2 and 5 (tens of thousands of dollars).
housing['median_income'].hist(bins=50, figsize=(10,6))
# median_income is currently numeric; we can convert it into a categorical attribute with discrete values. To limit the number of categories, first divide the value by 1.5, then take the ceiling to get integer values.
# Below we compute the proportion of each of the five values 1-5.
housing['income_cat']=np.ceil(housing['median_income']/1.5)
housing['income_cat'].where(housing['income_cat']<5,5.0,inplace=True)
housing['income_cat'].hist()
housing['income_cat'].value_counts()/len(housing)
# Compare the resulting distributions of StratifiedShuffleSplit and train_test_split; the former provides a way to keep the split consistent with the distribution of a specified attribute.
# +
from sklearn.model_selection import StratifiedShuffleSplit
split=StratifiedShuffleSplit(n_splits=1,test_size=0.2,random_state=42)
s=split.split(housing,housing['income_cat'])
for trainindex,testindex in s:
trainset=housing.loc[trainindex]
testset=housing.loc[testindex]
print('total:\n',housing['income_cat'].value_counts()/len(housing))
print('StratifiedShuffleSplit:\n',trainset['income_cat'].value_counts()/len(trainset))
randtrainset,randtestset=train_test_split(housing,test_size=0.2,random_state=42)
print('train_test_split:\n',randtrainset['income_cat'].value_counts()/len(randtrainset))
# -
# From here on we mainly experiment with trainset
# +
trainset.plot(kind="scatter",x="longitude",y="latitude",alpha=0.2)
# -
trainset.plot(kind='scatter',x="longitude",y="median_income",alpha=0.3)
# Color and bubble size make the data easier to read. In the plot below, s is the column whose values set the bubble size, c is the color intensity, and cmap is the colormap. The figure shows that house prices are strongly related to proximity to the ocean and to population density.
housing=trainset
housing.plot(kind='scatter',x="longitude",y="latitude",
alpha=0.4,
s=housing['population']/50,label="pop",
c="median_house_value",cmap=plt.get_cmap("jet"),colorbar=True,)
# ## Looking for Correlation
# The *standard correlation coefficient* (Pearson's r)
# ranges from -1 to 1. A value close to 1 indicates a strong positive correlation, a value close to -1 a strong negative correlation, and a value close to 0 no linear correlation.
corr_matrix=housing.corr()
corr_matrix
| handson-ml-zjh/2.end_to_end_machine_learning_project.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # AIMA Python Binder Index
#
# Welcome to the AIMA Python Code Repository. You should be seeing this index notebook if you clicked on the **Launch Binder** button on the [repository](https://github.com/aimacode/aima-python). If you are viewing this notebook directly on Github we suggest that you use the **Launch Binder** button instead. Binder allows you to experiment with all the code in the browser itself without the need of installing anything on your local machine. Below is the list of notebooks that should assist you in navigating the different notebooks available.
#
# If you are completely new to AIMA Python or Jupyter Notebooks we suggest that you start with the Introduction Notebook.
#
# # List of Notebooks
#
# 1. [**Introduction**](./intro.ipynb)
#
# 2. [**Agents**](./agents.ipynb)
#
# 3. [**Search**](./search.ipynb)
#
# 4. [**Search - 4th edition**](./search-4e.ipynb)
#
# 5. [**Games**](./games.ipynb)
#
# 6. [**Constraint Satisfaction Problems**](./csp.ipynb)
#
# 7. [**Logic**](./logic.ipynb)
#
# 8. [**Planning**](./planning.ipynb)
#
# 9. [**Probability**](./probability.ipynb)
#
# 10. [**Markov Decision Processes**](./mdp.ipynb)
#
# 11. [**Learning**](./learning.ipynb)
#
# 12. [**Reinforcement Learning**](./rl.ipynb)
#
# 13. [**Statistical Language Processing Tools**](./text.ipynb)
#
# 14. [**Natural Language Processing**](./nlp.ipynb)
#
# Besides the notebooks it is also possible to make direct modifications to the Python/JS code. To view/modify the complete set of files [click here](.) to view the Directory structure.
| aimacode/index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Dependencies and Setup
# %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from scipy.stats import sem
# Hide warning messages in notebook
import warnings
warnings.filterwarnings('ignore')
# File to Load (Remember to Change These)
mouse_data = "data/mouse_drug_data.csv"
clinical_trial_data = "data/clinicaltrial_data.csv"
# Read the Mouse and Drug Data and the Clinical Trial Data
mouse_df = pd.read_csv(mouse_data)
clinical_trial_df = pd.read_csv(clinical_trial_data)
# mouse_df.head()
# clinical_trial_df.head()
# Combine the data into a single dataset
combined_data = pd.merge(clinical_trial_df, mouse_df, on="Mouse ID")
# Display the data table for preview
combined_data
# +
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
placeholder = combined_data.groupby(["Drug", "Timepoint"])["Tumor Volume (mm3)"].mean()
# Convert to DataFrame
dataframe = pd.DataFrame(placeholder).reset_index()
#Preview DataFrame
dataframe
# +
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
standard_error_grouped = combined_data.groupby(["Drug", "Timepoint"])["Tumor Volume (mm3)"].sem()
# Convert to DataFrame
standard_error_df = pd.DataFrame(standard_error_grouped).reset_index()
# Preview DataFrame
standard_error_df
# -
# Minor Data Munging to Re-Format the Data Frames
drugtable = pd.pivot_table(combined_data,index=['Timepoint'],values=['Tumor Volume (mm3)'],columns=['Drug'])
# reset the index
drugtable.columns = drugtable.columns.droplevel(0)
drugtable
# +
# Plot
fig, ax = plt.subplots(figsize=(16,8))
ax.set_xlabel("Time (Days)")
ax.set_ylabel("Tumor Volume (mm3)")
ax.set_title("Tumor Response To Treatment")
drugs = ["Capomulin", "Infubinol", "Ketapril", "Placebo"]
marker = ["o", "^", "s", "d"]
color = ["red", "blue", "green", "black"]
stderr_x_axis = [row for row in drugtable.index]
stdErrPivot = standard_error_df.pivot(index="Timepoint", columns="Drug", values="Tumor Volume (mm3)")
i = 0
for drug in drugs:
y_axis = drugtable[drug]
ax.errorbar(stderr_x_axis, y_axis, stdErrPivot[drug], linestyle=":", fmt=marker[i], color=color[i], label=drug)
i = i+1
plt.legend()
ax.yaxis.grid()
plt.savefig("../Images/TumorResponse.png")
# +
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
metsitegrouped = combined_data.groupby(["Drug", "Timepoint"])["Metastatic Sites"].mean()
# Convert to DataFrame
metsite_df = pd.DataFrame(metsitegrouped)
# Preview DataFrame
metsite_df
# +
stderrgrouped = combined_data.groupby(["Drug", "Timepoint"])["Metastatic Sites"].sem()
stderrgrouped_df = pd.DataFrame(stderrgrouped).reset_index()
stderrgrouped_df
# -
# Minor Data Munging to Re-Format the Data Frames
drugtable2 = pd.pivot_table(combined_data,index=['Timepoint'],values=['Metastatic Sites'],columns=['Drug'])
# reset the index
drugtable2.columns = drugtable2.columns.droplevel(0)
drugtable2
# +
# Plot
fig, ax = plt.subplots(figsize=(16,8))
ax.set_xlabel("Treatment Duration (Days)")
ax.set_ylabel("Met.Sites")
ax.set_title("Metastatic Spread During Treatment")
drugs = ["Capomulin", "Infubinol", "Ketapril", "Placebo"]
marker = ["o", "^", "s", "d"]
color = ["red", "blue", "green", "black"]
x_axis = [row for row in drugtable2.index]
stdErrPvt = stderrgrouped_df.pivot(index="Timepoint", columns="Drug", values="Metastatic Sites")
i = 0
for drug in drugs:
y_axis = drugtable2[drug]
ax.errorbar(x_axis, y_axis, stdErrPvt[drug], linestyle=":", fmt=marker[i], color=color[i], label=drug)
i = i+1
plt.legend()
ax.yaxis.grid()
plt.savefig("../Images/MetastaticSpread.png")
# +
mousecountgrouped = combined_data.groupby(["Drug", "Timepoint"])["Mouse ID"].count()
# Convert to DataFrame
mousecount_df = pd.DataFrame(mousecountgrouped).reset_index()
# Preview DataFrame
mousecount_df
# -
# Minor Data Munging to Re-Format the Data Frames
mousetable = pd.pivot_table(mousecount_df,index=['Timepoint'],values=['Mouse ID'],columns=['Drug'])
# reset the index
mousetable.columns = mousetable.columns.droplevel(0)
mousetable
# +
drugs = ["Capomulin", "Infubinol", "Ketapril", "Placebo"]
marker = ["o", "^", "s", "d"]
color = ["red", "blue", "green", "black"]
x_axis = [row for row in mousetable.index]
fig, ax = plt.subplots(figsize=(16,8))
i = 0
ax.set_xlabel("Time (Days)")
ax.set_ylabel("Survival Rate (%)")
ax.set_title("Survival During Treatment")
for drug in drugs:
    ax.plot(x_axis, (100 * mousetable[drug])/mousetable[drug][0], marker=marker[i], linestyle=":", label=drug, color=color[i])
i = i+1
plt.legend()
plt.grid()
plt.savefig("../Images/SurvivalRates.png")
# +
PercentChanges = {}
for drug in drugtable.columns:
begin = drugtable[drug][0]
end = drugtable[drug][45]
change = ((end - begin) / begin) * 100
PercentChanges[drug] = change
drugs = ["Capomulin", "Infubinol", "Ketapril", "Placebo"]
ChangesToChart = (PercentChanges[drugs[0]], PercentChanges[drugs[1]], PercentChanges[drugs[2]], PercentChanges[drugs[3]])
DecreasingDrugs = [change for change in ChangesToChart if change < 0]
IncreasingDrugs = [change for change in ChangesToChart if change >= 0]
y_pos = np.arange(len(ChangesToChart))
y_pos_pass = 0
y_pos_fail = np.arange(1, len(IncreasingDrugs) + 1)
fig, ax = plt.subplots(figsize=(16,8))
plt.title("Tumor Change over 45 Day Treatment")
plt.ylabel("% Tumor Volume Change")
plt.xticks(y_pos, drugs, ha='right')
PassingRectangles = plt.bar(y_pos_pass, DecreasingDrugs, align="edge", width=-1, color="green", linewidth="1", edgecolor="black")
FailingRectangles = plt.bar(y_pos_fail, IncreasingDrugs, align="edge", width=-1, color="red", linewidth="1", edgecolor="black")
def autolabel(rects, ax):
(y_bottom, y_top) = ax.get_ylim()
y_height = y_top - y_bottom
for rect in rects:
height = rect.get_height()
if height >= 0:
label_position = y_height * 0.025
elif height < 0:
label_position = -(y_height * 0.075)
ax.text(rect.get_x() + rect.get_width()/2., label_position,
f'{int(height)} %',
ha='center', va='bottom', color="white")
autolabel(PassingRectangles, ax)
autolabel(FailingRectangles, ax)
ax.grid()
ax.set_axisbelow(True)
# -
# ANALYSIS
#
# 1. CAPOMULIN is noticeably more effective than the other drugs charted. Tumor volume declined on Capomulin, whereas the other drugs, Infubinol and Ketapril, saw tumor growth. Capomulin also slowed metastatic spread compared to the placebo and the other drugs. The survival rate was significantly higher on Capomulin and remained much more stable over the course of treatment than with the placebo and the other drugs.
#
# 2. INFUBINOL and KETAPRIL seem fairly ineffective. In most measures they nearly mirror the placebo.
#
# 3. While all of the drugs appear at least somewhat effective at slowing metastatic spread, KETAPRIL loses that effectiveness in later days, even showing slightly worse results than the placebo, though that difference falls within the standard error.
| Pymaceuticals/Pymaceuticals.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import print_function
import warnings
warnings.filterwarnings("ignore")
import logging
import numpy as np
# %matplotlib inline
# -
# ### Setting up random state for reproducibility
RANDOM_STATE = 1234
np.random.seed(RANDOM_STATE)
# ### Setting up logger
# +
# Logging configuration
logger = logging.getLogger(__name__)
handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s %(name)-5s %(levelname)-5s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
# -
# ### Restrict maximum features
#
# We restrict the maximum number of features a.k.a. our inputs to be 5000. So only top 5000 words will be chosen from IMDB dataset. `load_data` automatically does a 50:50 train test split.
max_features = 5000
max_review_length = 300
# +
from keras.datasets import imdb
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
# -
logger.debug('Length of X_train: %(len)s', {'len': len(x_train)})
logger.debug('Length of X_test: %(len)s', {'len': len(x_test)})
# +
from keras.preprocessing import sequence
X_train = sequence.pad_sequences(x_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(x_test, maxlen=max_review_length)
# -
logger.debug('Shape of X_train: %(shape)s', {'shape': X_train.shape})
logger.debug('Shape of X_test: %(shape)s', {'shape': X_test.shape})
# ### Simple LSTM
# +
from keras.models import Sequential
from keras.layers import Dense, LSTM
from keras.layers.embeddings import Embedding
model = Sequential()
model.add(Embedding(max_features, 128, embeddings_initializer='glorot_normal'))
model.add(LSTM(128, dropout = 0.3, recurrent_dropout=0.3))
model.add(Dense(1, activation='sigmoid', kernel_initializer='glorot_normal'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# -
model.summary()
model.fit(X_train, y_train, batch_size=64, epochs=10, validation_data=(X_test, y_test))
| practice/imdb/Simple IMDB Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import scanpy as sc
import matplotlib.pyplot as plt
import numpy as np
#Gzip the star filtered and raw outputs
# ! ls ./DataT1/Solo.out/Gene/filtered/
# +
tcells = sc.read_10x_mtx("./DataT1/Solo.out/Gene/filtered/")
bcells = sc.read_10x_mtx("./DataB1/Solo.out/Gene/filtered/")
adata = tcells.concatenate(bcells)
sc.pl.highest_expr_genes(adata, n_top=20)
# +
sc.pp.filter_cells(adata, min_genes=100)
sc.pp.filter_genes(adata, min_cells=2)
adata.var['mt'] = adata.var_names.str.startswith('MT-') # annotate the group of mitochondrial genes as 'mt'
sc.pp.calculate_qc_metrics(adata, qc_vars=['mt'], percent_top=None, log1p=False, inplace=True)
sc.pl.violin(adata, ['n_genes_by_counts', 'total_counts', 'pct_counts_mt'],
jitter=0.4, multi_panel=True)
# -
sc.pl.scatter(adata, x='total_counts', y='pct_counts_mt')
sc.pl.scatter(adata, x='total_counts', y='n_genes_by_counts')
# +
#filter and normalize
adata = adata[adata.obs.n_genes_by_counts < 2500, :]
adata = adata[adata.obs.pct_counts_mt < 5, :]
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
# -
#select highly variable genes
sc.pp.highly_variable_genes(adata, min_mean=0.0125, max_mean=3, min_disp=0.5)
sc.pl.highly_variable_genes(adata)
# +
adata.raw = adata
adata = adata[:, adata.var.highly_variable]
sc.pp.regress_out(adata, ['total_counts', 'pct_counts_mt'])
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, svd_solver='arpack')
sc.pl.pca(adata, color='CD3D')
# -
sc.pl.pca_variance_ratio(adata, log=True)
sc.pp.neighbors(adata, n_neighbors=10, n_pcs=40)
sc.tl.umap(adata)
sc.pl.umap(adata, color=[ 'total_counts'])
sc.tl.leiden(adata)
sc.pl.umap(adata, color=['leiden'])
sc.pl.umap(adata, color=[ 'CD79A', 'batch'])
sc.tl.tsne(adata)
sc.pl.tsne(adata, color = ['CD3D','CD79A', 'leiden', 'batch'])
adata.obs
sc.tl.rank_genes_groups(adata, groupby='batch')
sc.pl.rank_genes_groups_heatmap(adata)
| TandBcells.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NLP
# Definition (source:Wikipedia)
# Natural language processing (NLP) is a subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.
#
# Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation.
# ## NLTK
# The **Natural Language Toolkit**, or more commonly NLTK, is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for English written in the Python programming language. It was developed by <NAME> and <NAME> in the Department of Computer and Information Science at the University of Pennsylvania.[4] NLTK includes graphical demonstrations and sample data. It is accompanied by a book that explains the underlying concepts behind the language processing tasks supported by the toolkit,[5] plus a cookbook.[6] source: Wikipedia
# Further information
# - https://github.com/nltk/nltk
# - http://www.nltk.org/
# + [markdown] deletable=true editable=true
# ## Input Data##
# -
# - **Token** Definition : Tokens can be paragraphs, sentences, or individual words.
# + deletable=true editable=true
text = "Here is an exercise. You will need to do all preprocessing steps on it."
# + [markdown] deletable=true editable=true
# ## Data Preparation Steps ##
# -
# - **Tokenisation** : Splitting a text into smaller units (tokens).
# + [markdown] deletable=true editable=true
# ### Step 1: Tokenisation ###
# +
# You need to have nltk library in your machine
# + deletable=true editable=true
import nltk
tokenizer = nltk.tokenize.WhitespaceTokenizer()
tokenized_sentene = tokenizer.tokenize(text)
# -
# whitespace (space, tab, newline)
# + [markdown] deletable=true editable=true
# ### Step 2: Stemming ###
#
# -
# **Stem definition**
# -In linguistics, a stem is a part of a word used with slightly different meanings and would depend on the morphology of the language in question. In Athabaskan linguistics, for example, a verb stem is a root that cannot appear on its own, and that carries the tone of the word. Athabaskan verbs typically have two stems in this analysis, each preceded by prefixes.
# + [markdown] deletable=true editable=true
# A stemmer for English operating on the stem cat should identify such strings as cats, catlike, and catty. A stemming algorithm might also reduce the words fishing, fished, and fisher to the stem fish. The stem need not be a word, for example the Porter algorithm reduces, argue, argued, argues, arguing, and argus to the stem argu.
#
# -
# The stem of the verb wait is wait: it is the part that is common to all its inflected variants.
# - wait (infinitive)
# - wait (imperative)
# - waits (present, 3rd person, singular)
# - wait (present, other persons and/or plural)
# - waited (simple past)
# - waited (past participle)
# - waiting (progressive)
# - Source: Wikipedia.
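# The argue/argu example above can be checked directly with NLTK's Porter stemmer (a small sketch, assuming nltk is installed):

```python
from nltk.stem import PorterStemmer

porter = PorterStemmer()
words = ["argue", "argued", "argues", "arguing"]
stems = [porter.stem(w) for w in words]
print(stems)  # all four inflected forms collapse to the same stem
```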
# + deletable=true editable=true
import nltk
from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
for token in tokenized_sentene:
print (stemmer.stem(token))
# + [markdown] deletable=true editable=true
# ### Step 3: Lemmatization ###
# -
# Lemmatisation in linguistics is the process of grouping together the inflected forms of a word so they can be analysed as a single item, identified by the word's lemma, or dictionary form.[1]
#
# In computational linguistics, lemmatisation is the algorithmic process of determining the lemma of a word based on its intended meaning.
# + deletable=true editable=true
import nltk
nltk.download('wordnet') # in MS Azure we had to do it manually. It takes a while.
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
for token in tokenized_sentene:
print (lemmatizer.lemmatize(token))
# -
# **Wordnet** is a lexical database of English.
#
# https://www.nltk.org/howto/wordnet.html
# ## Further information
# - nltk.download('wordnet') We need to download a dictionary
# - https://wordnet.princeton.edu/
# + [markdown] deletable=true editable=true
# ### Stemming vs Lemmatization ?
# -
# Unlike stemming, lemmatisation depends on correctly identifying the intended part of speech and meaning of a word in a sentence, as well as within the larger context surrounding that sentence, such as neighboring sentences or even an entire document. As a result, developing efficient lemmatisation algorithms is an open area of research.
# + [markdown] deletable=true editable=true
# ### more normalization steps? (optional) ###
# + deletable=true editable=true
#lower casing all tokens in the tokenized_sentene
for token in tokenized_sentene:
print (token.lower())
# + deletable=true editable=true
| NLP+datapreparation+with+notes+by+MSLO.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 3.7 Basic Statistical Learning Theory
#
# - Goal: Estimate an unknown probability distribution $D$ on a set $X$
#   from i.i.d. samples $x_{1}, \ldots, x_{n} \in X$
#
# - Introduce a family of distributions $P_{\theta}$ for
#   $\theta \in \Theta$ and try to choose $\theta$ to "match" the
#   samples.
#
# - Maximum Likelihood Estimate: Choose $\theta$ to maximize the
# probability of the samples.
#
# - Example: Let $X=R$, have some samples $x_{1}, \ldots, x_{n}$
# drawn from a distribution D, say $P_{\theta}$ is a Gaussian with
# variance 1, centered at $\theta \in \mathbb{R}=\Theta$, i.e.
# density
# $p_{\theta}(x)=\frac{1}{\sqrt{2 \pi}} e^{-(x-\theta)^{2} / 2}$
#
#
# - Use the samples to find the center $\theta$.
#
# ## 3.7.1 Maximum Likelihood Estimate(MLE)
#
# - Given $\theta \in \Theta$ ($=\mathbb{R}$ for this example), what is
# the probability of the data $\left\{x_{j}\right\}_{j=1}^{n}$?
#
# - Samples are independent: likelihood function (as a function of
#   $\theta$)
#
# $$
# P_{\theta}\left(\left\{x_{j}\right\}_{j=1}^{n}\right)=\prod_{j=1}^{n} p_{\theta}\left(x_{j}\right)
# =\frac{1}{(\sqrt{2 \pi})^{n}}\prod_{j=1}^{n} e^{-\left(x_{j}-\theta\right)^{2} / 2}
# =\frac{1}{(2 \pi)^{n/ 2}} e^{-\sum_{j=1}^{n}\left(x_{j}-\theta\right)^{2} / 2}
# $$
#
# - MLE: Choose $\theta$ to maximize this!
#
# - Often it's useful to consider log likelihood function
# $\log \left(P_{\theta}\left(\left\{x_{j}\right\}_{j=1}^{n}\right)\right)$
#
# $$\begin{aligned} \theta^{*}= \operatorname{argmax} _{\theta \in \Theta}\log \left(P_{\theta}\left(\left\{x_{j}\right\}_{j=1}^{n}\right)\right) \\
# \left(\operatorname{argmin} _{\theta \in \Theta}-\log \left(P_{\theta}\left(\left\{x_{j}\right\}_{j=1}^{n}\right)\right)\right) \end{aligned}
# $$
#
# - For this example:
#
# $$
# \log \left(P_{\theta}\left(\left\{x_{j}\right\}_{j=1}^{n}\right)\right)=-\log (2 \pi) \cdot\left(\frac{n}{2}\right)-\sum_{j=1}^{n} \frac{\left(x_{j}-\theta\right)^{2}}{2}
# $$
#
# $$
# \theta^{*}=\operatorname{argmin}_{\theta \in \mathbb{R}} \sum_{j=1}^{n} \frac{\left(x_{j}-\theta\right)^{2}}{2}.
# $$
#
# $$
# \theta^{*}=\frac{1}{n} \sum_{j=1}^{n} x_{j}.
# $$
#
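# The closed-form result above can be checked numerically: on a simulated Gaussian sample, the $\theta$ that minimizes the negative log-likelihood coincides with the sample mean. This is a small sketch with made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
samples = rng.normal(loc=2.0, scale=1.0, size=1000)

# Negative log-likelihood of N(theta, 1), dropping the additive constant.
def neg_log_likelihood(theta, x):
    return 0.5 * np.sum((x - theta) ** 2)

# Minimize over a fine grid of candidate thetas.
thetas = np.linspace(0.0, 4.0, 4001)
nll = [neg_log_likelihood(t, samples) for t in thetas]
theta_hat = thetas[int(np.argmin(nll))]

print(theta_hat, samples.mean())  # the two agree up to the grid resolution
```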
| _build/jupyter_execute/ch03/ch3_7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as sts
# %matplotlib inline
# # Discrete distribution
# Generate a sample of size 100 from a discrete distribution with six equally likely outcomes.
sample = np.random.choice([1,2,3,4,5,6], 100)
# Now imagine that this sample was obtained not artificially, but by rolling a fair six-sided die 100 times. Let's estimate the probability of each face using frequencies:
# +
# count how many times each face came up:
from collections import Counter
c = Counter(sample)
print("Number of occurrences of each face:")
print(c)
# now divide by the total number of rolls to get the probabilities:
print("Probabilities of each face:")
print({k: v/100.0 for k, v in c.items()})
# -
# This is an estimate of the probability mass function of the discrete distribution.
# # Continuous distribution
# Generate a sample of size 100 from the standard normal distribution (with $\mu=0$ and $\sigma^2=1$):
norm_rv = sts.norm(0, 1)
sample = norm_rv.rvs(100)
sample
# The empirical distribution function of the obtained sample:
# +
x = np.linspace(-4,4,100)
cdf = norm_rv.cdf(x)
plt.plot(x, cdf, label='theoretical CDF')
# to build the ECDF we use the statsmodels library
from statsmodels.distributions.empirical_distribution import ECDF
ecdf = ECDF(sample)
plt.step(ecdf.x, ecdf.y, label='ECDF')
plt.ylabel('f(x)')
plt.xlabel('x')
plt.legend(loc='best')
# -
# Histogram of the sample:
plt.hist(sample, density=True)
plt.ylabel('fraction of samples')
plt.xlabel('$x$')
# Let's try setting the number of histogram bins manually:
plt.hist(sample, bins=3, density=True)
plt.ylabel('fraction of samples')
plt.xlabel('$x$')
plt.hist(sample, bins=10, density=True)
plt.ylabel('fraction of samples')
plt.xlabel('$x$')
# An empirical density estimate built from the sample using kernel smoothing:
# +
# for the plot we use the Pandas library:
df = pd.DataFrame(sample, columns=['KDE'])
ax = df.plot(kind='density')
# on the same plot, draw the theoretical density of the distribution:
x = np.linspace(-4,4,100)
pdf = norm_rv.pdf(x)
plt.plot(x, pdf, label='theoretical pdf', alpha=0.5)
plt.legend()
plt.ylabel('$f(x)$')
plt.xlabel('$x$')
| math&python/4_week/sample_distribution_evaluation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Predict with Model
# ## Init Model
# + language="bash"
#
# pio init-model \
# --model-server-url=http://prediction-scikit.community.pipeline.io \
# --model-type scikit \
# --model-namespace default \
# --model-name scikit_linear \
# --model-version v1 \
# --model-path .
# -
# ## Predict with Model (CLI)
# + language="bash"
#
# pio predict \
# --model-test-request-path ./data/test_request.json
# -
# ## Predict Many
# This is a mini load test to provide instant feedback on relative performance.
# + language="bash"
#
# pio predict_many \
# --model-test-request-path ./data/test_request.json \
# --num-iterations 5
# -
# ### Predict with Model (REST)
# +
import requests
model_type = 'scikit'
model_namespace = 'default'
model_name = 'scikit_linear'
model_version = 'v1'
deploy_url = 'http://prediction-%s.community.pipeline.io/api/v1/model/predict/%s/%s/%s/%s' % (model_type, model_type, model_namespace, model_name, model_version)
with open('./data/test_request.json', 'rb') as fh:
model_input_binary = fh.read()
response = requests.post(url=deploy_url,
data=model_input_binary,
timeout=30)
print("Success! %s" % response.text)
# -
| jupyterhub/notebooks/scikit/scikit_linear/04_PredictModel.ipynb |
# ##### Copyright 2020 The OR-Tools Authors.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# # set_covering2
# <table align="left">
# <td>
# <a href="https://colab.research.google.com/github/google/or-tools/blob/master/examples/notebook/contrib/set_covering2.ipynb"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/colab_32px.png"/>Run in Google Colab</a>
# </td>
# <td>
# <a href="https://github.com/google/or-tools/blob/master/examples/contrib/set_covering2.py"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/github_32px.png"/>View source on GitHub</a>
# </td>
# </table>
# First, you must install [ortools](https://pypi.org/project/ortools/) package in this colab.
# !pip install ortools
# +
# Copyright 2010 <NAME> <EMAIL>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Set covering in Google CP Solver.
Example 9.1-2, page 354ff, from
Taha 'Operations Research - An Introduction'
Minimize the number of security telephones in street
corners on a campus.
Compare with the following models:
* MiniZinc: http://www.hakank.org/minizinc/set_covering2.mzn
* Comet : http://www.hakank.org/comet/set_covering2.co
* ECLiPSe : http://www.hakank.org/eclipse/set_covering2.ecl
* SICStus: http://hakank.org/sicstus/set_covering2.pl
* Gecode: http://hakank.org/gecode/set_covering2.cpp
This model was created by <NAME> (<EMAIL>)
Also see my other Google CP Solver models:
http://www.hakank.org/google_or_tools/
"""
from __future__ import print_function
from ortools.constraint_solver import pywrapcp
# Create the solver.
solver = pywrapcp.Solver("Set covering")
#
# data
#
n = 8 # maximum number of corners
num_streets = 11 # number of connected streets
# corners of each street
# Note: 1-based (handled below)
corner = [[1, 2], [2, 3], [4, 5], [7, 8], [6, 7], [2, 6], [1, 6], [4, 7],
[2, 4], [5, 8], [3, 5]]
#
# declare variables
#
x = [solver.IntVar(0, 1, "x[%i]" % i) for i in range(n)]
#
# constraints
#
# number of telephones, to be minimized
z = solver.Sum(x)
# ensure that all corners are covered
for i in range(num_streets):
# also, convert to 0-based
solver.Add(solver.SumGreaterOrEqual([x[j - 1] for j in corner[i]], 1))
objective = solver.Minimize(z, 1)
#
# solution and search
#
solution = solver.Assignment()
solution.Add(x)
solution.AddObjective(z)
collector = solver.LastSolutionCollector(solution)
solver.Solve(
solver.Phase(x, solver.INT_VAR_DEFAULT, solver.INT_VALUE_DEFAULT),
[collector, objective])
print("z:", collector.ObjectiveValue(0))
print("x:", [collector.Value(0, x[i]) for i in range(n)])
print("failures:", solver.Failures())
print("branches:", solver.Branches())
print("WallTime:", solver.WallTime())
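# For an instance this small, the optimum can be cross-checked by brute force over subsets of corners. This is a sketch independent of OR-Tools (the street data is repeated so the cell runs on its own):

```python
from itertools import combinations

streets = [[1, 2], [2, 3], [4, 5], [7, 8], [6, 7], [2, 6], [1, 6], [4, 7],
           [2, 4], [5, 8], [3, 5]]

def min_cover(streets, n=8):
    # try subsets of corners in increasing size; the first full cover is optimal
    for k in range(1, n + 1):
        for subset in combinations(range(1, n + 1), k):
            chosen = set(subset)
            if all(chosen & set(street) for street in streets):
                return sorted(chosen)

cover = min_cover(streets)
print(len(cover), cover)
```

# Brute force confirms that four telephones suffice and that three cannot cover all eleven streets.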
| examples/notebook/contrib/set_covering2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
field=np.zeros([3,4])
bombas=[[0,0],[0,1]]
columnas=4
renglones=3
for bomba in bombas:
print(bomba)
renglon=bomba[0]
columna=bomba[1]
field[renglon][columna]=-1
for i in range(renglon-1, renglon+2):
for j in range(columna-1, columna+2):
print(i, ',', j)
if 0<=i<renglones and 0<=j<columnas and field[i][j]!=-1:
field[i][j]+=1
field
# +
def busca_minas(bombas,renglones,columnas):
field=np.zeros([renglones,columnas])
for bomba in bombas:
renglon=bomba[0]
columna=bomba[1]
field[renglon][columna]=-1
for i in range(renglon-1, renglon+2):
for j in range(columna-1, columna+2):
if 0<=i<renglones and 0<=j<columnas and field[i][j]!=-1:
field[i][j]+=1
return field
# -
busca_minas(bombas,renglones,columnas)
# +
(field==0)
field.shape[0]
# -
given_i=2
given_j=2
field
import queue
# +
def click(field,renglones,columnas, given_i,given_j):
to_check=queue.Queue()
if field[given_i][given_j]!=0:
return field
else:
field[given_i][given_j]=-2
to_check.put((given_i,given_j))
while not to_check.empty():
(current_i,current_j)=to_check.get()
for i in range((current_i-1),(current_i+2)):
for j in range((current_j-1),(current_j+2)):
if 0<=i<renglones and 0<=j<columnas and field[i][j]==0:
field[i][j]=-2
to_check.put((i,j))
return field
# -
click(field,renglones,columnas,given_i,given_j)
bombas=[[0,0],[3,3]]
renglones=4
columnas=4
field=busca_minas(bombas,renglones,columnas)
field
given_i=1
given_j=2
click(field,renglones,columnas,given_i,given_j)
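# The breadth-first flood fill in `click` can be checked in isolation. This standalone cell repeats the same logic on a tiny field where every zero cell is reachable from the clicked corner:

```python
import numpy as np
import queue

def click(field, renglones, columnas, given_i, given_j):
    # BFS flood fill: mark the connected region of zeros around the click as -2
    to_check = queue.Queue()
    if field[given_i][given_j] != 0:
        return field
    field[given_i][given_j] = -2
    to_check.put((given_i, given_j))
    while not to_check.empty():
        current_i, current_j = to_check.get()
        for i in range(current_i - 1, current_i + 2):
            for j in range(current_j - 1, current_j + 2):
                if 0 <= i < renglones and 0 <= j < columnas and field[i][j] == 0:
                    field[i][j] = -2
                    to_check.put((i, j))
    return field

f = np.zeros((3, 3))
f[1][1] = 1                       # one non-zero cell; all zeros stay connected
out = click(f, 3, 3, 0, 0)
print(out)
```

# All eight zero cells are flooded with -2, while the non-zero cell is left untouched.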
| Udemy/BuscaMinasQueue.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Alternative Way to Create RegionDS
# ## Parse Methylpy DMRfind output
# If you used the [`methylpy DMRfind`](https://github.com/yupenghe/methylpy) function to identify DMRs, you can create a {{ RegionDS }} by running {func}`methylpy_to_region_ds <ALLCools.dmr.parse_methylpy.methylpy_to_region_ds>`
from ALLCools.mcds import RegionDS
from ALLCools.dmr.parse_methylpy import methylpy_to_region_ds
# DMR output of methylpy DMRfind
methylpy_dmr = '../../data/HIPBulk/DMR/snmC_CT/_rms_results_collapsed.tsv'
methylpy_to_region_ds(dmr_path=methylpy_dmr, output_dir='test_HIP_methylpy')
RegionDS.open('test_HIP_methylpy', region_dim='dmr')
# ## Create RegionDS from a BED file
# You can create an empty {{ RegionDS }} with a BED file, with only the region coordinates recorded. You can then perform annotation, motif scan and further analysis using the methods described in the following sections.
#
# The BED file contains three required columns, plus an optional fourth:
# 1. chrom: required
# 2. start: required
# 3. end: required
# 4. region_id: optional, but recommended. If not provided, RegionDS will automatically generate `f"{region_dim}_{i_row}"` as the region_id. region_id must be unique.
#
# You also need to provide a `chrom_size_path` which tells RegionDS the sizes of your chromosomes.
#
# ```{important} About BED Sorting
# Region order matters throughout genomic analysis. The best practice is to sort your BED file according to the `chrom_size_path` you are providing. If your BED file is already sorted, you can set `sort_bed=False` (it is `True` by default).
# ```
# example BED file with region ID
# !head test_from_bed_func.bed
bed_region_ds = RegionDS.from_bed(
bed='test_from_bed_func.bed',
location='test_from_bed_RegionDS',
chrom_size_path='../../data/genome/mm10.main.nochrM.chrom.sizes',
region_dim='bed_region',
# True by default, set to False if bed is already sorted
sort_bed=True)
# the RegionDS is stored at {location}
RegionDS.open('test_from_bed_RegionDS')
| docs/allcools/cluster_level/RegionDS/01b.other_option_to_init_region_ds.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.7.0
# language: julia
# name: julia-1.7
# ---
# # MATH50003 Numerical Analysis: Problem Sheet 5
#
# This problem sheet explores positive definite matrices,
# Cholesky decompositions, matrix norms, and the singular value decomposition.
#
# Questions marked with a ⋆ are meant to be completed without using a computer.
# Problems are denoted A/B/C to indicate their difficulty.
using LinearAlgebra, Plots, Test
# ## 1. Positive definite matrices and Cholesky decompositions
#
#
# **Problem 1.1⋆ (C)** Use the Cholesky decomposition to determine
# which of the following matrices are symmetric positive definite:
# $$
# \begin{bmatrix} 1 & -1 \\
# -1 & 3
# \end{bmatrix}, \begin{bmatrix} 1 & 2 & 2 \\
# 2 & 1 & 2\\
# 2 & 2 & 1
# \end{bmatrix}, \begin{bmatrix} 3 & 2 & 1 \\
# 2 & 4 & 2\\
# 1 & 2 & 5
# \end{bmatrix},
# \begin{bmatrix} 4 & 2 & 2 & 1 \\
# 2 & 4 & 2 & 2\\
# 2 & 2 & 4 & 2 \\
# 1 & 2 & 2 & 4
# \end{bmatrix}
# $$
#
# **SOLUTION**
#
# A matrix is symmetric positive definite (SPD) if and only if it has a Cholesky decomposition, so the task here is really just to compute Cholesky decompositions (by hand). Since our goal is only to tell whether the Cholesky decompositions exist, we do not have to compute the $L_k$'s; we only need to check whether the decomposition process can be carried through to the end.
#
# **Matrix 1**
#
# $$A_0=\begin{bmatrix} 1 & -1 \\
# -1 & 3
# \end{bmatrix}$$
#
# $A_1=3-\frac{(-1)\times(-1)}{1}>0$, so Matrix 1 is SPD.
#
# **Matrix 2**
#
# $$A_0=\begin{bmatrix}
# 1 & 2 & 2 \\
# 2 & 1 & 2 \\
# 2 & 2 & 1
# \end{bmatrix}$$
#
# $$A_1=\begin{bmatrix}
# 1&2\\
# 2&1
# \end{bmatrix}-\begin{bmatrix} 2 \\ 2 \end{bmatrix}\begin{bmatrix} 2 & 2 \end{bmatrix}=
# \begin{bmatrix}
# -3&-2\\
# -2&-3
# \end{bmatrix}$$
#
# $A_1[1,1]<0$, so Matrix 2 is not SPD.
#
# **Matrix 3**
#
# $$A_0=\begin{bmatrix}
# 3 & 2 & 1 \\
# 2 & 4 & 2 \\
# 1 & 2 & 5
# \end{bmatrix}$$
#
# $$A_1=
# \begin{bmatrix}
# 4&2\\
# 2&5
# \end{bmatrix}-\frac{1}{3}\begin{bmatrix} 2 \\ 1 \end{bmatrix}\begin{bmatrix} 2 & 1 \end{bmatrix}=\frac{1}{3}
# \begin{bmatrix}
# 8&4\\
# 4&13
# \end{bmatrix}$$
#
# $3A_2=13-\frac{4\times 4}{8}>0$, so Matrix 3 is SPD.
#
# **Matrix 4**
#
# $$A_0=\begin{bmatrix}
# 4 & 2 & 2 & 1 \\
# 2 & 4 & 2 & 2 \\
# 2 & 2 & 4 & 2 \\
# 1 & 2 & 2 & 4
# \end{bmatrix}$$
#
# $$A_1=\begin{bmatrix}
# 4&2&2\\
# 2&4&2\\
# 2&2&4
# \end{bmatrix}-\frac{1}{4}\begin{bmatrix} 2 \\ 2 \\ 1 \end{bmatrix}\begin{bmatrix} 2 & 2 & 1 \end{bmatrix}=\frac{1}{4}
# \begin{bmatrix}
# 12&4&6\\
# 4&12&6\\
# 6&6&15
# \end{bmatrix}$$
#
# $$4A_2=\begin{bmatrix}
# 12&6\\
# 6&15
# \end{bmatrix}-\frac{1}{12}\begin{bmatrix} 4 \\ 6 \end{bmatrix}\begin{bmatrix} 4 & 6 \end{bmatrix}=\frac{4}{3}
# \begin{bmatrix}
# 8&3\\
# 3&9
# \end{bmatrix}$$
# $3A_3=9-\frac{3\times 3}{8}>0$, so Matrix 4 is SPD.
#
# We can check that we did this correctly by running the following in Julia:
cholesky([1 -1; -1 3])
# +
# this throws an error when uncommented and run because the matrix is not SPD
# cholesky([1 2 2; 2 1 2; 2 2 1])
# -
cholesky([3 2 1; 2 4 2; 1 2 5])
cholesky([4 2 2 1; 2 4 2 2; 2 2 4 2; 1 2 2 4])
# **END**
#
# **Problem 1.2⋆ (B)** Recall that an inner product $⟨𝐱, 𝐲⟩$ on $ℝ^n$
# over the reals $ℝ$ satisfies, for all $𝐱,𝐲,𝐳 ∈ ℝ^n$ and $a,b ∈ ℝ$:
# 1. Symmetry: $⟨𝐱, 𝐲⟩ = ⟨𝐲, 𝐱⟩$
# 2. Linearity: $⟨a𝐱+b𝐲, 𝐳⟩ = a ⟨𝐱, 𝐳⟩+ b⟨𝐲, 𝐳⟩$
# 3. Positive-definite: $⟨𝐱, 𝐱⟩ > 0$ for $𝐱 ≠ 0$
#
# Prove that $⟨𝐱, 𝐲⟩$ is an inner product if and only if
# $$
# ⟨𝐱, 𝐲⟩ = 𝐱^⊤ K 𝐲
# $$
# where $K$ is a symmetric positive definite matrix.
#
# **SOLUTION**
#
# We begin by showing that $⟨𝐱, 𝐲⟩ = 𝐱^⊤ K 𝐲$ with $K$ spd defines an inner product. To do this we simply verify the three properties: For symmetry, we find
# $$ ⟨𝐱, 𝐲⟩ = 𝐱^⊤ K𝐲 = 𝐱 \cdot (K𝐲) = (K𝐲) \cdot 𝐱$$
# $$= (K𝐲)^⊤ 𝐱 = 𝐲^⊤ K^⊤𝐱 = 𝐲^⊤ K 𝐱 = ⟨𝐲, 𝐱⟩.$$
# For linearity:
# $$ ⟨a𝐱+b𝐲, 𝐳⟩ = (a𝐱+b𝐲)^⊤ K𝐳 = (a𝐱^⊤+b𝐲^⊤)K𝐳$$
# $$ = a𝐱^⊤ K𝐳 + b𝐲^⊤ K𝐳 = a⟨𝐱, 𝐳⟩ + b⟨𝐲, 𝐳⟩.$$
# Positive-definiteness of the matrix $K$ immediately yields $⟨𝐱, 𝐱⟩ = 𝐱^⊤ K 𝐱 >0$. Now we turn to the converse result, i.e. that for any inner product $⟨𝐱, 𝐲⟩$ there exists a symmetric positive definite matrix $K$ such that $⟨𝐱, 𝐲⟩ = 𝐱^⊤ K 𝐲$. Define the entries of $K$ by $K_{ij} = ⟨e_i, e_j⟩$ where $e_j$ is the $j$-th standard basis vector. By linearity, any inner product on $ℝ^n$ can be written as $⟨𝐱, 𝐲⟩ = \sum_{k=1}^n \sum_{l=1}^n x_k y_l ⟨e_k, e_l⟩$. But with the elements of $K$ defined as above this is precisely
# $$⟨𝐱, 𝐲⟩ = \sum_{k=1}^n \sum_{l=1}^n x_k K_{kl} y_l = 𝐱^⊤ K 𝐲.$$
# What remains is to show that this $K$ is symmetric positive definite. Symmetry is an immediate consequence of the symmetry of its elements, i.e. $K_{ij} = ⟨e_i, e_j⟩ = ⟨e_j, e_i⟩ = K_{ji}$. Finally, positive definiteness follows from the positive definiteness of the inner product: $𝐱^⊤ K 𝐱 = ⟨𝐱, 𝐱⟩ > 0$ for $𝐱 ≠ 0$.
#
# **END**
#
# **Problem 1.3⋆ (A)** Show that a matrix is symmetric positive definite if and only if it has a Cholesky
# decomposition of the form
# $$
# A = U U^⊤
# $$
# where $U$ is upper triangular with positive entries on the diagonal.
#
# **SOLUTION**
#
#
#
# We didn't discuss this, but note that a symmetric positive definite matrix has strictly positive eigenvalues: for a normalised
# eigenvector we have
# $$
# λ = λ𝐯^⊤ 𝐯 = 𝐯^⊤ K 𝐯 > 0.
# $$
# Thus they are always invertible. Any such matrix has a Cholesky decomposition of the standard form $A = L L^⊤$ where $L$ is lower triangular with positive diagonal. Its inverse then satisfies $A^{-1} = L^{-⊤} L^{-1} = U U^⊤$ with $U = L^{-⊤}$, which is upper triangular with positive diagonal entries (the reciprocals of those of $L$). Since the inverse of a symmetric positive definite matrix is itself symmetric positive definite, every such matrix can be written as the inverse of one, and hence has a decomposition $A = U U^⊤$ (using the Cholesky factors of its inverse).
#
# Alternatively, we can replicate the procedure of computing the Cholesky decomposition beginning in the bottom right
# instead of the top left. Write:
# $$
# A = \begin{bmatrix} K & 𝐯\\
# 𝐯^⊤ & α \end{bmatrix} =
# \underbrace{\begin{bmatrix} I & {𝐯 \over \sqrt{α}} \\
# & \sqrt{α}
# \end{bmatrix}}_{U_1}
# \begin{bmatrix} K - {𝐯 𝐯^⊤ \over α} & \\
# & 1 \end{bmatrix}
# \underbrace{\begin{bmatrix} I \\
# {𝐯^⊤ \over \sqrt{α}} & \sqrt{α}
# \end{bmatrix}}_{U_1^⊤}
# $$
# The induction proceeds as in the lower triangular case.
#
#
# **END**
#
# **Problem 1.4⋆ (A)** Prove that the following $n × n$ matrix is symmetric positive definite
# for any $n$:
# $$
# Δ_n := \begin{bmatrix}
# 2 & -1 \\
# -1 & 2 & -1 \\
# & -1 & 2 & ⋱ \\
# && ⋱ & ⋱ & -1 \\
# &&& -1 & 2
# \end{bmatrix}
# $$
# Deduce its two Cholesky decompositions: $Δ_n = L_n L_n^⊤ = U_n U_n^⊤$ where
# $L_n$ is lower triangular and $U_n$ is upper triangular.
#
# **SOLUTION**
#
# We first prove that $L_n$ is lower bidiagonal by induction. Let $A$ be a tridiagonal SPD matrix:
#
# $$A=\left[\begin{array}{c|ccc}
# \alpha & \beta & 0 & \cdots \\\hline
# \beta & & & \\
# 0 & & K & \\
# \vdots & & & \\
# \end{array}\right]$$
# where $K$ is again tridiagonal. Denote the Cholesky decomposition of $A$ by $A=LL^\top$. Recalling the proof of the *Theorem (Cholesky & SPD)* from the lecture, we can write
#
# $$L=\left[\begin{array}{c|ccc}
# \sqrt{\alpha} & & 0 & \\\hline
# \frac{\beta}{\sqrt{\alpha}} & & & \\
# 0 & & \tilde{L} & \\
# \vdots & & &
# \end{array}\right]$$
# where $\tilde{L}$ satisfies $\tilde{L}\tilde{L}^\top=K-\begin{bmatrix} \beta^2/\alpha & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}$ which is a tridiagonal matrix smaller than $A$. Since $L$ is lower bidiagonal when $A$ is $1\times 1$, we know by induction that $L$ is lower bidiagonal for $A$ of any size.
#
# Once we know that $L_n$ is lower bidiagonal, we can write it as
# $$L_n=\begin{bmatrix}
# a_1 & \\
# b_1 & \ddots & \\
# & \ddots & \ddots & \\
# & & b_{n-1} & a_n
# \end{bmatrix}$$
# which we substitute into $\Delta_n=L_nL_n^\top$ to get
# $$\left\{\begin{array}{l}
# a_1=\sqrt{2}\\
# a_kb_k=-1\\
# b_{k}^2+a_{k+1}^2=2
# \end{array}\right.$$
# for $k=1,\dots,n-1$.
#
# Now we solve the recurrence. Substituting the second equation into the third one:
# $$\frac{1}{a_k^2}+a_{k+1}^2=2.$$
# Let $c_k=a_k^2$:
# $$\frac{1}{c_k}+c_{k+1}=2.$$
# Consider the fixed point of this recurrence: $\frac{1}{x}+x=2\Longrightarrow(x-1)^2=0$ has the double root $x=1$, which suggests considering $\frac{1}{c_k-1}$. In fact, the recurrence is equivalent to
# $$\frac{1}{c_{k+1}-1}=\frac{1}{c_k-1}+1.$$
# Recalling that $c_1=2$, we know that $\frac{1}{c_k-1}=k$. As a result, $c_k=(k+1)/k$, $a_k=\sqrt{(k+1)/k}$ and $b_k=-\sqrt{k/(k+1)}$, hence we know $L_n$.
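# The closed forms $a_k=\sqrt{(k+1)/k}$ and $b_k=-\sqrt{k/(k+1)}$ are easy to sanity-check numerically against the built-in `cholesky` (a quick check, reusing `LinearAlgebra` and `Test` loaded above):

```julia
n = 10
Δ = SymTridiagonal(2ones(n), -ones(n - 1))
L = cholesky(Matrix(Δ)).L
@test L[5, 5] ≈ sqrt(6 / 5)    # a₅ = √(6/5)
@test L[6, 5] ≈ -sqrt(5 / 6)   # b₅ = -√(5/6)
```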
#
# We can apply the same process to $U_n$, but this is a special case since flipping $\Delta_n$ horizontally and vertically gives itself: $P\Delta_nP^\top=\Delta_n$ where
# $$
# P=\begin{bmatrix} & & 1 \\ & ⋰ & \\ 1 & & \end{bmatrix}
# $$
# is the permutation that reverses a vector.
# So we can also flip $L_n$ to get $U_n$:
# $$
# U_n=PL_nP
# $$
# so that $U_n U_n^⊤ = P L_n P P L_n^⊤ P = P Δ_n P = Δ_n$.
#
# Alternatively one can use the procedure from Problem 1.3. That is, write:
# $$
# Δ_n = \begin{bmatrix} Δ_{n-1} & -𝐞_n \\
# -𝐞_n^⊤ & 2 \end{bmatrix} =
# \underbrace{\begin{bmatrix} I & {-𝐞_n \over \sqrt{2}} \\
# & \sqrt{2}
# \end{bmatrix}}_{U_1}
# \begin{bmatrix} Δ_{n-1} - {𝐞_n 𝐞_n^⊤ \over 2} & \\
# & 1 \end{bmatrix}
# \underbrace{\begin{bmatrix} I \\
# {-𝐞_n^⊤ \over \sqrt{2}} & \sqrt{2}
# \end{bmatrix}}_{U_1^⊤}
# $$
# The induction then proceeds as above.
#
# **END**
#
# **Problem 1.5 (B)** `SymTridiagonal(dv, ev)` is a type for representing symmetric tridiagonal
# matrices (that is, `SymTridiagonal(dv, ev) == Tridiagonal(ev, dv, ev)`). Complete the following
# implementation of `cholesky` to return a `Bidiagonal` cholesky factor in $O(n)$ operations,
# and check your result
# compared to your solution of Problem 1.3 for `n = 1_000_000`.
# +
import LinearAlgebra: cholesky
# return a Bidiagonal L such that L*L' == A (up to machine precision)
cholesky(A::SymTridiagonal) = cholesky!(copy(A))
# return a Bidiagonal L such that L*L' == A (up to machine precision)
# You are allowed to change A
function cholesky!(A::SymTridiagonal)
d = A.dv # diagonal entries of A
u = A.ev # sub/super-diagonal entries of A
T = float(eltype(A)) # return type, make float in case A has Ints
n = length(d)
ld = zeros(T, n) # diagonal entries of L
ll = zeros(T, n-1) # sub-diagonal entries of L
## SOLUTION
ld[1]=sqrt(d[1])
for k=1:n-1
ll[k]=u[k]/ld[k]
ld[k+1]=sqrt(d[k+1]-ll[k]^2)
end
## END
Bidiagonal(ld, ll, :L)
end
n = 1000
A = SymTridiagonal(2*ones(n),-ones(n-1))
L = cholesky(A)
@test L ≈ cholesky(Matrix(A)).L
# -
# ## 2. Matrix norms
#
# **Problem 2.1⋆ (B)** Prove the following:
# $$
# \begin{align*}
# \|A\|_∞ &= \max_k \|A[k,:]\|_1 \\
# \|A\|_{1 → ∞} &= \|\hbox{vec}(A)\|_∞ = \max_{kj} |a_{kj}|
# \end{align*}
# $$
#
# **SOLUTION**
#
# **Step 1. upper bounds**
#
# $$\|A\mathbf{x}\|_\infty=\max_k\left|\sum_ja_{kj}x_j\right|\le\max_k\sum_j|a_{kj}x_j|\le
# \begin{cases}
# \max\limits_j|x_j|\max\limits_k\sum\limits_j|a_{kj}|=\|\mathbf{x}\|_\infty\max\limits_k\|A[k,:]\|_1\\
# \max\limits_{kj}|a_{kj}|\sum\limits_j|x_j|=\|\mathbf{x}\|_1\|\text{vec}(A)\|_\infty
# \end{cases}
# $$
#
# **Step 2.1. meeting the upper bound ($\|A\|_{1 → ∞}$)**
#
# Let $a_{lm}$ be the entry of $A$ with maximum absolute value. Let $\mathbf{x}=\mathbf{e}_m$, then
# $$\|A\mathbf{x}\|_\infty=\max_k\left|\sum_ja_{kj}x_j\right|=\max_k|a_{km}|=|a_{lm}|$$
# and
# $$\|\mathbf{x}\|_1\|\text{vec}(A)\|_\infty=1\cdot|a_{lm}|.$$
#
#
# **Step 2.2. meeting the upper bound ($\|A\|_∞$)**
#
# Let $A[n,:]$ be the row of $A$ with maximum 1-norm. Let $\mathbf{x}=\left(\text{sign}.(A[n,:])\right)^\top$, then $\left|\sum_ja_{kj}x_j\right|\begin{cases} =\sum_j|a_{kj}|=\|A[k,:]\|_1 & k=n \\ \le\sum_j|a_{kj}|=\|A[k,:]\|_1 & k\ne n \end{cases}$, so
# $$\|A\mathbf{x}\|_\infty=\max_k\left|\sum_ja_{kj}x_j\right|=\max\limits_k\|A[k,:]\|_1$$
# while
# $$\|\mathbf{x}\|_\infty\max\limits_k\|A[k,:]\|_1=1\cdot\max\limits_k\|A[k,:]\|_1.$$
#
#
# **Conclusion**
#
# In both cases, equality can hold, so the upper bounds are actually maxima.
#
# **END**
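# The first identity is easy to spot-check numerically with `opnorm` (a quick check, not a proof):

```julia
A = randn(5, 5)
@test opnorm(A, Inf) ≈ maximum(sum(abs.(A), dims=2))  # ‖A‖∞ = max row 1-norm
```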
#
#
# **Problem 2.2⋆ (B)** For a rank-1 matrix $A = 𝐱 𝐲^⊤$ prove that
# $$
# \|A \|_2 = \|𝐱\|_2 \|𝐲\|_2.
# $$
# Hint: use the Cauchy–Schwarz inequality.
#
# **SOLUTION**
#
# $$\|A\mathbf{z}\|_2=\|\mathbf{x}\mathbf{y}^\top\mathbf{z}\|_2=|\mathbf{y}^\top\mathbf{z}|\|\mathbf{x}\|_2,$$
# so it remains to prove that $\|\mathbf{y}\|_2=\sup_{\mathbf{z}}\frac{|\mathbf{y}^\top\mathbf{z}|}{\|\mathbf{z}\|_2}$.
#
# By the Cauchy–Schwarz inequality,
# $$|\mathbf{y}^\top\mathbf{z}|=|(\mathbf{y},\mathbf{z})|\le\|\mathbf{y}\|_2\|\mathbf{z}\|_2$$
# with the two sides being equal when $\mathbf{y}$ and $\mathbf{z}$ are linearly dependent, in which case the bound is tight.
#
# **END**
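# A quick numerical spot-check of the rank-1 identity (`opnorm` with no second argument is the 2-norm):

```julia
x, y = randn(4), randn(3)
@test opnorm(x * y') ≈ norm(x) * norm(y)
```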
#
# **Problem 2.3⋆ (B)** Show for any orthogonal matrix $Q ∈ ℝ^m$ and
# matrix $A ∈ ℝ^{m × n}$ that
# $$
# \|Q A\|_F = \|A\|_F
# $$
# by first showing that $\|A \|_F = \sqrt{\hbox{tr}(A^⊤ A)}$ using the
# _trace_ of an $m × m$ matrix:
# $$
# \hbox{tr}(A) = a_{11} + a_{22} + ⋯ + a_{mm}.
# $$
#
# **SOLUTION**
#
# $$\text{tr}(A^\top A)=\sum_k(A^\top A)[k,k]=\sum_k\sum_jA^\top[k,j]A[j,k]=\sum_k\sum_jA[j,k]^2=\|A\|_F^2.$$
#
# On the other hand,
# $$\text{tr}(A^\top A)=\text{tr}(A^\top Q^\top QA)=\text{tr}((QA)^\top (QA))=\|QA\|_F^2,$$
# so $\|Q A\|_F = \|A\|_F$.
#
# **END**
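# A quick numerical spot-check, building an orthogonal $Q$ from a QR decomposition:

```julia
A = randn(4, 3)
Q = Matrix(qr(randn(4, 4)).Q)   # orthogonal factor of a random matrix
@test norm(Q * A) ≈ norm(A)     # norm(⋅) on a matrix is the Frobenius norm
```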
#
# ## 3. Singular value decomposition
#
# **Problem 3.1⋆ (B)** Show that $\|A \|_2 ≤ \|A\|_F ≤ \sqrt{r} \|A \|_2$ where
# $r$ is the rank of $A$.
#
# **SOLUTION**
#
# From Problem 2.3 use the fact that $\|A \|_F = \sqrt{\hbox{tr}(A^⊤ A)}$, where $A\in \mathbb{R}^{m\times n}$.
#
# Hence,
#
# $$\|A \|_F^2 = \hbox{tr}(A^⊤ A) = \sigma_1^2 +...+\sigma_n^2$$
#
# where $\sigma_1\ge...\ge \sigma_n \ge 0$ are the singular values of $A$ and $\sigma_i^2$ are the eigenvalues of $A^⊤ A$.
#
# Knowing that $\|A\|_2^2 = \sigma_1^2$ we have $\|A \|_2^2 ≤ \|A\|_F^2$.
#
# Moreover, if the rank of $A$ is $r$ then $\sigma_{r+1}=...=\sigma_n=0$, and since $\sigma_1\ge...\ge \sigma_n \ge 0$ we have
#
# $\|A\|_F^2 = \sigma_1^2 +...+\sigma_n^2 =\sigma_1^2 +...+\sigma_r^2 \le r \sigma_1^2 =r \|A \|_2^2$
#
# Hence,
# $$
# \|A \|_2 ≤ \|A\|_F ≤ \sqrt{r} \|A \|_2.
# $$
#
# **END**
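# Both inequalities can be spot-checked numerically:

```julia
A = randn(6, 4)
@test opnorm(A) ≤ norm(A) ≤ sqrt(rank(A)) * opnorm(A)
```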
#
# **Problem 3.2 (A)** Consider functions sampled on a $(n+1) × (n+1)$ 2D grid
# $(x_k,y_j) = (k/n, j/n)$ where $k,j = 0,…,n$.
# For $n = 100$, what is the lowest rank $r$ such that
# the best rank-$r$ approximation to the samples
# that is accurate to within $10^{-5}$ accuracy for the following functions:
# $$
# (x + 2y)^2, \cos(\sin x {\rm e}^y), 1/(x + y + 1), \hbox{sign}(x-y)
# $$
# For which examples does the answer change when $n = 1000$?
#
# **SOLUTION**
# +
#define functions
f₁(x,y) = (x + 2 * y) ^ 2
f₂(x,y) = cos(sin(x)*exp(y))
f₃(x,y) = 1/(x + y + 1)
f₄(x,y) = sign(x-y)
#define n and error goal
error = 1e-5
# helper function to sample f on the (n+1)×(n+1) grid (x_k, y_j) = (k/n, j/n)
function samples(f, n)
    x = y = range(0, 1; length=n+1)
    return f.(x, y')
end
# +
function find_min_rank(f, n, ϵ)
F = samples(f,n)
U,σ,V = svd(F)
for k=1:n
Σ_k = Diagonal(σ[1:k])
U_k = U[:,1:k]
V_k = V[:,1:k]
if norm(U_k * Σ_k * V_k' - F) <= ϵ
return k
end
end
end
n=100
println("Error ≤ ", error, " with n = ", n)
println("Rank for f₁ = ", find_min_rank(f₁, n, error))
println("Rank for f₂ = ", find_min_rank(f₂, n, error))
println("Rank for f₃ = ", find_min_rank(f₃, n, error))
println("Rank for f₄ = ", find_min_rank(f₄, n, error))
n=1000
println("Error ≤ ", error, " with n = ", n)
println("Rank for f₁ = ", find_min_rank(f₁, n, error))
println("Rank for f₂ = ", find_min_rank(f₂, n, error))
println("Rank for f₃ = ", find_min_rank(f₃, n, error))
println("Rank for f₄ = ", find_min_rank(f₄, n, error))
# -
# **END**
#
# **Problem 3.3⋆ (B)** Define the _pseudo-inverse_:
# $$
# A^+ := V Σ^{-1} U^⊤.
# $$
# Show that it satisfies the _Moore-Penrose conditions_:
# 1. $A A^+ A = A$
# 2. $A^+ A A^+ = A^+$
# 3. $(A A^+)^⊤ = A A^+$ and $(A^+ A)^⊤ = A^+ A$
#
# **SOLUTION**
#
# Let $A=UΣ V^⊤$ be the (reduced) SVD and $A^+ := V Σ^{-1} U^⊤$. Note that $U^⊤U = I$ and $V^⊤ V = I$ (the columns of $U$ and $V$ are orthonormal).
#
# 1. We have
# $$A A^+ A = U \Sigma V^⊤ V \Sigma^{-1} U^⊤ U \Sigma V^⊤ = U \Sigma \Sigma^{-1} \Sigma V^⊤ = U\Sigma V^⊤ = A$$
#
# 2. Moreover,
# $$A^+ A A^+ = V \Sigma^{-1}U^⊤ U \Sigma V^⊤ V \Sigma^{-1} U^⊤ = V \Sigma^{-1}\Sigma \Sigma^{-1} U^⊤ = V \Sigma^{-1} U^⊤ = A^+$$
#
# 3. Furthermore, $AA^+ = U \Sigma V^⊤ V \Sigma^{-1} U^⊤ = U U^⊤$, thus
#
# $$(AA^+)^⊤ = (UU^⊤)^⊤ = UU^⊤ = AA^+$$
#
# Moreover, $A^+A = V \Sigma^{-1} U^⊤ U \Sigma V^⊤ = VV^⊤$, hence,
#
# $$(A^+A)^⊤ = (VV^⊤)^⊤=VV^⊤=A^+A$$
#
# **END**
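# The three conditions can be spot-checked numerically with `pinv`, which computes $A^+$ via the SVD:

```julia
A = randn(5, 3)
A⁺ = pinv(A)
@test A * A⁺ * A ≈ A
@test A⁺ * A * A⁺ ≈ A⁺
@test (A * A⁺)' ≈ A * A⁺
@test (A⁺ * A)' ≈ A⁺ * A
```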
#
# **Problem 3.4⋆ (A)** Show for $A ∈ ℝ^{m × n}$ with $m ≥ n$ and full rank
# that $𝐱 = A^+ 𝐛$ is the least squares solution, i.e., minimises $\| A 𝐱 - 𝐛 \|_2$.
# Hint: extend $U$ in the SVD to be a square orthogonal matrix.
#
# **SOLUTION**
#
# The proof mimics that of the QR decomposition. Write $A = U Σ V^⊤$ and let
# $$
# Ũ = \begin{bmatrix}U & K \end{bmatrix}
# $$
# so that $Ũ$ is orthogonal. We use the fact orthogonal matrices do not change norms:
# $$
# \begin{align*}
# \|A 𝐱 - 𝐛 \|_2^2 &= \|U Σ V^⊤ 𝐱 - 𝐛 \|_2^2 = \|Ũ^⊤ U Σ V^⊤ 𝐱 - Ũ^⊤ 𝐛 \|_2^2 = \|\underbrace{\begin{bmatrix}I_n \\ O \end{bmatrix}}_{∈ ℝ^{m × n}} Σ V^⊤ 𝐱 - \begin{bmatrix} U^⊤ \\ K^⊤ \end{bmatrix} 𝐛 \|_2^2 \\
# &= \|Σ V^⊤ 𝐱 - U^⊤ 𝐛 \|_2^2 + \|K^⊤ 𝐛\|_2^2
# \end{align*}
# $$
# The second term is independent of $𝐱$. The first term is minimised when zero:
# $$
# \|Σ V^⊤ 𝐱 - U^⊤ 𝐛 \|_2 =\|Σ V^⊤ V Σ^{-1} U^⊤ 𝐛 - U^⊤ 𝐛 \|_2 = 0
# $$
#
# **END**
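# A quick numerical check that $A^+𝐛$ agrees with the least squares solution returned by backslash:

```julia
A, b = randn(6, 3), randn(6)
@test pinv(A) * b ≈ A \ b
```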
#
# **Problem 3.5⋆ (A)**
# If $A ∈ ℝ^{m × n}$ has a non-empty kernel there are multiple solutions to the least
# squares problem as
# we can add any element of the kernel. Show that $𝐱 = A^+ 𝐛$ gives the least squares solution
# such that $\| 𝐱 \|_2$ is minimised.
#
# **SOLUTION**
#
# Let $𝐱 = A^+𝐛$ and let $𝐱 + 𝐤$ be another solution, i.e.
# $$
# \|A𝐱 - b \| = \|A (𝐱 +𝐤) - b \|
# $$
# Following the previous part we deduce:
# $$
# Σ V^⊤ (𝐱 +𝐤) = U^⊤ 𝐛 \Rightarrow V^⊤ 𝐤 = 0
# $$
# As $𝐱 = V 𝐜$ lies in the span of the columns of $V$ we have
# $𝐱^⊤ 𝐤 = 0$. Thus
# $$
# \| 𝐱 + 𝐤 \|^2 = \| 𝐱 \|^2 + \| 𝐤 \|^2
# $$
# which is minimised when $𝐤 = 0$.
#
# **END**
| sheets/week5s.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="162d91e9"
# # Let's check how the ensemble does with one and six landmarks.
# + id="75869bdd"
# first get all X and y data from the all_points_folds
import numpy as np
import pickle, random
import cv2, os
X_all, y = [], [] # X needs to be changed for each fold, y doesn't need to be changed for each fold.
for file in os.listdir("all_points_folds"):
with open(f"all_points_folds/{file}", 'rb') as f:
X_y = pickle.load(f)
X_all.append(X_y[0])
y.append(X_y[1])
def specify_data(dataset, landmarks):
    """
    dataset should be a 4D array of shape (5, _, 90, 126);
    returns the (x, y, z, both hands) columns for the given landmarks.
    """
    ds = np.concatenate([X_i for X_i in dataset])
    cols = []
    for landmark in landmarks:
        cols += (np.array([0, 21, 42, 63, 84, 105]) + landmark).tolist()
    return ds[:, :, tuple(cols)]
X_six = specify_data(X_all, [0, 4, 8, 12, 16, 20]).reshape(5, 20, 90, 36)
X_one = specify_data(X_all, [0]).reshape(5, 20, 90, 6)
import random
def shuffle(X, y, seed = None):
    if seed is None:
        seed = random.randrange(0, 100)
    print(f"using seed {seed}")
np.random.seed(seed)
new_X = np.concatenate([X_i for X_i in X])
new_y = np.concatenate([y_i for y_i in y])
N = np.random.permutation(new_X.shape[0])
new_X = new_X[N]
new_y = new_y[N]
new_X = new_X.reshape(5, 20, 90, new_X.shape[-1])
new_y = new_y.reshape(5, 20)
return new_X, new_y
# + id="255dab3c"
SEED = 65
X_all, y = shuffle(X_all, y, seed = SEED)
X_six, _ = shuffle(X_six, y, seed = SEED)
X_one, _ = shuffle(X_one, y, seed = SEED)
# + id="a66fe962"
import sklearn.metrics
def ensemble_val_acc(models, X_tests, y_test):
preds = np.zeros_like(y_test)
for model, X_test in zip(models, X_tests):
preds += model.predict(X_test).flatten()
preds = preds / len(models)
    return (np.round(preds) == y_test).mean(), sklearn.metrics.precision_score(y_test, np.round(preds)), sklearn.metrics.recall_score(y_test, np.round(preds))
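# The averaging logic above can be exercised in isolation with plain NumPy; the prediction vectors below are hypothetical stand-ins for two models' sigmoid outputs:

```python
import numpy as np

preds_a = np.array([0.9, 0.2, 0.6, 0.4])   # model 1 sigmoid outputs (made up)
preds_b = np.array([0.7, 0.4, 0.2, 0.1])   # model 2 sigmoid outputs (made up)
y_true = np.array([1, 0, 1, 0])

avg = (preds_a + preds_b) / 2    # ensemble average, as in ensemble_val_acc
labels = np.round(avg)           # threshold at 0.5
accuracy = (labels == y_true).mean()
print(avg, labels, accuracy)
```

# Note the third sample: each model alone is on a different side of 0.5, and the ensemble average decides the label.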
# + id="0390341a"
import time, glob
def cross_validate(make_model_one, make_model_six, X_one, X_six, y, epochs = 75, callbacks = [], verbose = 1):
val_accuracies, precisions, recalls = [], [], []
for i in range(5):
model_one = make_model_one()
model_six = make_model_six()
# define global labels
y_train = np.concatenate([y_j for j, y_j in enumerate(y) if i != j])
y_test = y[i]
        # first train the one-landmark model
X_test_one = X_one[i]
X_train_one = np.concatenate([X_j for j, X_j in enumerate(X_one) if i != j])
try:
os.remove("best_aso.h5")
except Exception as e:
pass
# train
history_one = model_one.fit(X_train_one, y_train, validation_data = (X_test_one, y_test), epochs = epochs, callbacks = callbacks, verbose = verbose)
try:
model_one.load_weights("best_aso.h5")
except Exception as e:
pass
        print("\nevaluation on one:")
model_one.evaluate(X_test_one, y_test)
time.sleep(1)
# next train six
X_test_six = X_six[i]
X_train_six = np.concatenate([X_j for j, X_j in enumerate(X_six) if i != j])
try:
os.remove("best_aso.h5")
except Exception as e:
pass
# train
history_six = model_six.fit(X_train_six, y_train, validation_data = (X_test_six, y_test), epochs = epochs, callbacks = callbacks, verbose = verbose)
try:
model_six.load_weights("best_aso.h5")
except Exception as e:
pass
print("\nevaluation on six:")
model_six.evaluate(X_test_six, y_test)
time.sleep(1)
# YAY! WE HAVE TRAINED THE MODEL ON EVERYTHING FOR THIS FOLD
# get the aggregate validation accuracy on everything
models = [model_one, model_six]
val_acc, pres, recall = ensemble_val_acc(models, [X_test_one, X_test_six], y_test)
val_accuracies.append(val_acc)
precisions.append(pres)
recalls.append(recall)
print(f"ensemble validation accuracy : {(val_acc, pres, recall)}")
time.sleep(2)
for video in glob.glob("*mov"):
print(f"video is {video}, {predict_on_video(models, video)}")
print(f"average : {sum(val_accuracies) / len(val_accuracies), sum(precisions) / len(precisions), sum(recalls) / len(recalls)}")
# + id="7c02d84d"
# create the functions to create models
import tensorflow as tf
def make_model_one():
model = tf.keras.models.Sequential([
tf.keras.layers.LSTM(32, return_sequences=False),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(1, activation = "sigmoid")
])
model.compile(loss = "binary_crossentropy", optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), metrics=['accuracy', tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
return model
def make_model_six():
model = tf.keras.models.Sequential([
tf.keras.layers.LSTM(64, return_sequences=False, input_shape = (None, 36)),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(1, activation = 'sigmoid')
])
model.compile(loss = "binary_crossentropy", optimizer = tf.keras.optimizers.Adam(learning_rate=0.01), metrics = ['accuracy', tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
return model
# + colab={"base_uri": "https://localhost:8080/"} id="4da2ec83" outputId="23d4a2a5-a369-4966-b529-0fefac2595d2"
import mediapipe as mp
from tqdm import tqdm
def hand_locations_general(frame, min_detection_confidence = 0.5, min_tracking_confidence = 0.5):
"""give all landmarks"""
hands = mp.solutions.hands.Hands(min_detection_confidence=min_detection_confidence, min_tracking_confidence=min_tracking_confidence) # MAKE SURE THIS IS ALL GOOD
results = hands.process(frame.astype('uint8'))
X_locations = [0] * 42
Y_locations = [0] * 42
Z_locations = [0] * 42
if results.multi_hand_landmarks:
x = y = z = 0
for hand, hand_landmark in enumerate(results.multi_hand_landmarks):
for i in range(0, 21):
landmark = hand_landmark.landmark[i]
X_locations[x] = landmark.x
Y_locations[y] = landmark.y
Z_locations[z] = landmark.z
                x += 1; y += 1; z += 1
hands.close()
return np.concatenate([X_locations, Y_locations, Z_locations])
# create a function to pad your videos
def pad(locations, maxlen = 90, padding = "post", truncating = "post"):
new_locations = locations.tolist()
empty_row = np.zeros((1, 126))
for i, video in tqdm(enumerate(new_locations)):
if len(video) < maxlen:
for new_row in range(maxlen - len(video)):
if padding == "post":
new_locations[i] = np.array(new_locations[i])
new_locations[i] = np.concatenate([new_locations[i], empty_row])
if padding == "pre":
new_locations[i] = np.array(new_locations[i])
new_locations[i] = np.concatenate([empty_row, new_locations[i]])
if len(video) > maxlen:
if truncating == "post":
new_locations[i] = new_locations[i][:maxlen]
elif truncating == "pre":
new_locations[i] = new_locations[i][len(video) - maxlen : ]
return np.array(new_locations)
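# The row-by-row padding above can be condensed considerably. The sketch below
# is a hypothetical helper (`pad_videos` is not part of the original pipeline)
# showing the same post/pre pad-and-truncate idea with NumPy:

```python
import numpy as np

def pad_videos(videos, maxlen=90, padding="post", truncating="post", n_features=126):
    # Bring every video to exactly `maxlen` frames of `n_features` values each.
    padded = []
    for video in videos:
        video = np.asarray(video, dtype=float).reshape(-1, n_features)
        if len(video) < maxlen:
            filler = np.zeros((maxlen - len(video), n_features))
            video = np.concatenate([video, filler] if padding == "post" else [filler, video])
        elif len(video) > maxlen:
            video = video[:maxlen] if truncating == "post" else video[-maxlen:]
        padded.append(video)
    return np.array(padded)
```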
# + id="4qDuUuABWpJr"
model_to_arrangements = {1 : [0], 2 : [0, 4, 8, 12, 16, 20]}
# + id="ms0s-2miWlb0"
# function to ensemble predict on frames.
def ensemble_predict(models, X_test):
preds = np.zeros((X_test.shape[0], 1))
for model, landmarks in tqdm(zip(models, model_to_arrangements.values())):
test_data = specify_data(np.array([X_test]), landmarks)
preds += model.predict(test_data)
preds = preds / len(models)
return preds.flatten()
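# The averaging step inside `ensemble_predict` can be exercised in isolation
# with stand-in models. `DummyModel` and `average_predictions` below are
# hypothetical names, and nothing is assumed about `specify_data` (defined
# elsewhere in the project):

```python
import numpy as np

class DummyModel:
    """Stand-in for a Keras model: always predicts a fixed probability."""
    def __init__(self, p):
        self.p = p
    def predict(self, X):
        return np.full((X.shape[0], 1), self.p)

def average_predictions(models, X):
    # Sum each model's (n_samples, 1) output, then divide by the model count.
    preds = np.zeros((X.shape[0], 1))
    for model in models:
        preds += model.predict(X)
    return (preds / len(models)).flatten()
```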
# + id="4de0a3df"
# create a function to then predict on videos
import cv2, numpy as np
def predict_on_video(models, path):
LOCATIONS = []
cap = cv2.VideoCapture(path)
while cap.isOpened():
print("read frame")
        success, frame = cap.read()
        if not success:
            break
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
LOCATIONS.append(hand_locations_general(frame))
print('read all locations')
LOCATIONS = np.array([LOCATIONS])
LOCATIONS = pad(LOCATIONS)
print("padded")
return ensemble_predict(models, LOCATIONS)
# + colab={"base_uri": "https://localhost:8080/"} id="a19a5cd5" outputId="dee9a319-5d55-4c2e-f9a3-4ee41ad95b7f"
checkpoint = tf.keras.callbacks.ModelCheckpoint("best_aso.h5", save_best_only=True, monitor = "val_accuracy")
early_stopping = tf.keras.callbacks.EarlyStopping(monitor = "val_accuracy", patience=10)
cross_validate(make_model_one, make_model_six, X_one, X_six, y, epochs = 75, callbacks=[checkpoint, early_stopping], verbose = 0)
# + id="472d64c2"
# Let's see whether this runs with Jupyter now re-installed.
# -
| ensemble_code/six_one_ensemble.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !ls
# !cat Q59.csv | wc -l
# !head -10 Q59.csv
import pandas as pd
pres = pd.read_csv('./Q59.csv', sep=',', encoding = "ISO-8859-1")
print(type(pres))
pres.head(15)
presDec17 = pd.read_csv('./T201712PDPI+BNFT.CSV', sep=',', encoding = "ISO-8859-1")
print(type(presDec17))
presDec17.head(15)
presDec17.columns
presDec17.head()
presDec17.index
row_0 = presDec17.iloc[0]
type(row_0)
print(row_0)
row_0[' SHA']
Q59 = presDec17[' SHA'] == 'Q59'
print(type(Q59))
Q59.head(15)
presDec17[' SHA'].describe()
presDec17['NIC '].describe()
presDec17[' SHA'].unique()
presDec17['PRACTICE'].unique()
presDec17.corr()
presDec17['BNF CODE'].describe()
presDec17['PRACTICE'].describe()
Q59 = presDec17[' SHA'] == 'Q59'
presQ59 = presDec17[Q59]
presQ59.head()
presQ59['PCT'].unique()
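# The `Q59` filter above is a boolean mask. The same pattern on a tiny toy
# frame (toy column names, not the real prescription schema):

```python
import pandas as pd

df = pd.DataFrame({"SHA": ["Q59", "Q60", "Q59"], "ITEMS": [3, 5, 7]})
mask = df["SHA"] == "Q59"   # boolean Series: one True/False per row
subset = df[mask]           # keeps only the rows where the mask is True
print(subset["ITEMS"].sum())  # prints 10
```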
| src/DataExploration/PrescriptionDataExploration-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Serving models with MLFlow.
#
# A quick example with PyTorch and the Iris dataset.
# +
import torch
import pickle
import sklearn
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
iris = pd.read_csv('https://raw.githubusercontent.com/HackSpacePeru/Datasets_intro_Data_Science/master/Iris_people/iris.csv')
# -
# ##### What do we call artifacts?
#
# Artifacts are simply useful, instantiated objects that we will use in our prediction process. For example, an already-trained model, a dictionary that encodes some categorical variables, or PyTorch's famous state_dict (the model weights). In this case, we need a LabelEncoder to continue with our model's pipeline.
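# As a minimal, framework-free illustration of the artifact idea (a toy label
# map standing in for scikit-learn's LabelEncoder, and an in-memory buffer
# standing in for a file on disk), any picklable object can be saved and
# restored the same way:

```python
import io
import pickle

# A toy "encoder" artifact: maps class names to integer ids.
label_map = {"setosa": 0, "versicolor": 1, "virginica": 2}

buffer = io.BytesIO()  # stands in for a file on disk
pickle.dump(label_map, buffer, protocol=pickle.HIGHEST_PROTOCOL)

buffer.seek(0)
restored = pickle.load(buffer)
inverse = {v: k for k, v in restored.items()}  # decode ids back to names
print(inverse[1])  # prints "versicolor"
```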
# +
L_encoder = LabelEncoder()
L_encoder_path = 'artifacts/label_encoder.pkl'
iris['tipo_flor'] = L_encoder.fit_transform(iris['tipo_flor'])
# We have to save this artifact for later inference.
with open(L_encoder_path, 'wb') as handle:
pickle.dump(L_encoder, handle, protocol = pickle.HIGHEST_PROTOCOL)
# -
# Now, for training, we generate X_train and X_test with a random seed of 10.
X_train, X_test, Y_train, Y_test = train_test_split(iris.drop('tipo_flor',axis=1),
iris['tipo_flor'], random_state=10)
# Since we will use PyTorch, we convert our dataframe to tensors.
X_train = torch.from_numpy(X_train.values).float()
X_test = torch.from_numpy(X_test.values).float()
y_train = torch.from_numpy(Y_train.values).view(1,-1)[0].type(torch.LongTensor)
y_test = torch.from_numpy(Y_test.values).view(1,-1)[0].type(torch.LongTensor)
# #### Where do we build the model?
#
# The model is defined in a separate script called modelo.py. This lets us import the model class, with whatever neural-network architecture we decided to use. Why do this? Because it is a more formal way to store our model for a later move into production.
# +
from modelo import ModeloIris
from torch.optim import Adam
import torch.nn as nn
model = ModeloIris()
# Initialize the model and set up its training configuration:
optimizer = Adam(model.parameters(), lr = 0.01)
fn_perdida = nn.NLLLoss()
# +
# Train the model.
epochs = 1000
for epoch in range(epochs):
optimizer.zero_grad()
y_pred = model(X_train)
loss = fn_perdida(y_pred, y_train)
loss.backward()
optimizer.step()
if epoch % 100 == 0:
print(f'Epoch: {epoch} loss: {loss.item()}')
# +
import mlflow.pytorch
mlflow_pytorch_path = 'models/iris_mlflow_pytorch'
# -
# Get the Anaconda environment we are currently using:
conda_env = mlflow.pytorch.get_default_conda_env()
print(conda_env)
# Save the model
mlflow.pytorch.save_model(model, mlflow_pytorch_path, conda_env = conda_env)
# #### At this point we already have our model. What remains is to load it back from the saved files.
# +
loaded_model = mlflow.pytorch.load_model(mlflow_pytorch_path)
# Take an example to run inference on.
ejemplo = torch.tensor([[3.0, 2.5, 1.8, 0.8]])
prediccion = torch.argmax(loaded_model(ejemplo), dim = 1)
print(prediccion)
# -
# Also save the state_dict
state_dict_path = 'state_dict.pt'
torch.save(model.state_dict(), f'{state_dict_path}')
# #### We still need to load the artifacts and write the class that uses them for prediction
artifacts = {
'state_dict' : state_dict_path,
'label_encoder' : L_encoder_path
}
class ModelWrapper(mlflow.pyfunc.PythonModel):
    # The context object is provided by MLFlow, and contains
    # the artifacts specified above.
def load_context(self, context):
import torch
import pickle
from modelo import ModeloIris
        # Initialize the model and load the state_dict
self.model = ModeloIris()
self.model.load_state_dict(torch.load(context.artifacts['state_dict']))
        # Load the label encoder.
with open(context.artifacts['label_encoder'],'rb') as handle:
self.label_encoder = pickle.load(handle)
def predict(self, context, model_input):
        example = torch.tensor(model_input.values)
        pred = torch.argmax(self.model(example.float()), axis=1)
        # Map the integer labels back to their string names.
pred_labels = self.label_encoder.inverse_transform(pred)
return pred_labels
# #### Now, why do we need that class? Because we are going to save the model with MLFlow together with that class, so that when we load the model again we only have to call .predict() and get the finished pipeline.
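# Stripped of MLFlow, the wrapper pattern boils down to an object that loads
# its artifacts once and then answers every call through a single predict().
# A framework-free sketch with hypothetical names (`MiniWrapper` is not part
# of the pyfunc API):

```python
class MiniWrapper:
    """Toy version of the load-once / predict-many pattern used by pyfunc."""
    def load_context(self, artifacts):
        # MLflow passes a context object here; a plain dict plays that role.
        self.inverse_labels = {v: k for k, v in artifacts["label_map"].items()}
    def predict(self, class_ids):
        # Decode integer class ids back to their string names.
        return [self.inverse_labels[i] for i in class_ids]

wrapper = MiniWrapper()
wrapper.load_context({"label_map": {"setosa": 0, "versicolor": 1}})
print(wrapper.predict([1, 0]))  # prints ['versicolor', 'setosa']
```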
conda_env = {
'channels': ['defaults', 'pytorch'],
'dependencies': [
f'python=3.8.5',
{
'pip':[
f'mlflow=={mlflow.__version__}',
f'scikit-learn=={sklearn.__version__}',
f'torch=={torch.__version__}',
'cloudpickle==1.6.0'
]
}
],
'name': 'mlflow-env-iris'
}
# +
mlflow_pyfunc_model_path = "models/iris_model_pyfunc"
# Package the complete model!
mlflow.pyfunc.save_model(path = mlflow_pyfunc_model_path,
python_model=ModelWrapper(),
artifacts = artifacts,
conda_env=conda_env,
code_path=['modelo.py','meta_data.txt'])
# +
loaded_model = mlflow.pyfunc.load_model(mlflow_pyfunc_model_path)
test_ = loaded_model.predict(pd.DataFrame([[5.1,3.5,1.4,.02]]))
print(test_)
| MLFlow Training/MlFlow Model and PyFunc.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.7.2
# language: julia
# name: julia-1.7
# ---
# + [markdown] slideshow={"slide_type": "slide"} tags=[]
# # <span style="color:#2c061f"> Macro 318: Tutorial #1 </span>
#
# <br>
#
# ## <span style="color:#374045"> Introduction to Programming with Julia </span>
#
#
# #### <span style="color:#374045"> Lecturer: </span> <span style="color:#d89216"> <br> <NAME> (<EMAIL>) </span>
# + [markdown] slideshow={"slide_type": "slide"}
# # Introduction
#
# Your interest in computational work will determine how much of this guide you read.
#
# - **Very interested** -- Read the entire notebook (even if you don't understand everything yet)
# - *Somewhat interested* -- Read through the notebook and scan through the `optional` sections
# - No interest -- Only read the slides (compulsory) so that you can complete the tutorials
#
# Try not to let the amount of information overwhelm you.
#
# The notebook is meant to be used as a **reference**
#
# The slides presented here are the **compulsory** part of the tutorial.
# + [markdown] slideshow={"slide_type": "slide"}
# # Tutorial topics
#
# Here are some of the focus areas
#
# 1. **Fundamentals of programming with Julia**
# 2. Data, math and stats
# 3. Optimisation and the consumer problem
# 4. Solow model
# + [markdown] slideshow={"slide_type": "slide"}
# # Running the notebooks
#
# It is preferred that you install the programs on your computer.
#
# This requires that you install Anaconda and Julia.
#
# [Here](https://julia.quantecon.org/getting_started_julia/getting_started.html) is a link that explains the installation process.
#
# However, if you are finding it difficult to get things working you may try the other options.
#
# I will make a brief video on how to install Anaconda and link it to Julia.
#
# + [markdown] slideshow={"slide_type": "slide"}
# # Tutorial outline
#
# 1. Variables
# 2. Data structures
# 3. Control flow
# 4. Functions
# 5. Visualisation
# 6. Type system and generic programming (`optional`)
# + [markdown] slideshow={"slide_type": "slide"}
# # Your first code!
#
# Before we start our discussion, let us try and run our first Julia program.
#
# For those that have done programming before, this normally entails writing a piece of code that gives us the output ``Hello World!``.
#
# In Julia this is super easy to do.
#
# + slideshow={"slide_type": "fragment"}
println("Hello World!")
# + [markdown] slideshow={"slide_type": "slide"}
# # Introduction to packages
#
# Julia has many useful packages. If we want to include a specific package then we can do the following,
#
# `import Pkg`
#
# `Pkg.add("PackageName")`
#
# `using PackageName`
# + slideshow={"slide_type": "fragment"}
import Pkg
Pkg.add("DataFrames") # Package for working with data
Pkg.add("GLM") # Package for linear regression
Pkg.add("LinearAlgebra") # Package for linear algebra applications
Pkg.add("Plots") # Package for plotting
Pkg.add("TypeTree") # Package that shows type hierarchy in tree form
using Base: show_supertypes
using DataFrames
using GLM
using LinearAlgebra
using Plots
using TypeTree
# + [markdown] slideshow={"slide_type": "slide"}
# # Variables and types
#
# After having successfully written your `Hello World!` code in Julia, a natural place to continue your journey is with variables.
#
# A variable in a programming language is going to be some sort of symbol that we assign some value.
# + slideshow={"slide_type": "fragment"}
x = 2 # We assign the value of 2 to the variable x
# + slideshow={"slide_type": "fragment"}
typeof(x) # Command to find the type of the x variable
# + [markdown] slideshow={"slide_type": "fragment"}
# We see that the type of the variable is `Int64`.
#
# What is an `Int64`?
# + [markdown] slideshow={"slide_type": "slide"}
# # Variables and types
#
# We can now work with `x` as if it represents the value of `2`.
#
# Since an integer is a number, we can perform basic mathematical operations.
# + slideshow={"slide_type": "fragment"}
y = x + 2
# + slideshow={"slide_type": "fragment"}
typeof(y)
# + [markdown] slideshow={"slide_type": "slide"}
# # Variables and types
#
# We can reassign the variable `x` to another value, even with another type.
# + slideshow={"slide_type": "fragment"}
x = 3.1345
# + slideshow={"slide_type": "fragment"}
typeof(x)
# + [markdown] slideshow={"slide_type": "fragment"}
# Now `x` is a floating point number.
#
# What is a floating point number?
#
# This is an *approximation* to a decimal (or real) number.
# + [markdown] slideshow={"slide_type": "slide"}
# # Primitive data types
#
# There are several important data types that are at the core of computing. Some of these include,
#
# - **Booleans**: true and false
# - **Integers**: -3, -2, -1, 0, 1, 2, 3, etc.
# - **Floating point numbers**: 3.14, 2.95, 1.0, etc.
# - **Strings**: "abc", "cat", "hello there"
# - **Characters**: 'f', 'c', 'u'
# + [markdown] slideshow={"slide_type": "slide"}
# # Arithmetic operators
#
# We can perform basic arithmetic operations.
#
# Operators perform operations.
#
# These common operators are called the **arithmetic operators**.
# + [markdown] slideshow={"slide_type": "fragment"}
# | Expression | Name | Description |
# | :-- | :-- | :-- |
# | `x + y` | binary plus | performs addition |
# | `x - y` | binary minus | performs subtraction |
# | `x * y` | times | performs multiplication |
# | `x / y` | divide | performs division |
# | `x ÷ y` | integer divide | `x / y`, truncated to an integer |
# | `x \ y` | inverse divide | equivalent to `y / x` |
# | `x ^ y` | power | raises `x` to the `y`th power |
# | `x % y` | remainder | equivalent to `rem(x,y)` |
# + [markdown] slideshow={"slide_type": "slide"}
# # Arithmetic operators
#
# Here are some simple examples that utilise these arithmetic operators.
# + slideshow={"slide_type": "fragment"}
x = 2; y = 10;
# + slideshow={"slide_type": "fragment"}
x * y
# + slideshow={"slide_type": "fragment"}
x ^ y
# + slideshow={"slide_type": "fragment"}
y / x # Note that division converts integers to floats
# + slideshow={"slide_type": "fragment"}
2x - 3y
# + slideshow={"slide_type": "fragment"}
x // y
# + [markdown] slideshow={"slide_type": "slide"}
# # Augmentation operators
#
# Augmentation operators will be especially important in the section on control flow.
# + slideshow={"slide_type": "fragment"}
x += 1 # same as x = x + 1
# + slideshow={"slide_type": "fragment"}
x *= 2 # same as x = x * 2
# + slideshow={"slide_type": "fragment"}
x /= 2 # same as x = x / 2
# + [markdown] slideshow={"slide_type": "slide"}
# # Comparison operators
#
# These operators help to generate true and false values for our conditional statements
#
# | Operator | Name |
# | :-- | :-- |
# | `==` | equality |
# | `!=`, `≠` | inequality |
# | `<` | less than |
# | `<=`, `≤` | less than or equal to |
# | `>` | greater than |
# | `>=`, `≥` | greater than or equal to |
#
# + slideshow={"slide_type": "fragment"}
x = 3; y = 2;
# + slideshow={"slide_type": "fragment"}
x < y # If x is less than y this will return true, otherwise false
# + slideshow={"slide_type": "fragment"}
x != y # If x is not equal to y then this will return true, otherwise false
# + slideshow={"slide_type": "fragment"}
x == y # If x is equal to y this will return true, otherwise false
# + [markdown] slideshow={"slide_type": "slide"}
# # Data Structures
#
# There are several types of containers, such as arrays and tuples, in Julia.
#
# These containers can hold data of different types.
#
# We explore some of the most commonly used containers here.
#
# Note that we have both **mutable** and **immutable** containers in Julia.
# + [markdown] slideshow={"slide_type": "slide"}
# # Tuples
#
# Let us start with one of the basic types of containers, which are referred to as tuples.
#
# These containers are immutable, ordered and of a fixed length.
# + slideshow={"slide_type": "fragment"}
x = (10, 20, 30)
# + slideshow={"slide_type": "fragment"}
x[1] # First element of the tuple
# + slideshow={"slide_type": "fragment"}
a, b, c = x # With this method of unpacking we have that a = 10, b = 20, c = 30
# + slideshow={"slide_type": "fragment"}
a
# + [markdown] slideshow={"slide_type": "slide"}
# ## Arrays
#
# One of the most important containers in Julia are arrays.
#
# You will use tuples and arrays quite frequently in your code.
#
# An array is a multi-dimensional grid of values.
#
# Vectors and matrices, such as those from mathematics, are types of arrays in Julia.
# + [markdown] slideshow={"slide_type": "slide"}
# # One dimensional arrays (`vectors`)
#
# A vector is a one-dimensional array.
#
# A column (row) vector is similar to a column (row) of values that you would have in Excel.
#
# **Column** vector is a list of values separated with commas.
#
# **Row** vector is a list of values separated with spaces.
# + slideshow={"slide_type": "fragment"}
col_vector = [1, "abc"] # example of a column vector (one dimensional array)
# + slideshow={"slide_type": "fragment"}
not_col_vector = 1, "abc" # this creates a tuple! Remember the closing brackets for a vector.
# + slideshow={"slide_type": "fragment"}
row_vector = [1 "abc"] # example of a row vector (note: in Julia this is really a 1×2 matrix, a two-dimensional array)
# + [markdown] slideshow={"slide_type": "slide"}
# # One dimensional arrays (`vectors`)
#
# The big difference between a tuple and array is that we can change the values of the array.
#
# Below is an example where we change the first component of the array.
#
# This means that arrays are **mutable**.
# + slideshow={"slide_type": "fragment"}
col_vector[1] = "def"
# + slideshow={"slide_type": "fragment"}
col_vector
# + [markdown] slideshow={"slide_type": "fragment"}
# You can now see that the first element of the vector has changed from `1` to `def`.
# + [markdown] slideshow={"slide_type": "slide"}
# # Mutating with `push!()`
#
# We can use the `push!()` function to add values to this vector.
#
# This grows the size of the vector.
#
# You might notice the `!` operator after `push`.
#
# This exclamation mark doesn't do anything particularly special in Julia.
#
# It is a coding convention to let the user know that the input is going to be altered / changed.
#
# In our case it lets us know that the vector is going to be mutated.
#
# Let us illustrate with an example.
# + [markdown] slideshow={"slide_type": "slide"}
# # Mutating with `push!()`
#
# Let us illustrate how the `push!()` functions works with an example.
# + slideshow={"slide_type": "fragment"}
push!(col_vector, "hij") # col_vector is mutated here: it grows from a 2-element vector to a 3-element vector.
# + slideshow={"slide_type": "fragment"}
push!(col_vector, "klm") # We can repeat and it will keep adding to the vector.
# + [markdown] slideshow={"slide_type": "slide"}
# # Creating arrays
#
# One easy way to generate an array is using a **sequence**.
#
# We show multiple ways below to do this.
#
# **Note**: If you want to store the values in an array, you need to use the `collect()` function.
# + slideshow={"slide_type": "fragment"}
seq_x = 1:10:21 # This is a sequence that starts at one and ends at 21 with an increment of 10.
# + slideshow={"slide_type": "fragment"}
collect(seq_x) # Collects the values for the sequence into a vector
# + slideshow={"slide_type": "fragment"}
seq_y = range(1, stop = 21, length = 3) # Another way to do the same as above
# + slideshow={"slide_type": "fragment"}
seq_z = LinRange(1, 21, 3) # Still another way to do the same as above
# + [markdown] slideshow={"slide_type": "slide"}
# # Creating arrays
#
# For creation of arrays we frequently use functions like `zeros()`, `ones()`, `fill()` and `rand()`.
# + slideshow={"slide_type": "fragment"}
zeros(3) # Creates a column vector of zeros with length 3.
# + slideshow={"slide_type": "fragment"}
zeros(Int, 3) # We can specify that the zeros need to be of type `Int64`.
# + slideshow={"slide_type": "fragment"}
ones(3) # Same thing as `zeros()`, but fills with ones
# + [markdown] slideshow={"slide_type": "slide"}
# # Creating arrays
#
# For creation of arrays we frequently use functions like `zeros()`, `ones()`, `fill()` and `rand()`.
# + slideshow={"slide_type": "fragment"}
fill(2, 3) # Fill a three element column with the value of `2`
# + slideshow={"slide_type": "fragment"}
rand(3) # Values drawn uniformly (with equal probability) from the interval [0, 1)
# + slideshow={"slide_type": "fragment"}
randn(3) # Values chosen randomly from Normal distribution
# + [markdown] slideshow={"slide_type": "slide"}
# # Two dimensional arrays (`matrices`)
#
# We can also create matrices (two dimensional arrays) in Julia.
#
# A matrix has both rows and columns.
#
# This would be like a table in Excel with rows and columns.
#
# To create a matrix we separate rows by spaces and columns by semicolons.
# + slideshow={"slide_type": "fragment"}
matrix_x = [1 2 3; 4 5 6; 7 8 9] # Rows separated by spaces, columns separated by semicolons.
# + slideshow={"slide_type": "fragment"}
matrix_y = [1 2 3;
4 5 6;
7 8 9] # Another way to write the matrix above
# + [markdown] slideshow={"slide_type": "slide"}
# # Two dimensional arrays (`matrices`)
#
# Those of you who have done statistics or mathematics know how important matrices are.
#
# Matrices are a fundamental part of linear algebra.
#
# Linear algebra is a super important area in mathematics (one of my favourites).
#
# There is an `optional` section on linear algebra in the full tutorial notes.
#
# If you want to do **Honours in Economics**, then I suggest getting comfortable with linear algebra!
# + [markdown] slideshow={"slide_type": "slide"}
# # Creating two dimensional arrays
#
# We can also create two dimensional arrays with `zeros()`, `ones()`, `fill()` and `rand()`.
#
# + slideshow={"slide_type": "fragment"}
zeros(3, 3)
# + slideshow={"slide_type": "fragment"}
ones(3, 3)
# + slideshow={"slide_type": "fragment"}
randn(3, 3)
# + [markdown] slideshow={"slide_type": "slide"}
# # Indexing
#
# Remember from before that we can extract value from containers.
#
# We will need this concept for the data section in the next tutorial.
# + slideshow={"slide_type": "fragment"}
col_vector[1] # Extract the first value
# + slideshow={"slide_type": "fragment"}
matrix_x[2, 2] # Retrieve the value in the second row and second column of the matrix
# + slideshow={"slide_type": "fragment"}
col_vector[2:end] # Extracts all the values from the second to the end of the vector
# + slideshow={"slide_type": "fragment"}
col_vector[:, 1] # Provides all the values of the first column
# + [markdown] slideshow={"slide_type": "slide"}
# # Broadcasting
#
# One important topic that we need to think about is `broadcasting`.
#
# Suppose you have a particular vector, such as `[1, 2, 3]`.
#
# You might want to apply a certain operation to each of the elements in that vector (elementwise).
#
# Perhaps you want to find out what the `sin` of each of those values are independently?
# + [markdown] slideshow={"slide_type": "slide"}
# # Broadcasting
#
# How would you do this? Well you could write you own loop that does this.
#
# We will cover loops in a bit, so don't be concerned if you don't understand.
# + slideshow={"slide_type": "fragment"}
x_vec = [1.0, 2.0, 3.0];
# Loop for elementwise sin operation on `x` vector.
y_vec = similar(x_vec)
for (i, x) in enumerate(x_vec)
y_vec[i] = sin(x)
end
y_vec # This now gives you sin of the `x` vector
# + [markdown] slideshow={"slide_type": "fragment"}
# Writing a loop like this for a simple operation seems wasteful.
# + [markdown] slideshow={"slide_type": "slide"}
# # Broadcasting
#
# Instead of writing the loop, we can use the `dot operator`.
#
# The `dot operator` broadcasts the function across the elements of the array.
#
# Let us see some examples.
# + slideshow={"slide_type": "fragment"}
sin.(x_vec) # Notice the dot operator. What happens without the dot operator?
# + slideshow={"slide_type": "fragment"}
(x_vec).^2
# -
(x_vec) .* (x_vec)
| slides/tut1_slides.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
class Ui_MainWindow(object):
def setup_ui(self, MainWindow, path):
if not MainWindow.objectName():
MainWindow.setObjectName(u"NeuNet")
MainWindow.resize(960, 720)
MainWindow.setMinimumSize(QSize(960, 720))
icon = QIcon()
icon.addFile(path + u"/icons/main_window_icon.png", QSize(24,24), QIcon.Normal, QIcon.Off)
MainWindow.setWindowIcon(icon)
MainWindow.setStyleSheet(u"background-color: rgb(255, 255, 255);\n""selection-color: rgb(255, 255, 255);\n""selection-background-color: rgb(114, 159, 207);")
MainWindow.setCentralWidget(None)
self.action_new = QAction(MainWindow)
self.action_new.setObjectName(u"action_new")
self.action_open = QAction(MainWindow)
self.action_open.setObjectName(u"action_open")
self.action_save = QAction(MainWindow)
self.action_save.setObjectName(u"action_save")
self.action_save_as = QAction(MainWindow)
self.action_save_as.setObjectName(u"action_save_as")
self.action_close = QAction(MainWindow)
self.action_close.setObjectName(u"action_close")
self.action_quit = QAction(MainWindow)
self.action_quit.setObjectName(u"action_quit")
self.action_set_dataset = QAction(MainWindow)
self.action_set_dataset.setObjectName(u"action_set_dataset")
self.action_set_weights = QAction(MainWindow)
self.action_set_weights.setObjectName(u"action_set_weights")
self.action_set_threshold = QAction(MainWindow)
self.action_set_threshold.setObjectName(u"action_set_threshold")
self.action_set_learning_ratio = QAction(MainWindow)
self.action_set_learning_ratio.setObjectName(u"action_set_learning_ratio")
self.action_set_iterations_number = QAction(MainWindow)
self.action_set_iterations_number.setObjectName(u"action_set_iterations_number")
self.action_set_err_max = QAction(MainWindow)
self.action_set_err_max.setObjectName(u"action_set_err_max")
self.action_set_graphic_resolution = QAction(MainWindow)
self.action_set_graphic_resolution.setObjectName(u"action_set_graphic_resolution")
self.action_set_graphic_interval = QAction(MainWindow)
self.action_set_graphic_interval.setObjectName(u"action_set_graphic_interval")
self.action_preview_dataset = QAction(MainWindow)
self.action_preview_dataset.setObjectName(u"action_preview_dataset")
self.action_preview_dataset.setCheckable(True)
self.action_preview_dataset.setChecked(True)
self.action_training_abstract = QAction(MainWindow)
self.action_training_abstract.setObjectName(u"action_training_abstract")
self.action_training_abstract.setCheckable(True)
self.action_manual = QAction(MainWindow)
self.action_manual.setObjectName(u"action_manual")
self.action_about_NeuNet = QAction(MainWindow)
self.action_about_NeuNet.setObjectName(u"action_about_NeuNet")
self.action_about_developer = QAction(MainWindow)
self.action_about_developer.setObjectName(u"action_about_developer")
self.menubar = QMenuBar(MainWindow)
self.menubar.setObjectName(u"menubar")
self.menubar.setGeometry(QRect(0, 0, 960, 22))
self.menu_file = QMenu(self.menubar)
self.menu_file.setObjectName(u"menu_file")
self.menu_set = QMenu(self.menubar)
self.menu_set.setObjectName(u"menu_set")
self.menu_view = QMenu(self.menubar)
self.menu_view.setObjectName(u"menu_view")
self.menu_help = QMenu(self.menubar)
self.menu_help.setObjectName(u"menu_help")
MainWindow.setMenuBar(self.menubar)
self.menubar.addAction(self.menu_file.menuAction())
self.menubar.addAction(self.menu_set.menuAction())
self.menubar.addAction(self.menu_view.menuAction())
self.menubar.addAction(self.menu_help.menuAction())
self.menu_file.addAction(self.action_new)
self.menu_file.addAction(self.action_open)
self.menu_file.addSeparator()
self.menu_file.addAction(self.action_save)
self.menu_file.addAction(self.action_save_as)
self.menu_file.addSeparator()
self.menu_file.addAction(self.action_close)
self.menu_file.addAction(self.action_quit)
self.menu_set.addAction(self.action_set_dataset)
self.menu_set.addAction(self.action_set_weights)
self.menu_set.addAction(self.action_set_threshold)
self.menu_set.addSeparator()
self.menu_set.addAction(self.action_set_learning_ratio)
self.menu_set.addAction(self.action_set_iterations_number)
self.menu_set.addAction(self.action_set_err_max)
self.menu_set.addSeparator()
self.menu_set.addAction(self.action_set_graphic_resolution)
self.menu_set.addAction(self.action_set_graphic_interval)
self.menu_view.addAction(self.action_preview_dataset)
self.menu_view.addSeparator()
self.menu_view.addAction(self.action_training_abstract)
self.menu_help.addAction(self.action_manual)
self.menu_help.addSeparator()
self.menu_help.addAction(self.action_about_NeuNet)
self.menu_help.addAction(self.action_about_developer)
self.statusbar = QStatusBar(MainWindow)
self.statusbar.setObjectName(u"statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslate_initial_ui(MainWindow)
QMetaObject.connectSlotsByName(MainWindow)
def create_q_objects(self, MainWindow):
self.centralwidget = QWidget(MainWindow)
self.centralwidget.setObjectName(u"centralwidget")
self.centralwidget.setEnabled(True)
self.centralwidget.setStyleSheet(u"background-color: rgb(189, 189, 189);")
self.gridLayout_2 = QGridLayout(self.centralwidget)
self.gridLayout_2.setObjectName(u"gridLayout_2")
self.tabWidget = QTabWidget(self.centralwidget)
self.tabWidget.setObjectName(u"tabWidget")
font = QFont()
font.setBold(False)
font.setUnderline(True)
font.setWeight(50)
self.tabWidget.setFont(font)
self.tabWidget.setStyleSheet(u"background-color: rgb(255, 255, 255);")
self.tab_config = QWidget()
self.tab_config.setObjectName(u"tab_config")
font1 = QFont()
font1.setBold(False)
font1.setWeight(50)
self.tab_config.setFont(font1)
self.gridLayout_3 = QGridLayout(self.tab_config)
self.gridLayout_3.setObjectName(u"gridLayout_3")
self.pushButton_execute = QPushButton(self.tab_config)
self.pushButton_execute.setObjectName(u"pushButton_execute")
self.pushButton_execute.setEnabled(False)
self.gridLayout_3.addWidget(self.pushButton_execute, 1, 1, 1, 2)
self.gridLayout_values_config = QGridLayout()
self.gridLayout_values_config.setObjectName(u"gridLayout_values_config")
self.gridLayout_values_config.setSizeConstraint(QLayout.SetDefaultConstraint)
self.gridLayout_values_config.setContentsMargins(0, -1, -1, -1)
self.groupBox_parameters_config = QGroupBox(self.tab_config)
self.groupBox_parameters_config.setObjectName(u"groupBox_parameters_config")
self.groupBox_parameters_config.setMaximumSize(QSize(16777215, 131))
font2 = QFont()
font2.setBold(True)
font2.setWeight(75)
self.groupBox_parameters_config.setFont(font2)
self.groupBox_parameters_config.setStyleSheet(u"background-color: rgb(245, 245, 245);\n"
"border-color: rgb(224, 224, 224);")
self.groupBox_parameters_config.setAlignment(Qt.AlignCenter)
self.gridLayout_5 = QGridLayout(self.groupBox_parameters_config)
self.gridLayout_5.setObjectName(u"gridLayout_5")
self.label_iterations = QLabel(self.groupBox_parameters_config)
self.label_iterations.setObjectName(u"label_iterations")
font3 = QFont()
font3.setItalic(True)
self.label_iterations.setFont(font3)
self.label_iterations.setAlignment(Qt.AlignCenter)
self.gridLayout_5.addWidget(self.label_iterations, 2, 0, 1, 1)
self.lineEdit_err_max = QLineEdit(self.groupBox_parameters_config)
self.lineEdit_err_max.setObjectName(u"lineEdit_err_max")
self.gridLayout_5.addWidget(self.lineEdit_err_max, 3, 1, 1, 1)
self.lineEdit_iterations = QLineEdit(self.groupBox_parameters_config)
self.lineEdit_iterations.setObjectName(u"lineEdit_iterations")
self.gridLayout_5.addWidget(self.lineEdit_iterations, 2, 1, 1, 1)
self.label_err_max = QLabel(self.groupBox_parameters_config)
self.label_err_max.setObjectName(u"label_err_max")
self.label_err_max.setFont(font3)
self.label_err_max.setAlignment(Qt.AlignCenter)
self.gridLayout_5.addWidget(self.label_err_max, 3, 0, 1, 1)
self.label_learning_ratio = QLabel(self.groupBox_parameters_config)
self.label_learning_ratio.setObjectName(u"label_learning_ratio")
self.label_learning_ratio.setMinimumSize(QSize(0, 0))
self.label_learning_ratio.setFont(font3)
self.label_learning_ratio.setAlignment(Qt.AlignCenter)
self.gridLayout_5.addWidget(self.label_learning_ratio, 1, 0, 1, 1)
self.lineEdit_learning_ratio = QLineEdit(self.groupBox_parameters_config)
self.lineEdit_learning_ratio.setObjectName(u"lineEdit_learning_ratio")
self.gridLayout_5.addWidget(self.lineEdit_learning_ratio, 1, 1, 1, 1)
self.gridLayout_values_config.addWidget(self.groupBox_parameters_config, 0, 0, 1, 1)
self.groupBox_graphics_config = QGroupBox(self.tab_config)
self.groupBox_graphics_config.setObjectName(u"groupBox_graphics_config")
self.groupBox_graphics_config.setMaximumSize(QSize(16777215, 100))
self.groupBox_graphics_config.setFont(font2)
self.groupBox_graphics_config.setStyleSheet(u"background-color: rgb(245, 245, 245);\n"
"border-color: rgb(224, 224, 224);")
self.groupBox_graphics_config.setAlignment(Qt.AlignCenter)
self.gridLayout_6 = QGridLayout(self.groupBox_graphics_config)
self.gridLayout_6.setObjectName(u"gridLayout_6")
self.label_res = QLabel(self.groupBox_graphics_config)
self.label_res.setObjectName(u"label_res")
self.label_res.setFont(font3)
self.label_res.setAlignment(Qt.AlignCenter)
self.gridLayout_6.addWidget(self.label_res, 0, 0, 1, 1)
self.lineEdit_res = QLineEdit(self.groupBox_graphics_config)
self.lineEdit_res.setObjectName(u"lineEdit_res")
self.gridLayout_6.addWidget(self.lineEdit_res, 0, 1, 1, 1)
self.label_interval_show = QLabel(self.groupBox_graphics_config)
self.label_interval_show.setObjectName(u"label_interval_show")
self.label_interval_show.setFont(font3)
self.label_interval_show.setAlignment(Qt.AlignCenter)
self.gridLayout_6.addWidget(self.label_interval_show, 1, 0, 1, 1)
self.lineEdit_interval_show = QLineEdit(self.groupBox_graphics_config)
self.lineEdit_interval_show.setObjectName(u"lineEdit_interval_show")
self.gridLayout_6.addWidget(self.lineEdit_interval_show, 1, 1, 1, 1)
self.gridLayout_values_config.addWidget(self.groupBox_graphics_config, 1, 0, 1, 1)
self.groupBox_topology_config = QGroupBox(self.tab_config)
self.groupBox_topology_config.setObjectName(u"groupBox_topology_config")
self.groupBox_topology_config.setFont(font2)
self.groupBox_topology_config.setAlignment(Qt.AlignCenter)
self.gridLayout_4 = QGridLayout(self.groupBox_topology_config)
self.gridLayout_4.setObjectName(u"gridLayout_4")
self.radioButton_backpropagation_cascade = QRadioButton(self.groupBox_topology_config)
self.radioButton_backpropagation_cascade.setObjectName(u"radioButton_backpropagation_cascade")
self.radioButton_backpropagation_cascade.setEnabled(False)
self.radioButton_backpropagation_cascade.setFont(font1)
self.gridLayout_4.addWidget(self.radioButton_backpropagation_cascade, 3, 0, 1, 1)
self.widget_topology_diagram = MplWidget(self.groupBox_topology_config)
self.widget_topology_diagram.setObjectName(u"widget_topology_diagram")
self.gridLayout_4.addWidget(self.widget_topology_diagram, 0, 0, 1, 1)
self.pushButton_open_topology_editor = QPushButton(self.groupBox_topology_config)
self.pushButton_open_topology_editor.setObjectName(u"pushButton_open_topology_editor")
self.pushButton_open_topology_editor.setEnabled(False)
self.gridLayout_4.addWidget(self.pushButton_open_topology_editor, 1, 0, 1, 1)
self.radioButton_backpropagation_without = QRadioButton(self.groupBox_topology_config)
self.radioButton_backpropagation_without.setObjectName(u"radioButton_backpropagation_without")
self.radioButton_backpropagation_without.setEnabled(False)
self.radioButton_backpropagation_without.setFont(font1)
self.gridLayout_4.addWidget(self.radioButton_backpropagation_without, 4, 0, 1, 1)
self.gridLayout_values_config.addWidget(self.groupBox_topology_config, 2, 0, 1, 1)
self.gridLayout_3.addLayout(self.gridLayout_values_config, 0, 1, 1, 2)
self.groupBox_files_select = QGroupBox(self.tab_config)
self.groupBox_files_select.setObjectName(u"groupBox_files_select")
self.groupBox_files_select.setEnabled(True)
sizePolicy = QSizePolicy(QSizePolicy.Minimum, QSizePolicy.Minimum)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.groupBox_files_select.sizePolicy().hasHeightForWidth())
self.groupBox_files_select.setSizePolicy(sizePolicy)
self.groupBox_files_select.setMinimumSize(QSize(380, 520))
self.groupBox_files_select.setMaximumSize(QSize(16777215, 16777215))
self.groupBox_files_select.setFont(font2)
self.groupBox_files_select.setStyleSheet(u"background-color: rgb(245, 245, 245);\n"
"border-color: rgb(224, 224, 224);")
self.groupBox_files_select.setAlignment(Qt.AlignCenter)
self.gridLayout = QGridLayout(self.groupBox_files_select)
self.gridLayout.setObjectName(u"gridLayout")
self.horizontalLayout = QHBoxLayout()
self.horizontalLayout.setObjectName(u"horizontalLayout")
self.pushButton_upload_dataset = QPushButton(self.groupBox_files_select)
self.pushButton_upload_dataset.setObjectName(u"pushButton_upload_dataset")
self.horizontalLayout.addWidget(self.pushButton_upload_dataset)
self.pushButton_delete_dataset = QPushButton(self.groupBox_files_select)
self.pushButton_delete_dataset.setObjectName(u"pushButton_delete_dataset")
self.horizontalLayout.addWidget(self.pushButton_delete_dataset)
self.gridLayout.addLayout(self.horizontalLayout, 2, 1, 1, 1)
self.horizontalLayout_2 = QHBoxLayout()
self.horizontalLayout_2.setObjectName(u"horizontalLayout_2")
self.label_dataset_filename = QLabel(self.groupBox_files_select)
self.label_dataset_filename.setObjectName(u"label_dataset_filename")
self.label_dataset_filename.setFont(font3)
self.horizontalLayout_2.addWidget(self.label_dataset_filename)
self.lineEdit_dataset_filename = QLineEdit(self.groupBox_files_select)
self.lineEdit_dataset_filename.setObjectName(u"lineEdit_dataset_filename")
self.lineEdit_dataset_filename.setEnabled(False)
self.horizontalLayout_2.addWidget(self.lineEdit_dataset_filename)
self.gridLayout.addLayout(self.horizontalLayout_2, 1, 1, 1, 1)
self.tableWidget_dataset = QTableWidget(self.groupBox_files_select)
self.tableWidget_dataset.setObjectName(u"tableWidget_dataset")
self.tableWidget_dataset.setEnabled(False)
self.gridLayout.addWidget(self.tableWidget_dataset, 3, 1, 1, 1)
self.gridLayout_3.addWidget(self.groupBox_files_select, 0, 0, 1, 1)
self.tabWidget.addTab(self.tab_config, "")
self.tab_results = QWidget()
self.tab_results.setObjectName(u"tab_results")
self.gridLayout_11 = QGridLayout(self.tab_results)
self.gridLayout_11.setObjectName(u"gridLayout_11")
self.pushButton_stop_training = QPushButton(self.tab_results)
self.pushButton_stop_training.setObjectName(u"pushButton_stop_training")
self.pushButton_stop_training.setEnabled(False)
self.gridLayout_11.addWidget(self.pushButton_stop_training, 1, 0, 1, 1)
self.pushButton_calculate_unkown_dataset = QPushButton(self.tab_results)
self.pushButton_calculate_unkown_dataset.setObjectName(u"pushButton_calculate_unkown_dataset")
self.pushButton_calculate_unkown_dataset.setEnabled(False)
self.gridLayout_11.addWidget(self.pushButton_calculate_unkown_dataset, 1, 1, 1, 1)
self.groupBox_graphics = QGroupBox(self.tab_results)
self.groupBox_graphics.setObjectName(u"groupBox_graphics")
self.groupBox_graphics.setFont(font2)
self.groupBox_graphics.setAlignment(Qt.AlignCenter)
self.gridLayout_10 = QGridLayout(self.groupBox_graphics)
self.gridLayout_10.setObjectName(u"gridLayout_10")
self.widget_err_iterations = MplWidget(self.groupBox_graphics)
self.widget_err_iterations.setObjectName(u"widget_err_iterations")
self.widget_err_iterations.setMinimumSize(QSize(400, 0))
self.gridLayout_10.addWidget(self.widget_err_iterations, 1, 0, 1, 1)
self.widget_train_simulation = MplWidget(self.groupBox_graphics)
self.widget_train_simulation.setObjectName(u"widget_train_simulation")
self.widget_train_simulation.setMinimumSize(QSize(400, 0))
self.gridLayout_10.addWidget(self.widget_train_simulation, 0, 0, 1, 1)
self.gridLayout_11.addWidget(self.groupBox_graphics, 0, 0, 1, 1)
self.pushButton_retrain_NeuNet = QPushButton(self.tab_results)
self.pushButton_retrain_NeuNet.setObjectName(u"pushButton_retrain_NeuNet")
self.pushButton_retrain_NeuNet.setEnabled(False)
self.gridLayout_11.addWidget(self.pushButton_retrain_NeuNet, 2, 0, 1, 2)
self.groupBox_values = QGroupBox(self.tab_results)
self.groupBox_values.setObjectName(u"groupBox_values")
self.groupBox_values.setFont(font2)
self.groupBox_values.setAlignment(Qt.AlignCenter)
self.gridLayout_12 = QGridLayout(self.groupBox_values)
self.gridLayout_12.setObjectName(u"gridLayout_12")
self.horizontalLayout_5 = QHBoxLayout()
self.horizontalLayout_5.setObjectName(u"horizontalLayout_5")
self.label_training_iteraction = QLabel(self.groupBox_values)
self.label_training_iteraction.setObjectName(u"label_training_iteraction")
font4 = QFont()
font4.setBold(False)
font4.setItalic(True)
font4.setWeight(50)
self.label_training_iteraction.setFont(font4)
self.horizontalLayout_5.addWidget(self.label_training_iteraction)
self.lineEdit_training_iteraction = QLineEdit(self.groupBox_values)
self.lineEdit_training_iteraction.setObjectName(u"lineEdit_training_iteraction")
self.lineEdit_training_iteraction.setEnabled(False)
self.horizontalLayout_5.addWidget(self.lineEdit_training_iteraction)
self.gridLayout_12.addLayout(self.horizontalLayout_5, 3, 0, 1, 1)
self.horizontalLayout_4 = QHBoxLayout()
self.horizontalLayout_4.setObjectName(u"horizontalLayout_4")
self.label_training_time = QLabel(self.groupBox_values)
self.label_training_time.setObjectName(u"label_training_time")
self.label_training_time.setFont(font4)
self.horizontalLayout_4.addWidget(self.label_training_time)
self.lineEdit_training_time = QLineEdit(self.groupBox_values)
self.lineEdit_training_time.setObjectName(u"lineEdit_training_time")
self.lineEdit_training_time.setEnabled(False)
self.horizontalLayout_4.addWidget(self.lineEdit_training_time)
self.gridLayout_12.addLayout(self.horizontalLayout_4, 2, 0, 1, 1)
self.groupBox_weights = QGroupBox(self.groupBox_values)
self.groupBox_weights.setObjectName(u"groupBox_weights")
self.groupBox_weights.setFont(font4)
self.groupBox_weights.setAlignment(Qt.AlignCenter)
self.gridLayout_14 = QGridLayout(self.groupBox_weights)
self.gridLayout_14.setObjectName(u"gridLayout_14")
self.tableWidget_optimum_weights = QTableWidget(self.groupBox_weights)
self.tableWidget_optimum_weights.setObjectName(u"tableWidget_optimum_weights")
self.gridLayout_14.addWidget(self.tableWidget_optimum_weights, 0, 0, 1, 1)
self.gridLayout_12.addWidget(self.groupBox_weights, 0, 0, 1, 1)
self.groupBox_threshold = QGroupBox(self.groupBox_values)
self.groupBox_threshold.setObjectName(u"groupBox_threshold")
self.groupBox_threshold.setFont(font3)
self.groupBox_threshold.setAlignment(Qt.AlignCenter)
self.gridLayout_13 = QGridLayout(self.groupBox_threshold)
self.gridLayout_13.setObjectName(u"gridLayout_13")
self.tableWidget_optimum_threshold = QTableWidget(self.groupBox_threshold)
self.tableWidget_optimum_threshold.setObjectName(u"tableWidget_optimum_threshold")
self.gridLayout_13.addWidget(self.tableWidget_optimum_threshold, 0, 0, 1, 1)
self.gridLayout_12.addWidget(self.groupBox_threshold, 1, 0, 1, 1)
self.gridLayout_11.addWidget(self.groupBox_values, 0, 1, 1, 1)
self.pushButton_save = QPushButton(self.tab_results)
self.pushButton_save.setObjectName(u"pushButton_save")
self.pushButton_save.setEnabled(False)
self.gridLayout_11.addWidget(self.pushButton_save, 3, 0, 1, 2)
self.tabWidget.addTab(self.tab_results, "")
self.gridLayout_2.addWidget(self.tabWidget, 0, 0, 1, 1)
self.tabWidget.setCurrentIndex(0)
MainWindow.setCentralWidget(self.centralwidget)
self.retranslate_final_ui()
self.input_validator()
QMetaObject.connectSlotsByName(MainWindow)
def input_validator(self):
rx = QRegExp(r'[0][.]\d{1,4}$|[1]')  # accepts values like 0.25 (up to four decimals) or 1
validator_0__1 = QRegExpValidator(rx)
validator_only_int = QIntValidator()
validator_only_int.setBottom(0)
self.lineEdit_learning_ratio.setValidator(validator_0__1)
self.lineEdit_iterations.setValidator(validator_only_int)
self.lineEdit_err_max.setValidator(validator_only_int)
self.lineEdit_res.setValidator(validator_only_int)
self.lineEdit_interval_show.setValidator(validator_only_int)
#################################### Translator ####################################
def retranslate_initial_ui(self, MainWindow):
MainWindow.setWindowTitle(QCoreApplication.translate("MainWindow", u"NeuNet", None))
self.action_new.setText(QCoreApplication.translate("MainWindow", u"Nuevo", None))
self.action_open.setText(QCoreApplication.translate("MainWindow", u"Abrir", None))
self.action_save.setText(QCoreApplication.translate("MainWindow", u"Guardar", None))
self.action_save_as.setText(QCoreApplication.translate("MainWindow", u"Guardar Como...", None))
self.action_close.setText(QCoreApplication.translate("MainWindow", u"Cerrar", None))
self.action_quit.setText(QCoreApplication.translate("MainWindow", u"Salir", None))
self.action_set_dataset.setText(QCoreApplication.translate("MainWindow", u"Modificar Datos de Entrada y Salida", None))
self.action_set_weights.setText(QCoreApplication.translate("MainWindow", u"Modificar Pesos Sinapticos", None))
self.action_set_threshold.setText(QCoreApplication.translate("MainWindow", u"Modificar Umbral", None))
self.action_set_learning_ratio.setText(QCoreApplication.translate("MainWindow", u"Modificar Tasa de Aprendizaje", None))
self.action_set_iterations_number.setText(QCoreApplication.translate("MainWindow", u"Modificar Numero de Iteraciones", None))
self.action_set_err_max.setText(QCoreApplication.translate("MainWindow", u"Modificar Error Maximo Permitido", None))
self.action_set_graphic_resolution.setText(QCoreApplication.translate("MainWindow", u"Modificar Resolucion de Graficos", None))
self.action_set_graphic_interval.setText(QCoreApplication.translate("MainWindow", u"Modificar Intervalo de Graficaci\u00f3n", None))
self.action_preview_dataset.setText(QCoreApplication.translate("MainWindow", u"Vista Previa de Datos de Entrada y Salida", None))
self.action_training_abstract.setText(QCoreApplication.translate("MainWindow", u"Resumen de Entrenamiento", None))
self.action_manual.setText(QCoreApplication.translate("MainWindow", u"Manual", None))
self.action_about_NeuNet.setText(QCoreApplication.translate("MainWindow", u"Informaci\u00f3n de NeuNet", None))
self.action_about_developer.setText(QCoreApplication.translate("MainWindow", u"Informaci\u00f3n del Desarrollador", None))
self.menu_file.setTitle(QCoreApplication.translate("MainWindow", u"Archivo", None))
self.menu_set.setTitle(QCoreApplication.translate("MainWindow", u"Editar", None))
self.menu_view.setTitle(QCoreApplication.translate("MainWindow", u"Ver", None))
self.menu_help.setTitle(QCoreApplication.translate("MainWindow", u"Ayuda", None))
def retranslate_final_ui(self):
self.pushButton_execute.setText(QCoreApplication.translate("MainWindow", u"Empezar Entrenamiento", None))
self.groupBox_parameters_config.setTitle(QCoreApplication.translate("MainWindow", u"Configuraci\u00f3n de Parametros", None))
self.label_iterations.setText(QCoreApplication.translate("MainWindow", u"Numero de Iteraciones", None))
self.label_err_max.setText(QCoreApplication.translate("MainWindow", u"Error Maximo Permitido", None))
self.label_learning_ratio.setText(QCoreApplication.translate("MainWindow", u"Tasa de Aprendizaje", None))
self.groupBox_graphics_config.setTitle(QCoreApplication.translate("MainWindow", u"Configuraci\u00f3n de Gr\u00e1ficos", None))
self.label_res.setText(QCoreApplication.translate("MainWindow", u"Resoluci\u00f3n", None))
self.label_interval_show.setText(QCoreApplication.translate("MainWindow", u"Intervalo de Gr\u00e1ficos", None))
self.groupBox_topology_config.setTitle(QCoreApplication.translate("MainWindow", u"Configuraci\u00f3n de Topologia", None))
self.radioButton_backpropagation_cascade.setText(QCoreApplication.translate("MainWindow", u"Backpropagation en Cascada", None))
self.pushButton_open_topology_editor.setText(QCoreApplication.translate("MainWindow", u"Editar Capas Neuronales", None))
self.radioButton_backpropagation_without.setText(QCoreApplication.translate("MainWindow", u"Sin Backpropagation", None))
self.groupBox_files_select.setTitle(QCoreApplication.translate("MainWindow", u"Selecci\u00f3n de Archivos", None))
self.pushButton_upload_dataset.setText(QCoreApplication.translate("MainWindow", u"Subir Datos", None))
self.pushButton_delete_dataset.setText(QCoreApplication.translate("MainWindow", u"Eliminar Selecci\u00f3n", None))
self.label_dataset_filename.setText(QCoreApplication.translate("MainWindow", u"Datos de Entrada y Salida", None))
self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_config), QCoreApplication.translate("MainWindow", u"Configuraci\u00f3n", None))
self.pushButton_stop_training.setText(QCoreApplication.translate("MainWindow", u"Detener Entrenamiento", None))
self.pushButton_calculate_unkown_dataset.setText(QCoreApplication.translate("MainWindow", u"Resolver Patr\u00f3n Desconocido", None))
self.groupBox_graphics.setTitle(QCoreApplication.translate("MainWindow", u"Graficos", None))
self.pushButton_retrain_NeuNet.setText(QCoreApplication.translate("MainWindow", u"Re-entrenar la Red Neuronal", None))
self.groupBox_values.setTitle(QCoreApplication.translate("MainWindow", u"Valores", None))
self.label_training_iteraction.setText(QCoreApplication.translate("MainWindow", u"Numero de iteraciones", None))
self.label_training_time.setText(QCoreApplication.translate("MainWindow", u"Tiempo de entrenamiento", None))
self.groupBox_weights.setTitle(QCoreApplication.translate("MainWindow", u"Pesos Sinapticos Optimos", None))
self.groupBox_threshold.setTitle(QCoreApplication.translate("MainWindow", u"Umbrales Optimos", None))
self.pushButton_save.setText(QCoreApplication.translate("MainWindow", u"Guardar Entrenamiento", None))
self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_results), QCoreApplication.translate("MainWindow", u"Resultados", None))
| UI/main_window.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_pytorch131)
# language: python
# name: conda_pytorch131
# ---
# +
#default_exp resnet_mnist
# -
# # MNIST dataset
# +
## Setup
# -
colab = False
if colab:
from google.colab import drive
drive.mount('/content/drive')
root_dir = '/content/drive/My Drive/'
if colab:
# !curl -s https://course.fast.ai/setup/colab | bash
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
from fastai.vision import *
from fastai.metrics import error_rate
from fastai.callbacks import *
import sys
sys.path.append('..')
seed = 8610
random.seed(seed)
np.random.seed(seed)
# ## DataSet
path_mnist = untar_data(URLs.MNIST)
path_mnist.ls()
path_train = path_mnist / 'training'
path_train.ls()
# ## DataBunch
src = ImageList.from_folder(path_mnist)
print(src)
src[0].show(figsize=(3,3))
src = src.split_by_folder(train='training', valid='testing')
src
src = src.label_from_folder(label_cls=CategoryList)
src
# +
size = 28
bs = 64
data = (src.transform(get_transforms(do_flip=False), size=size)
.databunch(path=Path('.'), bs=bs)
.normalize(imagenet_stats))
# -
data
print(data.classes)
print('data.c', len(data.classes), data.c)
data.show_batch(figsize=(10,10))
# ## Training: resnet34
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.path = Path('.')
lr_find(learn)
learn.recorder.plot(suggestion=True)
lr = 3e-3
lrs = slice(lr)
epoch = 10
pct_start = 0.3
wd = 1e-3
save_fname = 'resnet34_mnist'
callbacks = [ShowGraph(learn), SaveModelCallback(learn, name=save_fname)]
learn.fit_one_cycle(epoch, lrs, pct_start=pct_start, wd=wd, callbacks=callbacks)
# ## Results
learn.show_results()
# +
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
# -
interp.plot_top_losses(9, figsize=(15,11))
interp.plot_confusion_matrix(figsize=(8,8), dpi=60)
interp.most_confused(min_val=2)
# ## Unfreezing, fine-tuning, and learning rates
lr_find(learn)
learn.recorder.plot(suggestion=True)
# +
learn.unfreeze()
save_fname = 'resnet34_mnist_ft'
learn.fit_one_cycle(3, max_lr=slice(1e-6), callbacks=[ShowGraph(learn), SaveModelCallback(learn, name=save_fname)])
# -
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
interp.plot_confusion_matrix(figsize=(8,8), dpi=60)
| nbs/resnet_mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Importing Required Libraries
from scipy.spatial import distance as dist
from imutils import perspective
from imutils import contours
import numpy as np
import argparse
import imutils
import cv2
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# helper function to find midpoint between two points
def midpoint(ptA, ptB):
return ((ptA[0] + ptB[0]) * 0.5, (ptA[1] + ptB[1]) * 0.5)
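# A quick sanity check of this helper (restated here so the snippet is self-contained; the coordinates are made up):

```python
def midpoint(ptA, ptB):
    # average the x and y coordinates of the two points
    return ((ptA[0] + ptB[0]) * 0.5, (ptA[1] + ptB[1]) * 0.5)

print(midpoint((0, 0), (2, 4)))  # (1.0, 2.0)
```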
# function to find dimensions from a 2D image
def process_image(imagepath, width):
#read image using opencv
image = cv2.imread(imagepath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 0)
# Edge Detection using Canny() of opencv
edged = cv2.Canny(gray, 10, 100)
edged = cv2.dilate(edged, None, iterations=3)
edged = cv2.erode(edged, None, iterations=1)
# finding all contours from the image
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)  # normalizes findContours() output across OpenCV 2/3/4
#sorting contours from left to right
(cnts, _) = contours.sort_contours(cnts)
pixelsPerMetric = None # pixels-per-unit calibration, set from the reference object
resA = 0 # area of the largest object found so far
resDim = (0, 0) # dimensions of the largest object (fallback if none is found)
# looping over the contours
for c in cnts:
# ignore small contours, since they are likely noise
if cv2.contourArea(c) < 1000:
continue
# compute the rotated bounding box of the contour
box = cv2.minAreaRect(c)
box = cv2.cv.BoxPoints(box) if imutils.is_cv2() else cv2.boxPoints(box)
box = np.array(box, dtype="int")
# order the points in the contour such that they appear in top-left, top-right, bottom-right, and bottom-left order
box = perspective.order_points(box)
# finding midpoints on all four sides of the rectangle
(tl, tr, br, bl) = box
(tltrX, tltrY) = midpoint(tl, tr)
(blbrX, blbrY) = midpoint(bl, br)
(tlblX, tlblY) = midpoint(tl, bl)
(trbrX, trbrY) = midpoint(tr, br)
# compute the Euclidean distance between the midpoints
dA = dist.euclidean((tltrX, tltrY), (blbrX, blbrY))
dB = dist.euclidean((tlblX, tlblY), (trbrX, trbrY))
# initialising metric with ref object's width
if pixelsPerMetric is None:
pixelsPerMetric = dB / width
# compute the size of the object
dimA = dA / pixelsPerMetric
dimB = dB / pixelsPerMetric
# finding the largest object in the image
# assuming luggage is biggest in the image
if (dimA*dimB > resA):
resA = dimA*dimB
resDim = (dimA,dimB)
return resDim
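# The pixels-per-metric calibration used above boils down to simple arithmetic; a toy illustration with invented numbers (a reference object 100 px across that is known to be 7.2 cm wide):

```python
# calibrate from the reference object: 100 px corresponds to 7.2 cm
pixels_per_metric = 100 / 7.2        # pixels per centimetre

# convert another object's pixel width in the same image to centimetres
object_width_px = 250
object_width_cm = object_width_px / pixels_per_metric
print(round(object_width_cm, 2))     # 18.0
```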
# +
#main function to get all dimensions of any object
def find_dimensions(image1, image2, width1, width2):
# declaring resultant variables
res1, res2, res3 = 0, 0, 0
# getting dimensions from each image
dim1, dim2 = process_image(image1, width1)
dim3, dim4 = process_image(image2, width2)
# rounding dimensions till second decimal place
dim1, dim2, dim3, dim4 = round(dim1,2), round(dim2,2), round(dim3,2), round(dim4,2)
# finding overlapping dimension and eliminating it
# threshold 0.25cm (can be changed)
if abs(dim1 - dim3) > 0.25:
res1, res2, res3 = dim1, dim2, dim3
else:
res1, res2, res3 = dim1, dim2, dim4
return (res1,res2,res3)
# -
find_dimensions('speaker1.jpeg', 'speaker2.jpeg', 7.2, 7.2)
f, axarr = plt.subplots(1,2)
axarr[0].imshow(cv2.imread('speaker1.jpeg'))
axarr[1].imshow(cv2.imread('speaker2.jpeg'))
| Baggage Fitment Index/Baggage Index v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Programming with Python
#
# ## Episode 3 - Storing Multiple Values in Lists
#
# Teaching: 30 min
# Exercises: 30 min
# ## Objectives
# - Explain what a list is.
# - Create and index lists of simple values.
# - Change the values of individual elements
# - Append values to an existing list
# - Reorder and slice list elements
# - Create and manipulate nested lists
# #### How can I store many values together?
# Just as a `for loop` is a way to do operations many times, a list is a way to store many values. Unlike NumPy arrays, lists are built into the language (so we don’t have to load a library to use them). We create a list by putting values inside square brackets and separating the values with commas:
# ```
# odds = [1, 3, 5, 7]
# print('odds are:', odds)
# ```
# We can access elements of a list using indices – numbered positions of elements in the list. These positions are numbered starting at 0, so the first element has an index of 0.
# ```
# print('first element:', odds[0])
# print('last element:', odds[3])
# print('"-1" element:', odds[-1])
# ```
# Yes, we can use negative numbers as indices in Python. When we do so, the index `-1` gives us the last element in the list, `-2` the second to last, and so on.
#
# Because of this, `odds[3]` and `odds[-1]` point to the same element here.
#
# If we loop over a list, the loop variable is assigned the elements one at a time:
# ```
# for number in odds:
#     print(number)
# ```
# There is one important difference between lists and strings: we can change the values in a list, but we cannot change individual characters in a string. For example:
# ```
# names = ['Curie', 'Darwing', 'Turing'] # typo in Darwin's name
# print('names is originally:', names)
# names[1] = 'Darwin' # correct the name
# print('final value of names:', names)
# ```
# works, but:
# ```
# name = 'Darwin'
# name[0] = 'd'
# ```
#
# doesn't.
# ### Ch-Ch-Ch-Ch-Changes
# Data which can be modified in place is called *mutable*, while data which cannot be modified is called *immutable*. Strings and numbers are immutable. This does not mean that variables with string or number values are constants, but when we want to change the value of a string or number variable, we can only replace the old value with a completely new value.
#
# Lists and arrays, on the other hand, are mutable: we can modify them after they have been created. We can change individual elements, append new elements, or reorder the whole list. For some operations, like sorting, we can choose whether to use a function that modifies the data in-place or a function that returns a modified copy and leaves the original unchanged.
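# The sorting case mentioned above shows both options side by side: `list.sort` modifies the list in place (and returns `None`), while the built-in `sorted` returns a new sorted list and leaves the original untouched.

```python
numbers = [3, 1, 2]

fresh = sorted(numbers)   # returns a new, sorted list
print(fresh)              # [1, 2, 3]
print(numbers)            # [3, 1, 2] -- original unchanged

numbers.sort()            # sorts in place and returns None
print(numbers)            # [1, 2, 3]
```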
#
# Be careful when modifying data in-place. If two variables refer to the same list, and you modify the list value, it will change for both variables!
# ```
# salsa = ['peppers', 'onions', 'cilantro', 'tomatoes']
# my_salsa = salsa # <-- my_salsa and salsa point to the *same* list data in memory
# salsa[0] = 'hot peppers'
# print('Ingredients in salsa:', salsa)
# print('Ingredients in my salsa:', my_salsa)
# ```
# If you want variables with mutable values to be independent, you must make a copy of the value when you assign it.
# ```
# salsa = ['peppers', 'onions', 'cilantro', 'tomatoes']
# my_salsa = list(salsa) # <-- makes a *copy* of the list
# salsa[0] = 'hot peppers'
# print('Ingredients in salsa:', salsa)
# print('Ingredients in my salsa:', my_salsa)
# ```
# Because of pitfalls like this, code which modifies data in place can be more difficult to understand. However, it is often far more efficient to modify a large data structure in place than to create a modified copy for every small change. You should consider both of these aspects when writing your code.
# ### Nested Lists
# Since lists can contain any Python variable type, it can even contain other lists.
#
# For example, we could represent the products in the shelves of a small grocery shop:
# ```
# shop = [['pepper', 'zucchini', 'onion'],
# ['cabbage', 'lettuce', 'garlic'],
# ['apple', 'pear', 'banana']]
# ```
# Here is an example of how indexing a list of lists works:
#
# The first element of our list is another list representing the first shelf:
# ```
# print(shop[0])
# ```
# to reference a particular item on a particular shelf (e.g. the third item on the second shelf, i.e. the `garlic`) we'd use an extra pair of `[` `]`'s
#
# ```
# print(shop[1][2])
# ```
#
# don't forget that indexing starts at zero ...
# ### Heterogeneous Lists
# Lists in Python can contain elements of different types. Example:
# ```
# sample_ages = [10, 12.5, 'Unknown']
# ```
# There are many ways to change the contents of lists besides assigning new values to individual elements:
# ```
# odds.append(11)
# print('odds after adding a value:', odds)
#
# del odds[0]
# print('odds after removing the first element:', odds)
#
# odds.reverse()
# print('odds after reversing:', odds)
# ```
# When modifying lists in place, it is useful to remember that Python treats them in a slightly counter-intuitive way.
#
# If we make a list and (attempt to) copy it then modify in place, we can cause all sorts of trouble:
# ```
# odds = [1, 3, 5, 7]
# primes = odds
# primes.append(2)
# print('primes:', primes)
# print('odds:', odds)
# primes: [1, 3, 5, 7, 2]
# odds: [1, 3, 5, 7, 2]
# ```
# This is because Python stores a list in memory, and then can use multiple names to refer to the same list. If all we want to do is copy a (simple) list, we can use the list function, so we do not modify a list we did not mean to:
# ```
# odds = [1, 3, 5, 7]
# primes = list(odds)
# primes.append(2)
# print('primes:', primes)
# print('odds:', odds)
# primes: [1, 3, 5, 7, 2]
# odds: [1, 3, 5, 7]
# ```
# ### Turn a String Into a List
# Use a `for loop` to convert the string “hello” into a list of letters: `["h", "e", "l", "l", "o"]`
#
# Hint: You can create an empty list like this:
# ```
# my_list = []
# ```
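# One possible solution sketch:

```python
my_list = []
for char in "hello":      # strings are iterable, one character at a time
    my_list.append(char)
print(my_list)            # ['h', 'e', 'l', 'l', 'o']
```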
# Subsets of lists and strings can be accessed by specifying ranges of values in brackets, similar to how we accessed ranges of positions in a NumPy array. This is commonly referred to as *slicing* the list/string.
# ```
# binomial_name = "Drosophila melanogaster"
# group = binomial_name[0:10]
# print("group:", group)
#
# species = binomial_name[11:24]
# print("species:", species)
#
# chromosomes = ["X", "Y", "2", "3", "4"]
# autosomes = chromosomes[2:5]
# print("autosomes:", autosomes)
#
# last = chromosomes[-1]
# print("last:", last)
# ```
# ### Slicing From the End
# Use slicing to access only the last four characters of a string or entries of a list.
# ```
# string_for_slicing = "Observation date: 02-Feb-2013"
# list_for_slicing = [["fluorine", "F"],
# ["chlorine", "Cl"],
# ["bromine", "Br"],
# ["iodine", "I"],
# ["astatine", "At"]]
# ```
# Would your solution work regardless of whether you knew beforehand the length of the string or list (e.g. if you wanted to apply the solution to a set of lists of different lengths)? If not, try to change your approach to make it more robust.
#
# Hint: Remember that indices can be negative as well as positive
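# One robust solution uses negative indices, which count from the end of a sequence and therefore work regardless of its length:

```python
string_for_slicing = "Observation date: 02-Feb-2013"
list_for_slicing = [["fluorine", "F"],
                    ["chlorine", "Cl"],
                    ["bromine", "Br"],
                    ["iodine", "I"],
                    ["astatine", "At"]]

print(string_for_slicing[-4:])  # → 2013
print(list_for_slicing[-4:])    # the last four sublists, whatever the list's length
```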
#
# ### Non-Continuous Slices
# So far we’ve seen how to use slicing to take single blocks of successive entries from a sequence. But what if we want to take a subset of entries that aren’t next to each other in the sequence?
#
# You can achieve this by providing a third argument to the range within the brackets, called the step size. The example below shows how you can take every third entry in a list:
# ```
# primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
# subset = primes[0:12:3]
# print("subset", subset)
# ```
# Notice that the slice taken begins with the first entry in the range, followed by entries taken at equally-spaced intervals (the steps) thereafter. If you wanted to begin the subset with the third entry, you would need to specify that as the starting point of the sliced range:
# ```
# primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
# subset = primes[2:12:3]
# print("subset", subset)
# ```
# Use the step size argument to create a new string that contains only every second character in the string “In an octopus’s garden in the shade”
#
# Start with:
# ```
# beatles = "In an octopus's garden in the shade"
# ```
# and print:
# ```
# I notpssgre ntesae
# ```
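# One possible solution is a step size of 2 with the start and end of the range omitted:

```python
beatles = "In an octopus's garden in the shade"
print(beatles[::2])  # → I notpssgre ntesae
```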
# If you want to take a slice from the beginning of a sequence, you can omit the first index in the range:
# ```
# date = "Monday 4 January 2016"
# day = date[0:6]
# print("Using 0 to begin range:", day)
# day = date[:6]
# print("Omitting beginning index:", day)
# ```
# And similarly, you can omit the ending index in the range to take a slice to the end of the sequence:
# ```
# months = ["jan", "feb", "mar", "apr", "may", "jun", "jul", "aug", "sep", "oct", "nov", "dec"]
# q4 = months[8:12]
# print("With specified start and end position:", q4)
# q4 = months[8:len(months)]
# print("Using len() to get last entry:", q4)
# q4 = months[8:]
# print("Omitting ending index:", q4)
# ```
#
# ### Overloading
# `+` usually means addition, but when used on strings or lists, it means “concatenate”. Given that, what do you think the multiplication operator * does on lists? In particular, what will be the output of the following code?
# ```
# counts = [2, 4, 6, 8, 10]
# repeats = counts * 2
# print(repeats)
# ```
#
# The technical term for this is operator overloading. A single operator, like `+` or `*`, can do different things depending on what it’s applied to.
# Is this the same as:
# ```
# counts + counts
# ```
# and what might:
# ```
# counts / 2
# ```
# mean?
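# You can check the answers empirically — `*` with an integer repeats a list (repeated concatenation), while `/` is simply not defined for lists:

```python
counts = [2, 4, 6, 8, 10]
print(counts * 2)       # → [2, 4, 6, 8, 10, 2, 4, 6, 8, 10]
print(counts + counts)  # the same result: repetition is repeated concatenation
try:
    counts / 2
except TypeError as err:
    print("counts / 2 raises:", type(err).__name__)
```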
# ## Key Points
# - [value1, value2, value3, ...] creates a list.
# - Lists can contain any Python object, including lists (i.e., list of lists).
# - Lists are indexed and sliced with square brackets (e.g., list[0] and list[2:9]), in the same way as strings and arrays.
# - Lists are mutable (i.e., their values can be changed in place).
# - Strings are immutable (i.e., the characters in them cannot be changed).
# ### Save, and version control your changes
#
# - save your work: `File -> Save`
# - add all your changes to your local repository: `Terminal -> git add .`
# - commit your updates as a new Git version: `Terminal -> git commit -m "End of Episode 2"`
# - push your latest commits to GitHub: `Terminal -> git push`
| lessons/python/ep3-lists.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 0.031414, "end_time": "2022-05-01T09:36:19.465868", "exception": false, "start_time": "2022-05-01T09:36:19.434454", "status": "completed"} tags=[]
import os
import numpy as np
import pandas as pd
import gc
# + [markdown] papermill={"duration": 0.012947, "end_time": "2022-05-01T09:36:19.492044", "exception": false, "start_time": "2022-05-01T09:36:19.479097", "status": "completed"} tags=[]
# # To build the ensemble, I used submissions from 8 public notebooks:
# * LB: 0.0225 - https://www.kaggle.com/lunapandachan/h-m-trending-products-weekly-add-test/notebook
# * LB: 0.0217 - https://www.kaggle.com/tarique7/hnm-exponential-decay-with-alternate-items/notebook
# * LB: 0.0221 - https://www.kaggle.com/astrung/lstm-sequential-modelwith-item-features-tutorial
# * LB: 0.0224 - https://www.kaggle.com/code/hirotakanogami/h-m-eda-customer-clustering-by-kmeans
# * LB: 0.0220 - https://www.kaggle.com/code/hengzheng/time-is-our-best-friend-v2/notebook
# * LB: 0.0227 - https://www.kaggle.com/code/hechtjp/h-m-eda-rule-base-by-customer-age
# * LB: 0.0231 - https://www.kaggle.com/code/ebn7amdi/trending/notebook?scriptVersionId=90980162
# * LB: 0.0225 - https://www.kaggle.com/code/mayukh18/svd-model-reranking-implicit-to-explicit-feedback
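# The weighted rank blending used below can be sketched in isolation: each model's k-th ranked item receives weight `W[model]/(k+1)`, scores are summed across models, and items are re-ranked by total score. This is a standalone sketch of the idea, not the exact notebook function:

```python
def blend(preds, weights):
    # preds: one space-separated ranked-item string per model
    scores = {}
    for pred, w in zip(preds, weights):
        for rank, item in enumerate(pred.split()):
            scores[item] = scores.get(item, 0.0) + w / (rank + 1)
    # re-rank items by their accumulated score
    return ' '.join(sorted(scores, key=scores.get, reverse=True))

print(blend(['a b c', 'b a d'], [1.0, 0.8]))  # → a b c d
```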
# + papermill={"duration": 79.665064, "end_time": "2022-05-01T09:37:39.169907", "exception": false, "start_time": "2022-05-01T09:36:19.504843", "status": "completed"} tags=[]
sub0 = pd.read_csv('../input/hm-00231-solution/submission.csv').sort_values('customer_id').reset_index(drop=True) # 0.0231
sub1 = pd.read_csv('../input/handmbestperforming/h-m-trending-products-weekly-add-test.csv').sort_values('customer_id').reset_index(drop=True) # 0.0225
sub2 = pd.read_csv('../input/handmbestperforming/hnm-exponential-decay-with-alternate-items.csv').sort_values('customer_id').reset_index(drop=True) # 0.0217
sub3 = pd.read_csv('../input/handmbestperforming/lstm-sequential-modelwith-item-features-tutorial.csv').sort_values('customer_id').reset_index(drop=True) # 0.0221
sub4 = pd.read_csv('../input/hm-00224-solution/submission.csv').sort_values('customer_id').reset_index(drop=True) # 0.0224
sub5 = pd.read_csv('../input/handmbestperforming/time-is-our-best-friend-v2.csv').sort_values('customer_id').reset_index(drop=True) # 0.0220
sub6 = pd.read_csv('../input/handmbestperforming/rule-based-by-customer-age.csv').sort_values('customer_id').reset_index(drop=True) # 0.0227
sub7 = pd.read_csv('../input/h-m-faster-trending-products-weekly/submission.csv').sort_values('customer_id').reset_index(drop=True) # 0.0231
# + papermill={"duration": 0.93352, "end_time": "2022-05-01T09:37:40.116775", "exception": false, "start_time": "2022-05-01T09:37:39.183255", "status": "completed"} tags=[]
sub0.columns = ['customer_id', 'prediction0']
sub0['prediction1'] = sub1['prediction']
sub0['prediction2'] = sub2['prediction']
sub0['prediction3'] = sub3['prediction']
sub0['prediction4'] = sub4['prediction']
sub0['prediction5'] = sub5['prediction']
sub0['prediction6'] = sub6['prediction']
sub0['prediction7'] = sub7['prediction'].astype(str)
del sub1, sub2, sub3, sub4, sub5, sub6, sub7
gc.collect()
sub0.head()
# + papermill={"duration": 146.854601, "end_time": "2022-05-01T09:40:06.985564", "exception": false, "start_time": "2022-05-01T09:37:40.130963", "status": "completed"} tags=[]
def cust_blend(dt, W = [1,1,1,1,1,1,1,1]):
#Create a list of all model predictions
REC = []
# Second Try
REC.append(dt['prediction0'].split())
REC.append(dt['prediction1'].split())
REC.append(dt['prediction2'].split())
REC.append(dt['prediction3'].split())
REC.append(dt['prediction4'].split())
REC.append(dt['prediction5'].split())
REC.append(dt['prediction6'].split())
REC.append(dt['prediction7'].split())
#Create a dictionary of items recommended.
#Assign a weight according to the order of appearance and multiply by global weights
res = {}
for M in range(len(REC)):
for n, v in enumerate(REC[M]):
if v in res:
res[v] += (W[M]/(n+1))
else:
res[v] = (W[M]/(n+1))
# Sort dictionary by item weights
res = list(dict(sorted(res.items(), key=lambda item: -item[1])).keys())
# Return the top 12 items only
return ' '.join(res[:12])
sub0['prediction'] = sub0.apply(cust_blend, W = [1.05, 0.78, 0.86, 0.85, 0.68, 0.64, 0.70, 0.24], axis=1)
sub0.head()
# + [markdown] papermill={"duration": 0.014583, "end_time": "2022-05-01T09:40:07.014534", "exception": false, "start_time": "2022-05-01T09:40:06.999951", "status": "completed"} tags=[]
# # Make a submission
# + papermill={"duration": 0.157376, "end_time": "2022-05-01T09:40:07.187104", "exception": false, "start_time": "2022-05-01T09:40:07.029728", "status": "completed"} tags=[]
del sub0['prediction0']
del sub0['prediction1']
del sub0['prediction2']
del sub0['prediction3']
del sub0['prediction4']
del sub0['prediction5']
del sub0['prediction6']
del sub0['prediction7']
gc.collect()
# + papermill={"duration": 6.027348, "end_time": "2022-05-01T09:40:13.229945", "exception": false, "start_time": "2022-05-01T09:40:07.202597", "status": "completed"} tags=[]
sub1 = pd.read_csv('../input/h-m-framework-for-partitioned-validation/submission.csv').sort_values('customer_id').reset_index(drop=True)
sub1['prediction'] = sub1['prediction'].astype(str)
sub0.columns = ['customer_id', 'prediction0']
sub0['prediction1'] = sub1['prediction']
del sub1
gc.collect()
# + papermill={"duration": 0.027164, "end_time": "2022-05-01T09:40:13.272753", "exception": false, "start_time": "2022-05-01T09:40:13.245589", "status": "completed"} tags=[]
def cust_blend(dt, W = [1,1,1,1,1]):
#Global ensemble weights
#W = [1.15,0.95,0.85]
#Create a list of all model predictions
REC = []
# Second Try
REC.append(dt['prediction0'].split())
REC.append(dt['prediction1'].split())
#Create a dictionary of items recommended.
#Assign a weight according to the order of appearance and multiply by global weights
res = {}
for M in range(len(REC)):
for n, v in enumerate(REC[M]):
if v in res:
res[v] += (W[M]/(n+1))
else:
res[v] = (W[M]/(n+1))
# Sort dictionary by item weights
res = list(dict(sorted(res.items(), key=lambda item: -item[1])).keys())
# Return the top 12 items only
return ' '.join(res[:12])
# + papermill={"duration": 52.519613, "end_time": "2022-05-01T09:41:05.807749", "exception": false, "start_time": "2022-05-01T09:40:13.288136", "status": "completed"} tags=[]
sub0['prediction'] = sub0.apply(cust_blend, W = [1.20, 0.85], axis=1)
# + papermill={"duration": 0.032268, "end_time": "2022-05-01T09:41:05.857795", "exception": false, "start_time": "2022-05-01T09:41:05.825527", "status": "completed"} tags=[]
del sub0['prediction0']
del sub0['prediction1']
sub0.head()
# + papermill={"duration": 12.823175, "end_time": "2022-05-01T09:41:18.697074", "exception": false, "start_time": "2022-05-01T09:41:05.873899", "status": "completed"} tags=[]
sub0.to_csv('submission.csv', index=False)
| hm-ensembling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import crypten
import torch
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
crypten.init()
# +
# create dummy data for training
x_values = [i for i in range(20)]
x_train = np.array(x_values, dtype=np.float32)
x_train = torch.tensor(x_train.reshape(-1, 1))
y_values = [3*i + 10 for i in x_values]
y_train = np.array(y_values, dtype=np.float32)
y_train = torch.tensor(y_train.reshape(-1, 1))
# -
x_enc = crypten.cryptensor(x_train)
y_enc = crypten.cryptensor(y_train)
x_enc/5
def encrypted_linear_regression(x_enc, y_enc):
x_mean = crypten.cryptensor(0)
y_mean = crypten.cryptensor(0)
for i in range(len(x_enc)):
x_mean = x_enc[i] + x_mean
x_mean = x_mean / len(x_enc)
for i in range(len(y_enc)):
y_mean = y_enc[i] + y_mean
y_mean = y_mean / len(y_enc)
# using the formula to calculate the b1 and b0
numerator = crypten.cryptensor(0)
denominator = crypten.cryptensor(0)
for i in range(len(x_enc)):
numerator = (x_enc[i] - x_mean) * (y_enc[i] - y_mean) + numerator
denominator = (x_enc[i] - x_mean) * (x_enc[i] - x_mean) + denominator
b1 = numerator / denominator
b0 = y_mean - (b1 * x_mean)
# getting the predicted_values
y_fit_enc = crypten.cryptensor(np.zeros(len(x_enc)))
for i in range(len(x_enc)):
y_fit_enc[i] = b0 + b1 * x_enc[i]
return b1, b0, y_fit_enc
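# As a plain-NumPy cross-check (no encryption), the same closed-form OLS formulas on the dummy data should recover the true slope 3 and intercept 10:

```python
import numpy as np

x = np.arange(20, dtype=np.float64)
y = 3 * x + 10
# b1 = sum((x - x̄)(y - ȳ)) / sum((x - x̄)²), b0 = ȳ - b1 * x̄
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
print(b1, b0)  # → 3.0 10.0
```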
b1, b0, y_fit_enc = encrypted_linear_regression(x_enc, y_enc)
y_fit_dec = y_fit_enc.get_plain_text()
# plot the fitted line
plt.plot(x_enc.get_plain_text(), y_fit_dec, color='#00ff00', label='Encrypted Linear Regression')
# plot the data points
plt.scatter(x_train, y_train, color='#ff0000', label='Data Point')
# x-axis label
plt.xlabel('X')
# y-axis label
plt.ylabel('y = %.2f*x + %.2f' % (b1.get_plain_text().item(), b0.get_plain_text().item()))
plt.legend()
plt.show()
| notebooks/linear_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Module 2 Assessment
# Welcome to the assessment for Module 2: Mapping for Planning. In this assessment, you will be generating an occupancy grid using lidar scanner measurements from a moving vehicle in an unknown environment. You will use the inverse scanner measurement model developed in the lessons to map these measurements into occupancy probabilities, and then perform iterative logodds updates to an occupancy grid belief map. After the car has gathered enough data, your occupancy grid should converge to the true map.
#
# In this assessment, you will:
# * Gather range measurements of a moving car's surroundings using a lidar scanning function.
# * Extract occupancy information from the range measurements using an inverse scanner model.
# * Perform logodds updates on an occupancy grid based on incoming measurements.
# * Iteratively construct a probabilistic occupancy grid from those log odds updates.
#
# For most exercises, you are provided with a suggested outline. You are encouraged to diverge from the outline if you think there is a better, more efficient way to solve a problem.
# Launch the Jupyter Notebook to begin!
import numpy as np
import math
import matplotlib.pyplot as plt
import matplotlib.animation as anim
from IPython.display import HTML
# In this notebook, you will generate an occupancy grid based on multiple simulated lidar scans. The inverse scanner model will be given to you, in the `inverse_scanner()` function. It returns a matrix of measured occupancy probability values based on the lidar scan model discussed in the video lectures. The `get_ranges()` function returns the scanned range values for a given vehicle position and scanner bearing. These two functions are given below. Make sure you understand what they are doing, as you will need to use them later in the notebook.
# Calculates the inverse measurement model for a laser scanner.
# It identifies three regions. The first where no information is available occurs
# outside of the scanning arc. The second where objects are likely to exist, at the
# end of the range measurement within the arc. The third are where objects are unlikely
# to exist, within the arc but with less distance than the range measurement.
def inverse_scanner(num_rows, num_cols, x, y, theta, meas_phi, meas_r, rmax, alpha, beta):
m = np.zeros((num_rows, num_cols))
for i in range(num_rows):
for j in range(num_cols):
# Find range and bearing relative to the input state (x, y, theta).
r = math.sqrt((i - x)**2 + (j - y)**2)
phi = (math.atan2(j - y, i - x) - theta + math.pi) % (2 * math.pi) - math.pi
# Find the range measurement associated with the relative bearing.
k = np.argmin(np.abs(np.subtract(phi, meas_phi)))
# If the range is greater than the maximum sensor range, or behind our range
# measurement, or is outside of the field of view of the sensor, then no
# new information is available.
if (r > min(rmax, meas_r[k] + alpha / 2.0)) or (abs(phi - meas_phi[k]) > beta / 2.0):
m[i, j] = 0.5
# If the range measurement lies within this cell, it is likely to be an object.
elif (meas_r[k] < rmax) and (abs(r - meas_r[k]) < alpha / 2.0):
m[i, j] = 0.7
# If the cell is in front of the range measurement, it is likely to be empty.
elif r < meas_r[k]:
m[i, j] = 0.3
return m
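# As a sanity check, the three-region rule above can be exercised on single cells in isolation. This is a standalone sketch with assumed sensor values (one beam straight ahead, measured range 10):

```python
rmax, alpha, beta = 30, 1.0, 0.05
meas_r, meas_phi = 10.0, 0.0  # a single beam straight ahead (assumed values)

def cell_prob(r, phi):
    # no information: outside the arc or beyond the measurement
    if (r > min(rmax, meas_r + alpha / 2.0)) or (abs(phi - meas_phi) > beta / 2.0):
        return 0.5
    # likely occupied: at the measured range
    elif (meas_r < rmax) and (abs(r - meas_r) < alpha / 2.0):
        return 0.7
    # likely free: in front of the measurement
    elif r < meas_r:
        return 0.3

print(cell_prob(10.0, 0.0), cell_prob(5.0, 0.0), cell_prob(20.0, 0.0))  # → 0.7 0.3 0.5
```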
# Generates range measurements for a laser scanner based on a map, vehicle position,
# and sensor parameters.
# Uses the ray tracing algorithm.
def get_ranges(true_map, X, meas_phi, rmax):
(M, N) = np.shape(true_map)
x = X[0]
y = X[1]
theta = X[2]
meas_r = rmax * np.ones(meas_phi.shape)
# Iterate for each measurement bearing.
for i in range(len(meas_phi)):
# Iterate over each unit step up to and including rmax.
for r in range(1, rmax+1):
# Determine the coordinates of the cell.
xi = int(round(x + r * math.cos(theta + meas_phi[i])))
yi = int(round(y + r * math.sin(theta + meas_phi[i])))
# If not in the map, set measurement there and stop going further.
if (xi <= 0 or xi >= M-1 or yi <= 0 or yi >= N-1):
meas_r[i] = r
break
# If in the map, but hitting an obstacle, set the measurement range
# and stop ray tracing.
elif true_map[int(round(xi)), int(round(yi))] == 1:
meas_r[i] = r
break
return meas_r
# In the following code block, we initialize the required variables for our simulation. This includes the initial state as well as the set of control actions for the car. We also set the rate of rotation of our lidar scan. The obstacles of the true map are represented by 1's in the true map, 0's represent free space. Each cell in the belief map `m` is initialized to 0.5 as our prior probability of occupancy, and from that belief map we compute our logodds occupancy grid `L`.
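# The logodds bookkeeping for a single cell works like this (a standalone sketch): the uniform prior p = 0.5 maps to logodds L = 0, and each measurement adds evidence in log space before converting back to a probability:

```python
import numpy as np

p = 0.5                          # uniform prior for one cell
L = np.log(p / (1 - p))          # prior log odds: 0.0
L_new = L + np.log(0.7 / 0.3)    # one "likely occupied" (0.7) observation
p_new = np.exp(L_new) / (1 + np.exp(L_new))
print(round(p_new, 2))  # → 0.7
```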
# +
# Simulation time initialization.
T_MAX = 150
time_steps = np.arange(T_MAX)
# Initializing the robot's location.
x_0 = [30, 30, 0]
# The sequence of robot motions.
u = np.array([[3, 0, -3, 0], [0, 3, 0, -3]])
u_i = 1
# Robot sensor rotation command
w = np.multiply(0.3, np.ones(len(time_steps)))
# True map (note: columns of the map correspond to the y axis and rows to the x axis, so
# the robot position x = x(1) and y = x(2) are reversed when plotted to match).
M = 50
N = 60
true_map = np.zeros((M, N))
true_map[0:10, 0:10] = 1
true_map[30:35, 40:45] = 1
true_map[3:6, 40:60] = 1
true_map[20:30, 25:29] = 1
true_map[40:50, 5:25] = 1
# Initialize the belief map.
# We are assuming a uniform prior.
m = np.multiply(0.5, np.ones((M, N)))
# Initialize the log odds ratio.
L0 = np.log(np.divide(m, np.subtract(1, m)))
L = L0
# Parameters for the sensor model.
meas_phi = np.arange(-0.4, 0.4, 0.05)
rmax = 30 # Max beam range.
alpha = 1 # Width of an obstacle (distance about measurement to fill in).
beta = 0.05 # Angular width of a beam.
# Initialize the vector of states for our simulation.
x = np.zeros((3, len(time_steps)))
x[:, 0] = x_0
# -
# Here is where you will enter your code. Your task is to complete the main simulation loop. After each step of robot motion, you are required to gather range data from your lidar scan, and then apply the inverse scanner model to map these to a measured occupancy belief map. From this, you will then perform a logodds update on your logodds occupancy grid, and update our belief map accordingly. As the car traverses through the environment, the occupancy grid belief map should move closer and closer to the true map. At the code block after the end of the loop, the code will output some values which will be used for grading your assignment. Make sure to copy down these values and save them in a .txt file for when your visualization looks correct. Good luck!
# +
# %%capture
# Initialize figures.
map_fig = plt.figure()
map_ax = map_fig.add_subplot(111)
map_ax.set_xlim(0, N)
map_ax.set_ylim(0, M)
invmod_fig = plt.figure()
invmod_ax = invmod_fig.add_subplot(111)
invmod_ax.set_xlim(0, N)
invmod_ax.set_ylim(0, M)
belief_fig = plt.figure()
belief_ax = belief_fig.add_subplot(111)
belief_ax.set_xlim(0, N)
belief_ax.set_ylim(0, M)
meas_rs = []
meas_r = get_ranges(true_map, x[:, 0], meas_phi, rmax)
meas_rs.append(meas_r)
invmods = []
invmod = inverse_scanner(M, N, x[0, 0], x[1, 0], x[2, 0], meas_phi, meas_r, \
rmax, alpha, beta)
invmods.append(invmod)
ms = []
ms.append(m)
# Main simulation loop.
for t in range(1, len(time_steps)):
# Perform robot motion.
move = np.add(x[0:2, t-1], u[:, u_i])
# If we hit the map boundaries, or a collision would occur, remain still.
if (move[0] >= M - 1) or (move[1] >= N - 1) or (move[0] <= 0) or (move[1] <= 0) \
or true_map[int(round(move[0])), int(round(move[1]))] == 1:
x[:, t] = x[:, t-1]
u_i = (u_i + 1) % 4
else:
x[0:2, t] = move
x[2, t] = (x[2, t-1] + w[t]) % (2 * math.pi)
# TODO Gather the measurement range data, which we will convert to occupancy probabilities
# using our inverse measurement model.
# meas_r = ...
meas_r = get_ranges(true_map, x[:, t], meas_phi, rmax)
meas_rs.append(meas_r)
# TODO Given our range measurements and our robot location, apply our inverse scanner model
# to get our measure probabilities of occupancy.
# invmod = ...
invmod = inverse_scanner(
M, N, *x[:, t], meas_phi, meas_r, rmax, alpha, beta
)
invmods.append(invmod)
# TODO Calculate and update the log odds of our occupancy grid, given our measured
# occupancy probabilities from the inverse model.
# L = ...
L = np.log((invmod / (1 - invmod))) + L - L0
# TODO Calculate a grid of probabilities from the log odds.
# m = ...
p = np.exp(L)
m = p / (1 + p)
ms.append(m)
# +
# Output for grading. Do not modify this code!
m_f = ms[-1]
print("{}".format(m_f[40, 10]))
print("{}".format(m_f[30, 40]))
print("{}".format(m_f[35, 40]))
print("{}".format(m_f[0, 50]))
print("{}".format(m_f[10, 5]))
print("{}".format(m_f[20, 15]))
print("{}".format(m_f[25, 50]))
# -
# Now that you have written your main simulation loop, you can visualize your robot motion in the true map, your measured belief map, and your occupancy grid belief map below. These are shown in the 1st, 2nd, and 3rd videos, respectively. If your 3rd video converges towards the true map shown in the 1st video, congratulations! You have completed the assignment. Please submit the output of the box above as a .txt file to the grader for this assignment.
# +
def map_update(i):
map_ax.clear()
map_ax.set_xlim(0, N)
map_ax.set_ylim(0, M)
map_ax.imshow(np.subtract(1, true_map), cmap='gray', origin='lower', vmin=0.0, vmax=1.0)
x_plot = x[1, :i+1]
y_plot = x[0, :i+1]
map_ax.plot(x_plot, y_plot, "bx-")
def invmod_update(i):
invmod_ax.clear()
invmod_ax.set_xlim(0, N)
invmod_ax.set_ylim(0, M)
invmod_ax.imshow(invmods[i], cmap='gray', origin='lower', vmin=0.0, vmax=1.0)
for j in range(len(meas_rs[i])):
invmod_ax.plot(x[1, i] + meas_rs[i][j] * math.sin(meas_phi[j] + x[2, i]), \
x[0, i] + meas_rs[i][j] * math.cos(meas_phi[j] + x[2, i]), "ko")
invmod_ax.plot(x[1, i], x[0, i], 'bx')
def belief_update(i):
belief_ax.clear()
belief_ax.set_xlim(0, N)
belief_ax.set_ylim(0, M)
belief_ax.imshow(ms[i], cmap='gray', origin='lower', vmin=0.0, vmax=1.0)
belief_ax.plot(x[1, max(0, i-10):i], x[0, max(0, i-10):i], 'bx-')
map_anim = anim.FuncAnimation(map_fig, map_update, frames=len(x[0, :]), repeat=False)
invmod_anim = anim.FuncAnimation(invmod_fig, invmod_update, frames=len(x[0, :]), repeat=False)
belief_anim = anim.FuncAnimation(belief_fig, belief_update, frames=len(x[0, :]), repeat=False)
# -
HTML(map_anim.to_html5_video())
HTML(invmod_anim.to_html5_video())
HTML(belief_anim.to_html5_video())
| occupancy grid.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
import numpy as np
model = cv2.dnn.readNetFromCaffe('../data/face_detector/deploy.prototxt',
'../data/face_detector/res10_300x300_ssd_iter_140000.caffemodel')
CONF_THR = 0.5
# +
video = cv2.VideoCapture('../data/faces.mp4')
c = 0
while True:
ret, frame = video.read()
if not ret: break
h, w = frame.shape[0:2]
blob = cv2.dnn.blobFromImage(frame, 1, (300*w//h,300), (104,177,123), False)
model.setInput(blob)
output = model.forward()
for i in range(output.shape[2]):
conf = output[0,0,i,2]
if conf > CONF_THR:
label = output[0,0,i,1]  # class id (unused here)
x0,y0,x1,y1 = (output[0,0,i,3:7] * [w,h,w,h]).astype(int)
cv2.rectangle(frame, (x0,y0), (x1,y1), (0,255,0), 2)
cv2.putText(frame, 'conf: {:.2f}'.format(conf), (x0,y0),
cv2.FONT_HERSHEY_SIMPLEX, 1, (0,255,0), 2)
c += 1
if c == 60:
cv2.imwrite('/home/alexeysp/projects/recipes/figures/ch5_face_detections.png', frame)
cv2.imshow('frame', frame)
key = cv2.waitKey(3)
if key == 27: break
cv2.destroyAllWindows()
# -
| Chapter05/11 Face recognition using OpenFace model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: sc
# language: python
# name: sc
# ---
# # DoRothEA: resource of Transcription Factor regulons
#
# ## Introduction
#
# DoRothEA is a comprehensive resource containing a curated collection of transcription factors (TFs) and their transcriptional targets. The set of genes (targets) regulated by a specific TF is known as its regulon. DoRothEA can be understood as a gene regulatory network (GRN) containing many regulons.
#
# Here is how to get access to DoRothEA regulons:
# Load all required packages for this notebook
import pandas as pd
import numpy as np
import dorothea
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import igraph as ig
# +
dorothea_hs = dorothea.load_regulons(
organism='Human', # If working with mouse, set to Mouse
commercial=False # If non-academia, set to True
)
dorothea_hs
# -
# Each TF (columns), targets a set of genes (rows). Non zero values represent an interaction between a target gene and a TF. Depending on the type of interaction, there will be positive (+1) and negative (-1) interactions. A positive edge means that if a given TF is active, its target will have high levels of gene expression. On the contrary, a negative edge means that if a given TF is active, its target will not show gene expression.
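# Concretely, a made-up 3-gene × 2-TF regulon matrix of the same form (hypothetical gene and TF names, not real DoRothEA entries) would look like this:

```python
import pandas as pd

toy = pd.DataFrame({'TF1': [1, -1, 0], 'TF2': [0, 1, 1]},
                   index=['GENE_A', 'GENE_B', 'GENE_C'])
print(toy)
# TF1 activates GENE_A and represses GENE_B; TF2 activates GENE_B and GENE_C.
```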
#
# Here is a visual representation of a subset of TFs (red nodes) and target genes (green nodes). Green edges are positive interactions and red edges are negative:
# +
# To explore more visualizations change these parameters
num_tfs = 10
num_genes = 1000
#
sub_doro = dorothea_hs.values[:num_genes,:num_tfs]
sub_doro = sub_doro[np.any(sub_doro != 0, axis=1)]
sub_doro = sub_doro[:, np.any(sub_doro != 0, axis=0)]
num_genes, num_tfs = sub_doro.shape
adj_m = np.vstack([np.zeros((sub_doro.shape[1], sub_doro.shape[1])), sub_doro])
adj_m = np.hstack([adj_m, np.zeros((adj_m.shape[0],adj_m.shape[0]-adj_m.shape[1]))])
g = ig.Graph.Adjacency((np.abs(adj_m) > 0).tolist())
g.vs['color'] = (['red'] * num_tfs) + (['forestgreen'] * num_genes)
weights = adj_m[np.where(adj_m)]
g.es['color'] = ['limegreen' if w > 0 else 'red' for w in weights]
ig.plot(g, layout="fr", vertex_size=5, edge_arrow_size=0, edge_width=0.5, bbox=(300,300))
# -
# Looking at the network, it is clear that most of DoRothEA's interactions are positive:
# +
num_pos = np.sum(dorothea_hs.values == 1)
num_neg = np.sum(dorothea_hs.values == -1)
fig, axes = plt.subplots(facecolor='white')
axes.set_title('Number of interactions by sign')
axes.bar('Positive', num_pos)
axes.bar('Negative', num_neg)
plt.show()
# -
# ## Confidence levels
#
# DoRothEA’s regulons were gathered from different types of evidence. Each TF-target interaction is assigned a confidence level based on the amount of supporting evidence. The confidence levels range from A (highest confidence) to E (lowest confidence) (Garcia-Alonso et al. 2019).
#
# Let's compare the different confidence interactions:
# +
levels = [['A'], ['A', 'B'], ['A', 'B', 'C'], ['A', 'B', 'C', 'D'], ['A', 'B', 'C', 'D', 'E']]
fig, axes = plt.subplots(1,3, figsize=(9,3), tight_layout=True, facecolor='white')
axes = axes.flatten()
for level in levels:
doro = dorothea.load_regulons(
levels=level # To specify a confidence level pass a list of levels. E.g. ['A']
)
num_edges = np.sum(np.abs(doro.values))
num_genes, num_tfs = doro.shape
label = ''.join(level)
axes[0].set_title('Number of edges (log10)')
axes[0].bar(label, np.log10(num_edges))
axes[0].tick_params(axis='x', rotation=90)
axes[1].set_title('Number of unique genes')
axes[1].bar(label, num_genes)
axes[1].tick_params(axis='x', rotation=90)
axes[2].set_title('Number of TFs')
axes[2].bar(label, num_tfs)
axes[2].tick_params(axis='x', rotation=90)
# -
# High confidence interactions are very reliable but have low coverage of genes and TFs. On the other hand, low confidence interactions are not as reliable but have a high coverage of genes and TFs.
# It will always depend on the application, but most of the time a good compromise is to select medium confidence interactions (ABC) for the best trade-off between coverage and reliability.
# ## Footprint-based enrichment analysis
#
# Classic enrichment analysis focuses on the levels of expression of the elements of a given biological process (for instance TF expression) to estimate its activity. In contrast, footprint-based enrichment analysis instead estimates activities using molecular readouts considered to be downstream. Thus, expression of targets can be inferred as a more robust proxy of the TF's activity (Dugourd and Saez-Rodriguez 2019).
#
#
# Following this idea, we use DoRothEA regulons to estimate TF activities from gene expression profiles using footprint-based enrichment analysis. DoRothEA regulons can be coupled with any footprint-based statistic to compute TF activities. In this implementation we use the mean expression of the target genes as the statistic, but we could have used any other. If you are interested in knowing more footprint-based algorithms you can check [DecoupleR](https://github.com/saezlab/decoupleR), an R package where we systematically evaluated the performance of different GRNs with different statistics in bulk RNA-seq.
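# In miniature, with made-up numbers, that statistic looks like this: a TF's activity is the regulon-weighted mean of its targets' (centered) expression, normalized by the number of targets.

```python
import numpy as np

expr = np.array([2.0, -1.0, 3.0])   # centered expression of three target genes
regulon = np.array([1, -1, 1])      # signed TF-target interactions
activity = expr.dot(regulon) / np.sum(np.abs(regulon))
print(activity)  # → 2.0
```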
# ## Toy example
#
# Here is a toy example just for visualization purposes; for proper usage of the package see the other notebooks.
#
#
# We will generate expression values for 10 genes in one sample and compute the activities of 3 TFs.
genes_lst = ['STMN1', 'CTNNB1', 'SPRY1', 'IL6', 'MMP9', 'SMARCA4', 'CISH', 'IL1B', 'CCL3', 'OASL']
expr = np.array([50, 30, 10, 500, 400, 60, 50, 270, 260, 100])
tfs_lst = ['EGR1', 'NFKB1', 'STAT1']
# Let's load DoRothEA regulons and trim them to fit our expression matrix:
regulons = dorothea.load_regulons(['A']).loc[genes_lst, tfs_lst]
regulons
# Then we compute the activities for the TF and plot them in a bipartite graph.
# +
# Create coords for plots
g_coords = np.array([np.repeat(0, len(genes_lst)), np.arange(0, -len(genes_lst), -1)]).T
t_coords = np.array([np.repeat(1, len(tfs_lst)), np.arange(-1, -len(genes_lst), -len(genes_lst)/len(tfs_lst))]).T
# Center expression
expr = expr - np.mean(expr)
# Calculate TF activities
tf_act = expr.T.dot(regulons.values) / np.sum(np.abs(regulons.values), axis=0)
# Order edges
pos_edg = []
neg_edg = []
for i,gene in enumerate(regulons.values):
for j,tf in enumerate(gene):
edg = [g_coords[i][1], t_coords[j][1]]
if tf > 0:
pos_edg.append(edg)
elif tf < 0:
neg_edg.append(edg)
# Plot graph
fig, ax = plt.subplots(1,1, figsize=(7,7), facecolor='white', tight_layout=True)
ax.set_title('Footprint-based enrichment analysis', fontsize=20, y=1.05)
ax.plot(np.array(neg_edg).T, color='red', zorder=1)
ax.plot(np.array(pos_edg).T, color='forestgreen', zorder=1)
gene_plot = ax.scatter(g_coords[:,0], g_coords[:,1], marker='s', s=1000, c=expr, cmap='Greens', zorder=2, edgecolor='gray')
tf_plot = ax.scatter(t_coords[:,0], t_coords[:,1], marker='s', s=1000, c=tf_act, cmap='coolwarm', zorder=2, edgecolor='gray')
ax.set_xlim(-0.5, 1.5)
# Add gene/tf labels
for i,g in enumerate(genes_lst):
ax.text(g_coords[i][0]-0.4, g_coords[i][1], g)
for i,t in enumerate(tfs_lst):
ax.text(t_coords[i][0]+0.15, t_coords[i][1], t)
# Format color bars
divider = make_axes_locatable(ax)
cax = divider.append_axes('left', size='5%', pad=0)
fig.colorbar(gene_plot,cax,orientation='vertical', label='Gene expr')
cax.yaxis.set_label_position('left')
cax.yaxis.set_ticks_position('left')
cax = divider.append_axes('right', size='5%', pad=0)
fig.colorbar(tf_plot,cax,orientation='vertical', label='TF activity')
ax.axis('off')
plt.show()
# -
# On the left, in green, are the expression levels of our 10 genes. On the right, in blue/red, are the TF activities. Each TF is connected by edges to its set of target genes according to DoRothEA's annotated regulons. Green edges are positive interactions and red edges are negative ones.
#
# Just by looking at the gene expression we can see that some genes are coordinated (they share high or low expression profiles). With more than a handful of genes, however, it becomes hard to interpret what is going on from expression alone. By using footprint-based enrichment analysis with DoRothEA's regulons, we can summarize these regulation events into TF activities, which can be understood as a prior-knowledge-based dimensionality reduction.
#
# In this specific toy example, NFKB1 appears to be activated since all of its target genes have high gene expression. On the other hand, STAT1 and EGR1 seem to be less active since their negative targets have high expression while their positive targets are lowly expressed.
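# The activity computation above reduces to a weighted mean of centered expression over each TF's annotated targets. A minimal standalone sketch of the same arithmetic with toy numbers (the regulon weights below are made up for illustration, not taken from DoRothEA):

```python
import numpy as np

# Toy regulon matrix: rows = genes, columns = TFs.
# +1 = positive target, -1 = negative target, 0 = not a target.
regulon = np.array([
    [ 1,  0],   # gene A: positive target of TF1
    [ 1,  0],   # gene B: positive target of TF1
    [ 0, -1],   # gene C: negative target of TF2
    [ 0,  1],   # gene D: positive target of TF2
])
expr = np.array([10.0, 8.0, 2.0, 4.0])

# Center the expression, then take the weighted mean over each TF's targets.
centered = expr - expr.mean()
tf_act = centered.dot(regulon) / np.abs(regulon).sum(axis=0)
```

# Here TF1's targets are highly expressed, so its activity comes out positive and larger than TF2's.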
| example/dorothea_introduction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="kS4LWOtvO-mK" colab_type="text"
# # Introduction #
# + [markdown] id="vyMG82wmPHKB" colab_type="text"
# This notebook shows how to produce predictions from Sentinel-2 L1C imagery using any of a number of models found in the [geotrellis/deeplab-nlcd](https://github.com/geotrellis/deeplab-nlcd) repository.
#
# Any of the "binary" (one class, yes/no) networks in the [architectures](https://github.com/geotrellis/deeplab-nlcd/tree/master/python/architectures) directory should work. Those include architectures whose names contain the substring "-binary", as well as networks whose names contain the substring "sentinel2". The former group of architectures also requires a "weights.pth" file (trained using the [training script](https://github.com/geotrellis/deeplab-nlcd/blob/master/python/train.py), either by you or by someone else); the latter group requires no weights file because those architectures are just spectral indices with no parameters.
#
# The "Manual Preparation" section shows you how to produce predictions one at a time; the "Batch Preparation" section shows you how to produce them in bulk.
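# As an illustration of why the spectral-index architectures need no weights file: an index such as NDVI is a fixed formula over bands, with nothing learned. A sketch with made-up reflectance values (real Sentinel-2 NDVI uses the NIR and red bands):

```python
import numpy as np

# Hypothetical reflectance values for two pixels: NIR and red bands.
nir = np.array([0.6, 0.3])
red = np.array([0.1, 0.25])

# NDVI is a closed-form expression -- there are no parameters to load.
ndvi = (nir - red) / (nir + red)
```

# By construction NDVI always falls in [-1, 1], with higher values indicating denser vegetation.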
# + [markdown] id="IWNVeQAMxntl" colab_type="text"
# # Manual Preparation #
# + [markdown] id="kjLRNcukss3w" colab_type="text"
# ## Prerequisites ##
# + id="sWib-XGOnfBp" colab_type="code" colab={}
# !pip install rasterio
# + id="CmJYIUCDnFBX" colab_type="code" colab={}
import codecs
import copy
import numpy as np
import requests
import rasterio as rio
import torch
import torchvision
import PIL.Image
from urllib.parse import urlparse
# + [markdown] id="Z7_CzAsZuN9w" colab_type="text"
# ## Mount (colab only) ##
# + colab_type="code" id="DopMlpjPpm9p" colab={}
from google.colab import drive
drive.mount('/content/gdrive')
# + [markdown] id="0y9o0TFItNZA" colab_type="text"
# ## Load Sentinel-2 L1C Imagery ##
# + id="7nmcwKnoppOi" colab_type="code" colab={}
imagery_path = '/content/gdrive/My Drive/data/imagery.tif'
with rio.open(imagery_path) as ds:
profile = copy.deepcopy(ds.profile)
imagery_data = ds.read()
# + id="Z1JV-lhqqLFZ" colab_type="code" colab={}
red = imagery_data[3,:,:] / 4500.0
green = imagery_data[2,:,:] / 4500.0
blue = imagery_data[1,:,:] / 4500.0
# + id="n0lP6926r5M2" colab_type="code" colab={}
red = np.clip(red, 0.0, 1.0)
green = np.clip(green, 0.0, 1.0)
blue = np.clip(blue, 0.0, 1.0)
# + id="SwNHvMQNscLp" colab_type="code" colab={}
red = (red * 255).astype(np.uint8)
green = (green * 255).astype(np.uint8)
blue = (blue * 255).astype(np.uint8)
# + id="NkOw2PqFrPTD" colab_type="code" colab={}
PIL.Image.fromarray(np.stack([red, green, blue], axis=2))
# + [markdown] id="cOWfOykYun8T" colab_type="text"
# ## Load Architecture ##
# + id="c2KihZFIurPl" colab_type="code" colab={}
def read_text(uri: str):
parsed = urlparse(uri)
if parsed.scheme.startswith('http'):
return requests.get(uri).text
else:
with codecs.open(uri, encoding='utf-8', mode='r') as f:
return f.read()
def load_architecture(uri: str):
arch_str = read_text(uri)
arch_code = compile(arch_str, uri, 'exec')
exec(arch_code, globals())
# + id="YlhIg1c3u_vp" colab_type="code" colab={}
# This can be replaced with other architectures
architecture = 'https://raw.githubusercontent.com/geotrellis/deeplab-nlcd/055b6f42f9042d5d443a0f93fbb7f7ae952b3706/python/architectures/cheaplab-regression-binary.py'
# + id="P3z10gLUvbaY" colab_type="code" colab={}
load_architecture(architecture)
# + [markdown] id="bOCVtk0PwCmG" colab_type="text"
# ## Make Model, Load Weights ##
# + id="MUvOXkehwGaj" colab_type="code" colab={}
backend = 'cpu' # 'cuda' can also be used
device = torch.device(backend)
band_count = 13 # XXX
input_stride = 1
class_count = 1
divisor = 1
model = make_model(
band_count,
input_stride=input_stride,
class_count=class_count,
divisor=divisor,
pretrained=False,
).to(device)
# + id="6KzQ0T5qw9l4" colab_type="code" colab={}
weights = '/content/gdrive/My Drive/data/weights.pth'
if not hasattr(model, 'no_weights'):
model.load_state_dict(torch.load(
weights, map_location=device))
# + [markdown] id="xOezR36zx3kE" colab_type="text"
# ## Inference ##
# + id="q9Q9dg9rxUK0" colab_type="code" colab={}
window_size = 64
width = profile.get('width')
height = profile.get('height')
predictions = np.zeros((1, height, width), dtype=np.float32)
model.eval()
with torch.no_grad():
for x_offset in range(0, width, window_size):
if x_offset + window_size > width:
x_offset = width - window_size  # clamp the final window to the raster edge
for y_offset in range(0, height, window_size):
if y_offset + window_size > height:
y_offset = height - window_size
window = imagery_data[0:band_count, y_offset:y_offset+window_size, x_offset:x_offset+window_size].astype(np.float32)
tensor = torch.from_numpy(np.stack([window], axis=0)).to(device)
out = model(tensor)
if isinstance(out, dict):
out = out['2seg']
out = out.cpu().numpy()
predictions[:, y_offset:y_offset+window_size, x_offset:x_offset+window_size] = out[0]
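# The edge clamping in the loop above can be factored into a small helper; this sketch (not part of the repository) computes window offsets that tile an extent, clamping the last window so it ends exactly at the edge (the final two windows may overlap):

```python
def window_offsets(extent, window_size):
    """Return offsets of windows of `window_size` covering [0, extent).

    The last offset is clamped to `extent - window_size`, so the final
    two windows may overlap instead of running out of bounds.
    """
    offsets = []
    for off in range(0, extent, window_size):
        if off + window_size > extent:
            off = extent - window_size
        offsets.append(off)
    return offsets
```

# With `extent=100` and `window_size=64` this yields `[0, 36]`, so every column is covered once the windows are stitched together.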
# + id="D6JmUF1KHEmH" colab_type="code" colab={}
pred_min = np.min(predictions)
pred_max = np.max(predictions)
pred_uint8 = (np.clip((predictions - pred_min) / (pred_max - pred_min), 0.0, 1.0) * 255).astype(np.uint8)[0]
# + id="2TMWNGCAM8qW" colab_type="code" colab={}
PIL.Image.fromarray(pred_uint8)
# + id="8LHLbYMcNXXJ" colab_type="code" colab={}
# + [markdown] id="rpTdwAczNZwl" colab_type="text"
# ## Save ##
# + id="XAR9EtcuNa2O" colab_type="code" colab={}
profile.update(dtype = np.float32, count=1, compress='lzw', predictor=2)
predictions_path = '/content/gdrive/My Drive/data/prediction.tif'
with rio.open(predictions_path, 'w', **profile) as ds:
ds.write(predictions)
# + [markdown] id="R7PmYMNHQxgr" colab_type="text"
# # Batch Preparation #
# + [markdown] id="an3kqJRrRHoF" colab_type="text"
# ### Step 1 ###
#
# Clone the [`geotrellis/deeplab-nlcd`](https://github.com/geotrellis/deeplab-nlcd) repository to a local directory.
#
# Type
# ```
# git clone git@github.com:geotrellis/deeplab-nlcd.git
# ```
# or similar.
# + [markdown] id="5jU1JL0rRdkl" colab_type="text"
# ### Step 2 ###
#
# Enter the root of the repository directory.
#
# Type
# ```
# cd deeplab-nlcd
# ```
# or similar.
# + [markdown] id="g_Xl4F6yRwoN" colab_type="text"
# ### Step 3 ###
#
# Start a docker container with the needed dependencies.
#
# Type
# ```
# docker run -it --rm -w /workdir -v $HOME/.aws:/root/.aws:ro -v $(pwd):/workdir -v $HOME/Desktop:/desktop --runtime=nvidia jamesmcclain/aws-batch-ml:9 bash
# ```
# or similar. This sample command line will mount the local directory `~/Desktop/` which is assumed to contain the imagery on which we wish to work. We will see later that it is also possible to use imagery on S3.
# + [markdown] id="jwEBtI00TC39" colab_type="text"
# You are now within the docker container.
# + [markdown] id="RtFHeGsJTMes" colab_type="text"
# ### Step 4 ###
#
# Build the native library needed by the Python code, if that library does not already exist:
#
# Type
# ```
# make -C /workdir/src/libchips
# ```
# or similar.
# + [markdown] id="u0zS3rwLTYDT" colab_type="text"
# ### Step 5 ###
#
# Now perform inference on imagery.
#
# Type
# ```
# python3 /workdir/python/inference.py --architecture https://raw.githubusercontent.com/geotrellis/deeplab-nlcd/055b6f42f9042d5d443a0f93fbb7f7ae952b3706/python/architectures/cheaplab-regression-binary.py --libchips /workdir/src/libchips/libchips.so --bands 1 2 3 4 5 6 7 8 9 10 11 12 13 --inference-img /desktop/imagery/image*.tif --weights /desktop/weights.pth --raw-prediction-img '/desktop/predictions/cheaplab/*' --classes 1 --window-size 64
# ```
# or similar.
#
# Note that `~/Desktop/imagery/` is assumed to contain the imagery (files with names matching the pattern `image*.tif`) and the directory `~/Desktop/predictions/cheaplab/` is assumed to exist.
#
# Note that the single quotes around the argument to `--raw-prediction-img` are required to prevent the shell from trying to interpret the `*`.
#
# + [markdown] id="haKpotU8Uhat" colab_type="text"
# You are done. Your predictions can now be found in `~/Desktop/predictions/cheaplab/`. You can also do predictions on remote assets on S3 by typing
# ```
# python3 /workdir/python/inference.py --architecture https://raw.githubusercontent.com/geotrellis/deeplab-nlcd/055b6f42f9042d5d443a0f93fbb7f7ae952b3706/python/architectures/cheaplab-regression-binary.py --libchips /workdir/src/libchips/libchips.so --bands 1 2 3 4 5 6 7 8 9 10 11 12 13 --inference-img s3://my-bucket/imagery/image*.tif --weights /desktop/weights.pth --raw-prediction-img 's3://my-bucket/predictions/cheaplab/*' --classes 1 --window-size 64
# ```
# or similar.
#
# You can also mix and match locations: you can have remote imagery and save the predictions locally, or have local imagery and save the predictions to S3.
#
# + id="OR70KvNnVB0W" colab_type="code" colab={}
| notebooks/prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from math import pi
import math
from shablona.config import AMP_heading
from shablona.config import M3_swath
from shablona.config import M3_averaging_time
from shablona.targets import TargetSpace
import datetime
# +
def target_velocity(target):
"""
Calculates "delta_v" for classification given a target.
Delta_v is the norm of the difference between the current velocity
and the target velocity. This function returns the maximum delta_v
calculated for the target. Target velocity is calculated at intervals
of M3_averaging_time
i.e. if M3_averaging_time is 1 second, and a target was detected every
ping at 10 Hz for 5 seconds, the velocity would be calculated 5 times.
The most recent ADCP data at the time of target detection is used to
calculate delta_v.
"""
# extract target data
nims_indices = target.get_entry('nims')['aggregate_indices']
adcp = target.get_entry('adcp')
# sort nims data by timestamp
nims_indices.sort(
key=lambda x: target.target_space.get_entry_by_index('nims', x)['timestamp'])
# extract all targets from nims data
targets = []
for index in nims_indices:
target_instance = target.target_space.get_entry_by_index('nims', index)
targets.append(target_instance)
points_to_avg = []
index = 0
# determine points between which to calculate velocity (must have at least
# M3_averaging_time between the timestamps)
while index < len(targets) - 1:
start_time = targets[index]['timestamp']
advanced = False
for i in range(index + 1, len(targets)):
# add to points_to_avg if the difference in time between
# target sightings is greater than the M3_averaging_time
diff = delta_t_in_seconds(targets[i]['timestamp'], start_time)
if diff >= M3_averaging_time:
points_to_avg.append(targets[i])
index = i
advanced = True
break
if not advanced:
# no remaining sighting is far enough apart in time
break
if not points_to_avg:
# if the target was visible for less than M3_averaging_time, use the
# first and last detected points
points_to_avg.append(targets[0])
points_to_avg.append(targets[-1])
# convert ADCP to cartesian coordinates
velocity_adcp = [adcp['speed'] * math.cos(adcp['heading']),
adcp['speed'] * math.sin(adcp['heading'])]
delta_v = []
for i, point in enumerate(points_to_avg):
# calculate delta_v between two consecutive points
point1 = point
try:
point2 = points_to_avg[i+1]
except IndexError:
break
# velocity of target between point 1 and point 2
velocity_target = velocity_between_two_points(point1, point2)
# difference between target and adcp velocity
velocity_diff = [velocity_target[0] - velocity_adcp[0],
velocity_target[1] - velocity_adcp[1]]
# "delta_v" is magnitude of velocity difference
delta_v.append((velocity_diff[0]**2 + velocity_diff[1]**2)**0.5)
return(max(delta_v))
def velocity_between_two_points(point1, point2):
"""
Determines magnitude and direction of target trajectory (from point 1 to point 2)
Inputs:
point1, point2 = target detections (dicts with 'last_pos_range' and 'last_pos_angle' in degrees)
AMP_heading = heading of AMP, in radians from due north
M3_swath = list containing minimum and maximum angle ( in degrees) for M3 target
detection. Default is 0 --> 120 degrees
Outputs:
vel = [east velocity, north velocity] (Cartesian components)
"""
point1_cartesian = transform_NIMS_to_vector(point1)
point2_cartesian = transform_NIMS_to_vector(point2)
dt = delta_t_in_seconds(point1['timestamp'], point2['timestamp'])
# subtract 2-1 to get velocity
vel = [(point2_cartesian[0] - point1_cartesian[0])/dt,
(point2_cartesian[1] - point1_cartesian[1])/dt]
return(vel)
def transform_NIMS_to_vector(point):
"""
Transform NIMS detection (in format [range, bearing in degrees]) to earth coordinates (East-North)
Returns X-Y coordinates of point after transformation.
"""
# convert target heading to radians, and shift such that zero degrees is center of swath
point_heading = (point['last_pos_angle'] - (M3_swath[1] - M3_swath[0])/2) * pi/180
# convert bearing to angle from due N by subtracting AMP angle
point_heading = point_heading - AMP_heading
# get vector components for point 1 and 2
point_cartesian = [point['last_pos_range'] * math.cos(point_heading),
point['last_pos_range'] * math.sin(point_heading)]
return(point_cartesian)
def delta_t_in_seconds(datetime1, datetime2):
"""
Calculate delta t in seconds between two datetime objects
(returns absolute value, so the order of the dates is insignificant)
"""
delta_t = datetime1 - datetime2
days_s = delta_t.days*(86400)
microseconds_s = delta_t.microseconds/1000000
delta_t_s = days_s + delta_t.seconds + microseconds_s
return abs(delta_t_s)
# -
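# A self-contained check of the polar-to-Cartesian geometry used above, with made-up values standing in for the `AMP_heading` and `M3_swath` config constants:

```python
import math

AMP_HEADING = 0.0      # assumed: platform pointing due north, in radians
SWATH = (0.0, 120.0)   # assumed M3 swath limits, in degrees

def polar_to_xy(rng, bearing_deg):
    """Convert a (range, bearing-in-degrees) detection to east-north coordinates."""
    # shift so zero degrees is the center of the swath, then convert to radians
    heading = (bearing_deg - (SWATH[1] - SWATH[0]) / 2) * math.pi / 180
    heading -= AMP_HEADING
    return (rng * math.cos(heading), rng * math.sin(heading))

def velocity(p1, p2, dt):
    """Velocity vector of a target moving from detection p1 to p2 over dt seconds."""
    x1, y1 = polar_to_xy(*p1)
    x2, y2 = polar_to_xy(*p2)
    return ((x2 - x1) / dt, (y2 - y1) / dt)
```

# A target moving radially outward along the swath center (bearing 60 degrees here) should show a purely east-pointing velocity.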
| .ipynb_checkpoints/calc-target-direction-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import tensorflow as tf
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
config = tf.ConfigProto()
config.gpu_options.allow_growth = True # allocate GPU memory on demand rather than all at once
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("./dataset/", one_hot=True, reshape=False)
# +
# Parameters
learning_rate = 0.001
training_epochs = 20
batch_size = 128 # Decrease batch size if you don't have enough memory
display_step = 1
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
n_hidden_layer = 256 # layer number of features
# +
# Store layers weight & bias
weights = {
'hidden_layer': tf.Variable(tf.random_normal([n_input, n_hidden_layer])),
'out': tf.Variable(tf.random_normal([n_hidden_layer, n_classes]))
}
biases = {
'hidden_layer': tf.Variable(tf.random_normal([n_hidden_layer])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
# tf Graph input
x = tf.placeholder("float", [None, 28, 28, 1])
y = tf.placeholder("float", [None, n_classes])
x_flat = tf.reshape(x, [-1, n_input])
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x_flat, weights['hidden_layer']), biases['hidden_layer'])
layer_1 = tf.nn.relu(layer_1)
# Output layer with linear activation
logits = tf.matmul(layer_1, weights['out']) + biases['out']
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
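# Numerically, the graph above is just one ReLU hidden layer followed by a softmax cross-entropy loss. A NumPy sketch of the same forward pass (random weights and a tiny batch; only the shapes match the graph):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 784))            # batch of 4 flattened 28x28 images
w1 = rng.normal(size=(784, 256)) * 0.05  # hidden layer weights
b1 = np.zeros(256)
w2 = rng.normal(size=(256, 10)) * 0.05   # output layer weights
b2 = np.zeros(10)
labels = np.eye(10)[[3, 1, 4, 1]]        # one-hot targets

hidden = np.maximum(x @ w1 + b1, 0.0)    # hidden layer with ReLU activation
logits = hidden @ w2 + b2                # output layer with linear activation

# Numerically stable softmax cross-entropy, averaged over the batch.
shifted = logits - logits.max(axis=1, keepdims=True)
log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
loss = -(labels * log_probs).sum(axis=1).mean()
```

# The softmax rows sum to one and the loss is strictly positive for any finite logits, which is what the TensorFlow op computes internally.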
# +
# Initializing the variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session(config=config) as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value)
sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
# Display logs per epoch step
if epoch % display_step == 0:
c = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
print("Epoch:", '%04d' % (epoch+1), "cost=", \
"{:.9f}".format(c))
print("Optimization Finished!")
# Test model
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
# Decrease test_size if you don't have enough memory
test_size = 256
print("Accuracy:", accuracy.eval({x: mnist.test.images[:test_size], y: mnist.test.labels[:test_size]}))
| Code/Multilayer-Perceptron/Multilayer_Perceptron.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Learning scikit-learn
# ## An Introduction to Machine Learning in Python
# ### at PyData Chicago 2016
# %load_ext watermark
# %watermark -a "<NAME>" -u -d -p numpy,scipy,matplotlib,sklearn,pandas,mlxtend
# # Table of Contents
#
# * [1 Introduction to Machine Learning](#1-Introduction-to-Machine-Learning)
# * [2 Linear Regression](#2-Linear-Regression)
# * [Loading the dataset](#Loading-the-dataset)
# * [Preparing the dataset](#Preparing-the-dataset)
# * [Fitting the model](#Fitting-the-model)
# * [Evaluating the model](#Evaluating-the-model)
# * [3 Introduction to Classification](#3-Introduction-to-Classification)
# * [The Iris dataset](#The-Iris-dataset)
# * [Class label encoding](#Class-label-encoding)
# * [Scikit-learn's built-in datasets](#Scikit-learn's-built-in-datasets)
# * [Test/train splits](#Test/train-splits)
# * [Logistic Regression](#Logistic-Regression)
# * [K-Nearest Neighbors](#K-Nearest-Neighbors)
# * [3 - Exercises](#3---Exercises)
# * [4 - Feature Preprocessing & scikit-learn Pipelines](#4---Feature-Preprocessing-&-scikit-learn-Pipelines)
# * [Categorical features: nominal vs ordinal](#Categorical-features:-nominal-vs-ordinal)
# * [Normalization](#Normalization)
# * [Pipelines](#Pipelines)
# * [4 - Exercises](#4---Exercises)
# * [5 - Dimensionality Reduction: Feature Selection & Extraction](#5---Dimensionality-Reduction:-Feature-Selection-&-Extraction)
# * [Recursive Feature Elimination](#Recursive-Feature-Elimination)
# * [Sequential Feature Selection](#Sequential-Feature-Selection)
# * [Principal Component Analysis](#Principal-Component-Analysis)
# * [6 - Model Evaluation & Hyperparameter Tuning](#6---Model-Evaluation-&-Hyperparameter-Tuning)
# * [Wine Dataset](#Wine-Dataset)
# * [Stratified K-Fold](#Stratified-K-Fold)
# * [Grid Search](#Grid-Search)
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# <div style='height:100px;'></div>
# <div style='height:100px;'></div>
# # 1 Introduction to Machine Learning
# <div style='height:100px;'></div>
# # 2 Linear Regression
# ### Loading the dataset
# Source: <NAME> (1905). "A Study of the Relations of the Brain
# to the Size of the Head", Biometrika, Vol. 4, pp. 105-123.
#
#
# Description: Brain weight (grams) and head size (cubic cm) for 237
# adults classified by gender and age group.
#
#
# Variables/Columns
# - Gender (1=Male, 2=Female)
# - Age Range (1=20-46, 2=46+)
# - Head size (cm^3)
# - Brain weight (grams)
#
df = pd.read_csv('dataset_brain.txt',
encoding='utf-8',
comment='#',
sep='\s+')
df.tail()
plt.scatter(df['head-size'], df['brain-weight'])
plt.xlabel('Head size (cm^3)')
plt.ylabel('Brain weight (grams)');
# ### Preparing the dataset
y = df['brain-weight'].values
y.shape
X = df['head-size'].values
X = X[:, np.newaxis]
X.shape
# +
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=123)
# -
plt.scatter(X_train, y_train, c='blue', marker='o')
plt.scatter(X_test, y_test, c='red', marker='s')
plt.xlabel('Head size (cm^3)')
plt.ylabel('Brain weight (grams)');
# ### Fitting the model
# +
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X_train, y_train)
y_pred = lr.predict(X_test)
# -
# ### Evaluating the model
ss_res = ((y_test - y_pred) ** 2).sum()
ss_tot = ((y_test - y_test.mean()) ** 2).sum()
r2_score = 1 - (ss_res / ss_tot)
print('R2 score: %.3f' % r2_score)
print('R2 score: %.3f' % lr.score(X_test, y_test))
lr.coef_
lr.intercept_
# +
min_pred = X_train.min() * lr.coef_ + lr.intercept_
max_pred = X_train.max() * lr.coef_ + lr.intercept_
plt.scatter(X_train, y_train, c='blue', marker='o')
plt.plot([X_train.min(), X_train.max()],
[min_pred, max_pred],
color='red',
linewidth=4)
plt.xlabel('Head size (cm^3)')
plt.ylabel('Brain weight (grams)');
# -
# <div style='height:100px;'></div>
# # 3 Introduction to Classification
# ### The Iris dataset
df = pd.read_csv('dataset_iris.txt',
encoding='utf-8',
comment='#',
sep=',')
df.tail()
X = df.iloc[:, :4].values
y = df['class'].values
np.unique(y)
# ### Class label encoding
# +
from sklearn.preprocessing import LabelEncoder
l_encoder = LabelEncoder()
l_encoder.fit(y)
l_encoder.classes_
# -
y_enc = l_encoder.transform(y)
np.unique(y_enc)
np.unique(l_encoder.inverse_transform(y_enc))
# ### Scikit-learn's built-in datasets
# +
from sklearn.datasets import load_iris
iris = load_iris()
print(iris['DESCR'])
# -
# ### Test/train splits
# +
X, y = iris.data[:, :2], iris.target
# # ! We only use 2 features for visual purposes
print('Class labels:', np.unique(y))
print('Class proportions:', np.bincount(y))
# +
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=123)
print('Class labels:', np.unique(y_train))
print('Class proportions:', np.bincount(y_train))
# +
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=123,
stratify=y)
print('Class labels:', np.unique(y_train))
print('Class proportions:', np.bincount(y_train))
# -
# ### Logistic Regression
# +
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(solver='newton-cg',
multi_class='multinomial',
random_state=1)
lr.fit(X_train, y_train)
print('Test accuracy %.2f' % lr.score(X_test, y_test))
# +
from mlxtend.evaluate import plot_decision_regions
plot_decision_regions(X=X, y=y, clf=lr, X_highlight=X_test)
plt.xlabel('sepal length [cm]')
plt.ylabel('sepal width [cm]');
# -
# ### K-Nearest Neighbors
# +
from sklearn.neighbors import KNeighborsClassifier
kn = KNeighborsClassifier(n_neighbors=4)
kn.fit(X_train, y_train)
print('Test accuracy %.2f' % kn.score(X_test, y_test))
# -
plot_decision_regions(X=X, y=y, clf=kn, X_highlight=X_test)
plt.xlabel('sepal length [cm]')
plt.ylabel('sepal width [cm]');
# ### 3 - Exercises
# - Which of the two models above would you prefer if you had to choose? Why?
# - What would be possible ways to resolve ties in KNN when `n_neighbors` is an even number?
# - Can you find the right spot in the scikit-learn documentation to read about how scikit-learn handles this?
# - Train & evaluate the Logistic Regression and KNN algorithms on the 4-dimensional iris datasets.
# - What performance do you observe?
# - Why is it different vs. using only 2 dimensions?
# - Would adding more dimensions help?
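# For the tie question above, one possible strategy is a distance-weighted vote; this is a sketch of the idea, not necessarily how scikit-learn resolves ties:

```python
from collections import defaultdict

def weighted_vote(neighbors):
    """Pick a label from (distance, label) pairs, weighting each vote by 1/distance.

    With an even n_neighbors a plain majority vote can tie; weighting by
    inverse distance is one way to make exact ties far less likely.
    """
    scores = defaultdict(float)
    for dist, label in neighbors:
        scores[label] += 1.0 / dist
    return max(scores, key=scores.get)
```

# In the second case below, two neighbors per class would tie under a plain majority vote, but the closer 'a' neighbors win the weighted vote.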
# <div style='height:100px;'></div>
# # 4 - Feature Preprocessing & scikit-learn Pipelines
# ### Categorical features: nominal vs ordinal
# +
import pandas as pd
df = pd.DataFrame([
['green', 'M', 10.0],
['red', 'L', 13.5],
['blue', 'XL', 15.3]])
df.columns = ['color', 'size', 'prize']
df
# +
from sklearn.feature_extraction import DictVectorizer
dvec = DictVectorizer(sparse=False)
X = dvec.fit_transform(df.transpose().to_dict().values())
X
# +
size_mapping = {
'XL': 3,
'L': 2,
'M': 1}
df['size'] = df['size'].map(size_mapping)
df
# -
X = dvec.fit_transform(df.transpose().to_dict().values())
X
# ### Normalization
df = pd.DataFrame([1., 2., 3., 4., 5., 6.], columns=['feature'])
df
# +
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
mmxsc = MinMaxScaler()
stdsc = StandardScaler()
X = df['feature'].values[:, np.newaxis]
df['minmax'] = mmxsc.fit_transform(X)
df['z-score'] = stdsc.fit_transform(X)
df
# -
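# Both scalers reduce to one-line formulas; a quick NumPy check of the same transforms (StandardScaler uses the population standard deviation, i.e. `ddof=0`):

```python
import numpy as np

x = np.array([1., 2., 3., 4., 5., 6.])

# Min-max scaling squeezes the values into [0, 1].
minmax = (x - x.min()) / (x.max() - x.min())

# Z-score standardization: zero mean, unit (population) standard deviation.
zscore = (x - x.mean()) / x.std()
```

# These reproduce the `minmax` and `z-score` columns computed by the scikit-learn scalers above.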
# ### Pipelines
# +
from sklearn.pipeline import make_pipeline
from sklearn.cross_validation import train_test_split
from sklearn.datasets import load_iris
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=123,
stratify=y)
lr = LogisticRegression(solver='newton-cg',
multi_class='multinomial',
random_state=1)
lr_pipe = make_pipeline(StandardScaler(), lr)
lr_pipe.fit(X_train, y_train)
lr_pipe.score(X_test, y_test)
# -
lr_pipe.named_steps
lr_pipe.named_steps['standardscaler'].transform(X[:5])
# ### 4 - Exercises
# - Why is it important that we scale test and training sets separately?
# - Fit a KNN classifier to the standardized Iris dataset. Do you notice difference in the predictive performance of the model compared to the non-standardized one? Why or why not?
# <div style='height:100px;'></div>
# # 5 - Dimensionality Reduction: Feature Selection & Extraction
# +
from sklearn.cross_validation import train_test_split
from sklearn.datasets import load_iris
iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=123, stratify=y)
# -
# ### Recursive Feature Elimination
# +
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFECV
lr = LogisticRegression()
rfe = RFECV(lr, step=1, cv=5, scoring='accuracy')
rfe.fit(X_train, y_train)
print('Number of features:', rfe.n_features_)
print('Feature ranking', rfe.ranking_)
# -
# ### Sequential Feature Selection
# +
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from mlxtend.feature_selection import plot_sequential_feature_selection as plot_sfs
sfs = SFS(lr, k_features=4, forward=True, floating=False, cv=5)
sfs.fit(X_train, y_train)
sfs = SFS(lr,
k_features=4,
forward=True,
floating=False,
scoring='accuracy',
cv=2)
sfs = sfs.fit(X, y)
fig1 = plot_sfs(sfs.get_metric_dict())
plt.ylim([0.8, 1])
plt.title('Sequential Forward Selection (w. StdDev)')
plt.grid()
# -
sfs.subsets_
# ### Principal Component Analysis
# +
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
pca = PCA(n_components=4)
pca.fit_transform(X_train, y_train)
var_exp = pca.explained_variance_ratio_
cum_var_exp = np.cumsum(var_exp)
idx = [i for i in range(len(var_exp))]
labels = [str(i + 1) for i in idx]
with plt.style.context('seaborn-whitegrid'):
plt.bar(range(4), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(4), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.xticks(idx, labels)
plt.legend(loc='center right')
plt.tight_layout()
plt.show()
# +
X_train_pca = pca.transform(X_train)
for lab, col, mar in zip((0, 1, 2),
('blue', 'red', 'green'),
('o', 's', '^')):
plt.scatter(X_train_pca[y_train == lab, 0],
X_train_pca[y_train == lab, 1],
label=lab,
marker=mar,
c=col)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.legend(loc='lower right')
plt.tight_layout()
# -
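# The explained-variance ratios can be reproduced from first principles: PCA diagonalizes the covariance matrix of the centered data, and each ratio is an eigenvalue divided by their sum. A sketch on random correlated data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Mix 4 independent features through a random matrix to correlate them.
X = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 4))

Xc = X - X.mean(axis=0)                  # center each feature
cov = Xc.T @ Xc / (len(Xc) - 1)          # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
eigvals = eigvals[::-1]                  # largest variance first
var_ratio = eigvals / eigvals.sum()
```

# The ratios are non-negative, non-increasing, and sum to one, matching the shape of `explained_variance_ratio_` above.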
# <div style='height:100px;'></div>
# # 6 - Model Evaluation & Hyperparameter Tuning
# ### Wine Dataset
# +
from mlxtend.data import wine_data
X, y = wine_data()
# -
# Wine dataset.
#
# Source : https://archive.ics.uci.edu/ml/datasets/Wine
#
# Number of samples : 178
#
# Class labels : {0, 1, 2}, distribution: [59, 71, 48]
#
# Dataset Attributes:
#
# 1. Alcohol
# 2. Malic acid
# 3. Ash
# 4. Alcalinity of ash
# 5. Magnesium
# 6. Total phenols
# 7. Flavanoids
# 8. Nonflavanoid phenols
# 9. Proanthocyanins
# 10. Color intensity
# 11. Hue
# 12. OD280/OD315 of diluted wines
# 13. Proline
#
# ### Stratified K-Fold
# +
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.cross_validation import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier as KNN
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=123, stratify=y)
pipe_kn = make_pipeline(StandardScaler(),
PCA(n_components=1),
KNN(n_neighbors=3))
kfold = StratifiedKFold(y=y_train,
n_folds=10,
random_state=1)
scores = []
for k, (train, test) in enumerate(kfold):
pipe_kn.fit(X_train[train], y_train[train])
score = pipe_kn.score(X_train[test], y_train[test])
scores.append(score)
print('Fold: %s, Class dist.: %s, Acc: %.3f' % (k+1,
np.bincount(y_train[train]), score))
print('\nCV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
# +
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(estimator=pipe_kn,
X=X_train,
y=y_train,
cv=10,
n_jobs=2)
print('CV accuracy scores: %s' % scores)
print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
# -
# ### Grid Search
pipe_kn.named_steps
# +
from sklearn.grid_search import GridSearchCV
param_grid = {'pca__n_components': [1, 2, 3, 4, 5, 6, None],
'kneighborsclassifier__n_neighbors': [1, 3, 5, 7, 9, 11]}
gs = GridSearchCV(estimator=pipe_kn,
param_grid=param_grid,
scoring='accuracy',
cv=10,
n_jobs=2,
refit=True)
gs = gs.fit(X_train, y_train)
print(gs.best_score_)
print(gs.best_params_)
# -
gs.score(X_test, y_test)
| code/tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="GdiYiYOH4Aaa"
# <a href="https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-3-public/blob/main/Course%202%20-%20Custom%20Training%20loops%2C%20Gradients%20and%20Distributed%20Training/Week%204%20-%20Distribution%20Strategy/C2_W4_Lab_3_using-TPU-strategy.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="v0qhKkwD4Aab"
# # TPU Strategy
#
# In this ungraded lab you'll learn to set up the TPU Strategy. It is recommended that you run this notebook in Colab by clicking the badge above; this will give you access to a TPU as mentioned in the walkthrough video. Make sure you set your runtime to `TPU`.
# + [markdown] id="9xK02JDqVbTe"
# ## Imports
# + id="EP04AdUeTh8_"
import os
import random
try:
    # %tensorflow_version only exists in Colab.
    # %tensorflow_version 2.x
    pass
except Exception:
    pass
import tensorflow as tf
print("TensorFlow version " + tf.__version__)
AUTO = tf.data.experimental.AUTOTUNE
# + [markdown] id="UaKGHPjWkcVj"
# ## Set up TPUs and initialize TPU Strategy
#
# Be sure to change the runtime type to TPU: Runtime -> Change runtime type -> TPU
# + id="tmv6p137kgob"
# Detect hardware
try:
tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']
tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu_address) # TPU detection
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
# Going back and forth between TPU and host is expensive.
# Better to run 128 batches on the TPU before reporting back.
print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
print("Number of accelerators: ", strategy.num_replicas_in_sync)
except ValueError:
print('TPU failed to initialize.')
# + [markdown] id="w9S3uKC_iXY5"
# ## Download the Data from Google Cloud Storage
#
# + id="l7zy9Ze98Ip9"
SIZE = 224 #@param ["192", "224", "331", "512"] {type:"raw"}
IMAGE_SIZE = [SIZE, SIZE]
# + id="M3G-2aUBQJ-H"
GCS_PATTERN = 'gs://flowers-public/tfrecords-jpeg-{}x{}/*.tfrec'.format(IMAGE_SIZE[0], IMAGE_SIZE[1])
BATCH_SIZE = 128 # On TPU in Keras, this is the per-core batch size. The global batch size is 8x this.
VALIDATION_SPLIT = 0.2
CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'] # do not change, maps to the labels in the data (folder names)
# splitting data files between training and validation
filenames = tf.io.gfile.glob(GCS_PATTERN)
random.shuffle(filenames)
split = int(len(filenames) * VALIDATION_SPLIT)
training_filenames = filenames[split:]
validation_filenames = filenames[:split]
print("Pattern matches {} data files. Splitting dataset into {} training files and {} validation files".format(len(filenames), len(training_filenames), len(validation_filenames)))
validation_steps = int(3670 // len(filenames) * len(validation_filenames)) // BATCH_SIZE
steps_per_epoch = int(3670 // len(filenames) * len(training_filenames)) // BATCH_SIZE
print("With a batch size of {}, there will be {} batches per training epoch and {} batch(es) per validation run.".format(BATCH_SIZE, steps_per_epoch, validation_steps))
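# The two lines above first integer-divide the total image count by the file
# count to get images per file, then scale by the file split and divide by the
# batch size. A worked example with assumed numbers (16 TFRecord files is an
# illustration, not necessarily what the GCS pattern actually matches):

```python
# 3670 total images spread over 16 files, 20% validation split, batch size 128
total_images, n_files, batch_size = 3670, 16, 128

n_val_files = int(n_files * 0.2)           # 3 validation files
n_train_files = n_files - n_val_files      # 13 training files
images_per_file = total_images // n_files  # 229

steps_per_epoch = images_per_file * n_train_files // batch_size
validation_steps = images_per_file * n_val_files // batch_size

assert steps_per_epoch == 23    # 229 * 13 // 128
assert validation_steps == 5    # 229 * 3 // 128
```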
# + [markdown] id="kvPXiovhi3ZZ"
# ## Create a dataset from the files
#
# - `load_dataset` takes the filenames and turns them into a `tf.data.Dataset`
# - `read_tfrecord` parses a TFRecord into the image and its class label
# - Helper functions batch the data into training and validation sets
#
# + id="LtAVr-4CP1rp"
def read_tfrecord(example):
features = {
"image": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring
"class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar
"one_hot_class": tf.io.VarLenFeature(tf.float32),
}
example = tf.io.parse_single_example(example, features)
image = example['image']
class_label = example['class']
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.resize(image, [224, 224])
image = tf.cast(image, tf.float32) / 255.0 # convert image to floats in [0, 1] range
class_label = tf.cast(class_label, tf.int32)
return image, class_label
def load_dataset(filenames):
# read from TFRecords. For optimal performance, use "interleave(tf.data.TFRecordDataset, ...)"
# to read from multiple TFRecord files at once and set the option experimental_deterministic = False
# to allow order-altering optimizations.
option_no_order = tf.data.Options()
option_no_order.experimental_deterministic = False
dataset = tf.data.Dataset.from_tensor_slices(filenames)
dataset = dataset.with_options(option_no_order)
dataset = dataset.interleave(tf.data.TFRecordDataset, cycle_length=16, num_parallel_calls=AUTO) # faster
dataset = dataset.map(read_tfrecord, num_parallel_calls=AUTO)
return dataset
def get_batched_dataset(filenames):
dataset = load_dataset(filenames)
dataset = dataset.shuffle(2048)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=False) # drop_remainder=True would be needed for static shapes with Keras on TPU
dataset = dataset.prefetch(AUTO) # prefetch next batch while training (autotune prefetch buffer size)
return dataset
def get_training_dataset():
dataset = get_batched_dataset(training_filenames)
dataset = strategy.experimental_distribute_dataset(dataset)
return dataset
def get_validation_dataset():
dataset = get_batched_dataset(validation_filenames)
dataset = strategy.experimental_distribute_dataset(dataset)
return dataset
# + [markdown] id="dMfenMQcxAAb"
# ## Define the Model and training parameters
# + id="4osPuniEF4Zv"
class MyModel(tf.keras.Model):
def __init__(self, classes):
super(MyModel, self).__init__()
self._conv1a = tf.keras.layers.Conv2D(kernel_size=3, filters=16, padding='same', activation='relu')
self._conv1b = tf.keras.layers.Conv2D(kernel_size=3, filters=30, padding='same', activation='relu')
self._maxpool1 = tf.keras.layers.MaxPooling2D(pool_size=2)
self._conv2a = tf.keras.layers.Conv2D(kernel_size=3, filters=60, padding='same', activation='relu')
self._maxpool2 = tf.keras.layers.MaxPooling2D(pool_size=2)
self._conv3a = tf.keras.layers.Conv2D(kernel_size=3, filters=90, padding='same', activation='relu')
self._maxpool3 = tf.keras.layers.MaxPooling2D(pool_size=2)
self._conv4a = tf.keras.layers.Conv2D(kernel_size=3, filters=110, padding='same', activation='relu')
self._maxpool4 = tf.keras.layers.MaxPooling2D(pool_size=2)
self._conv5a = tf.keras.layers.Conv2D(kernel_size=3, filters=130, padding='same', activation='relu')
self._conv5b = tf.keras.layers.Conv2D(kernel_size=3, filters=40, padding='same', activation='relu')
self._pooling = tf.keras.layers.GlobalAveragePooling2D()
self._classifier = tf.keras.layers.Dense(classes, activation='softmax')
def call(self, inputs):
x = self._conv1a(inputs)
x = self._conv1b(x)
x = self._maxpool1(x)
x = self._conv2a(x)
x = self._maxpool2(x)
x = self._conv3a(x)
x = self._maxpool3(x)
x = self._conv4a(x)
x = self._maxpool4(x)
x = self._conv5a(x)
x = self._conv5b(x)
x = self._pooling(x)
x = self._classifier(x)
return x
# + id="vrj3bKq1HYfJ"
with strategy.scope():
model = MyModel(classes=len(CLASSES))
# Set reduction to `none` so we can do the reduction afterwards and divide by
# global batch size.
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
reduction=tf.keras.losses.Reduction.NONE)
def compute_loss(labels, predictions):
per_example_loss = loss_object(labels, predictions)
return tf.nn.compute_average_loss(per_example_loss, global_batch_size=BATCH_SIZE * strategy.num_replicas_in_sync)
test_loss = tf.keras.metrics.Mean(name='test_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='train_accuracy')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='test_accuracy')
optimizer = tf.keras.optimizers.Adam()
@tf.function
def distributed_train_step(dataset_inputs):
per_replica_losses = strategy.run(train_step,args=(dataset_inputs,))
print(per_replica_losses)
return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,
axis=None)
@tf.function
def distributed_test_step(dataset_inputs):
strategy.run(test_step, args=(dataset_inputs,))
def train_step(inputs):
images, labels = inputs
with tf.GradientTape() as tape:
predictions = model(images)
loss = compute_loss(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_accuracy.update_state(labels, predictions)
return loss
def test_step(inputs):
images, labels = inputs
predictions = model(images)
loss = loss_object(labels, predictions)
test_loss.update_state(loss)
test_accuracy.update_state(labels, predictions)
# + id="bfuYf6RSL8lj"
EPOCHS = 40
with strategy.scope():
for epoch in range(EPOCHS):
# TRAINING LOOP
total_loss = 0.0
num_batches = 0
for x in get_training_dataset():
total_loss += distributed_train_step(x)
num_batches += 1
train_loss = total_loss / num_batches
# TESTING LOOP
for x in get_validation_dataset():
distributed_test_step(x)
template = ("Epoch {}, Loss: {:.2f}, Accuracy: {:.2f}, Test Loss: {:.2f}, "
"Test Accuracy: {:.2f}")
print (template.format(epoch+1, train_loss,
train_accuracy.result()*100, test_loss.result() / strategy.num_replicas_in_sync,
test_accuracy.result()*100))
test_loss.reset_states()
train_accuracy.reset_states()
test_accuracy.reset_states()
# + [markdown] id="MKFMWzh0Yxsq"
# ## Predictions
# + id="G2pso53Yd6Kg"
#@title display utilities [RUN ME]
import matplotlib.pyplot as plt
def dataset_to_numpy_util(dataset, N):
dataset = dataset.batch(N)
if tf.executing_eagerly():
# In eager mode, iterate over the Dataset directly.
for images, labels in dataset:
numpy_images = images.numpy()
numpy_labels = labels.numpy()
break
else: # In non-eager mode, must get the TF node that
# yields the next item and run it in a tf.Session.
get_next_item = dataset.make_one_shot_iterator().get_next()
with tf.Session() as ses:
numpy_images, numpy_labels = ses.run(get_next_item)
return numpy_images, numpy_labels
def title_from_label_and_target(label, correct_label):
label = np.argmax(label, axis=-1) # one-hot to class number
# correct_label = np.argmax(correct_label, axis=-1) # one-hot to class number
correct = (label == correct_label)
return "{} [{}{}{}]".format(CLASSES[label], str(correct), ', should be ' if not correct else '',
CLASSES[correct_label] if not correct else ''), correct
def display_one_flower(image, title, subplot, red=False):
plt.subplot(subplot)
plt.axis('off')
plt.imshow(image)
plt.title(title, fontsize=16, color='red' if red else 'black')
return subplot+1
def display_9_images_from_dataset(dataset):
subplot=331
plt.figure(figsize=(13,13))
images, labels = dataset_to_numpy_util(dataset, 9)
for i, image in enumerate(images):
title = CLASSES[np.argmax(labels[i], axis=-1)]
subplot = display_one_flower(image, title, subplot)
if i >= 8:
break
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_9_images_with_predictions(images, predictions, labels):
subplot=331
plt.figure(figsize=(13,13))
for i, image in enumerate(images):
title, correct = title_from_label_and_target(predictions[i], labels[i])
subplot = display_one_flower(image, title, subplot, not correct)
if i >= 8:
break
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor('#F8F8F8')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
# + id="ExtZuDlh2Lem"
inference_model = model
# + id="oKx3gVmxhhZ2"
some_flowers, some_labels = dataset_to_numpy_util(load_dataset(validation_filenames), 8*20)
# + id="ehlsvY46Hs9z"
import numpy as np
# randomize the input so that you can execute multiple times to change results
permutation = np.random.permutation(8*20)
some_flowers, some_labels = (some_flowers[permutation], some_labels[permutation])
predictions = inference_model(some_flowers)
print(np.array(CLASSES)[np.argmax(predictions, axis=-1)].tolist())
display_9_images_with_predictions(some_flowers, predictions, some_labels)
| Course 2 - Custom Training loops, Gradients and Distributed Training/Week 4 - Distribution Strategy/C2_W4_Lab_3_using-TPU-strategy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# THIS WILL TAKE A LONG TIME, BUT ONLY NEEDS TO BE RUN ONCE
# - You can run this from the command line should you wish
# path to store processed data_tiles; can be anywhere:
path = '/home/jovyan/gtc-exposure/cloud_free/tiles/'
# This downloads, lazy loads, splits the tiles, then removes the original tif;
# the results are saved in the directory specified above
# !python3 cloud_free.py {path}
# +
import os
import pandas as pd
import rasterio
from rasterio.warp import transform_bounds
from shapely.geometry import Polygon, MultiPolygon, mapping
from cloud_free import retrieve_image, split_tiles, polygons, test_areas, label_informal_settlement
in_path ='/home/jovyan/gtc-exposure/cloud_free/tiles/'
### Generate training data ###
outpath_train = '/home/jovyan/gtc-exposure/cloud_free/train_images/'
#extract from tiles the relevant area that corresponds to the polygon
retrieve_image(in_path, outpath_train, polygons())
#split images into training size 64*64
split_tiles(outpath_train, 64)
#create jpeg files and normalise images
# !python3 pytorch_preprocess.py {outpath_train + 'tif/'}
#Generate test data
outpath_test = '/home/jovyan/gtc-exposure/cloud_free/test_images/'
retrieve_image(in_path, outpath_test, test_areas())
split_tiles(outpath_test, 64)
label_informal_settlement(outpath_test)
# !python3 pytorch_preprocess.py {outpath_test + 'tif/'}
# +
import os
import sys
import numpy as np
import pandas as pd
import geopandas as gpd
import gdal
import pyproj
import fiona
import rasterio
from rasterio.mask import mask
from rasterio.warp import transform_bounds
import matplotlib.pyplot as plt
from shapely.geometry import Polygon, MultiPolygon, mapping
from shapely.ops import transform
try:
import rioxarray as rxr
except ModuleNotFoundError:
os.system("pip install rioxarray")
import rioxarray as rxr
#view images from tif files
def view_image(image_dir):
columns = ['Informal', 'geometry']
df = pd.DataFrame(columns = columns)
for file in os.listdir(image_dir):
if file.endswith('.tif'):
#normalise images to plot images
s2_cloud_free = rxr.open_rasterio(image_dir+file, masked=True).squeeze()
red = s2_cloud_free[0]/s2_cloud_free.max()
green = s2_cloud_free[1]/s2_cloud_free.max()
blue = s2_cloud_free[2]/s2_cloud_free.max()
s2_cloud_free_norm = np.dstack((red, green, blue))
plt.figure(figsize=(20,10))
plt.imshow(s2_cloud_free_norm)
plt.show()
#create dataframe storing tif information
tif = rasterio.open(image_dir+file)
if file.startswith('inf'):
is_inf = 'inf'
else:
is_inf = 'not inf'
coordinates = transform_bounds(tif.crs, 'EPSG:4326', tif.bounds.left,
tif.bounds.bottom,
tif.bounds.right,
tif.bounds.top)
left, bottom, right, top = coordinates
Geometry = Polygon([[left, top], [right, top], [right ,bottom], [left, bottom], [left, top]])
df.loc[len(df)]= [is_inf, Geometry]
#convert dataframe to geopandas dataframe for ease of plotting
df = gpd.GeoDataFrame(df, geometry='geometry', crs = 4326)
return df
df = view_image('/home/jovyan/gtc-exposure/cloud_free/test_images/tif/')
df.plot()
| settlement_segmentation/data/cloud_free/cloud_free.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
np.zeros(3)
np.ones((4,5))
np.zeros((4,5))
np.full((4,5),3.14)
x = np.full((10,20),4)
x
x.ndim
x.shape
x.size
x.dtype
x[0,1]
x[0,1] =3
x
x[0,1] =3.14
x
x= np.random.randint(10,size = 6)
x
x[0:2]
x[-2]
def calcule_inverse(value):
output = np.empty(len(value))
for i in range(len(value)):
output[i]=1.0/value[i]
return output
tableau_large = np.random.randint(1,10, size = 1000000)
# %timeit calcule_inverse(tableau_large)
# %timeit (1/tableau_large)
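# The two %timeit cells above compare speed only; a quick sanity check (a small
# sketch added here, not part of the original notebook) confirms the vectorized
# ufunc 1/arr computes exactly the same values as the explicit loop:

```python
import numpy as np

def loop_inverse(values):
    # same element-by-element loop as calcule_inverse above
    output = np.empty(len(values))
    for i in range(len(values)):
        output[i] = 1.0 / values[i]
    return output

arr = np.array([1, 2, 4, 5], dtype=float)
assert np.allclose(loop_inverse(arr), 1 / arr)
```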
np.array([range(i,i+3) for i in [2,4,6]])
np.linspace(0, 1, 5)
np.eye(10)
x=np.random.randint(10, size=6)
x
x.size
x.shape
| NumPy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Principal Component Analysis
# ## 1 Load the Data
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
mat = loadmat("./data/ex7data1.mat")
X = mat["X"]
X.shape
plt.figure(figsize=(7,5))
plot = plt.scatter(X[:,0], X[:,1], s=30, facecolors='none', edgecolors='b')
plt.title("Example Dataset",fontsize=18)
plt.grid(True)
# ## 2 Normalize the Data
def features_normalize(X):
mean = X.mean(axis=0)
std = X.std(axis=0)
return mean, std, (X - mean) / std
mean, std, norm_X = features_normalize(X)
mean.shape, std.shape, norm_X.shape
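# A quick sanity check of the z-score normalization above on synthetic data
# (a sketch with made-up numbers, not the course dataset): after
# (X - mean) / std, every column has mean ~0 and standard deviation ~1.

```python
import numpy as np

rng = np.random.RandomState(42)
X_demo = rng.randn(100, 2) * [5.0, 0.1] + [10.0, -3.0]  # two very different scales

mean_d, std_d = X_demo.mean(axis=0), X_demo.std(axis=0)
Xn_demo = (X_demo - mean_d) / std_d

assert np.allclose(Xn_demo.mean(axis=0), 0.0, atol=1e-10)
assert np.allclose(Xn_demo.std(axis=0), 1.0, atol=1e-10)
```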
# ## 3 PCA
from scipy.linalg import svd
def get_USV(norm_X):
cov_mat = (norm_X.T @ norm_X) / len(norm_X)
return svd(cov_mat, full_matrices=True, compute_uv=True)
U, S, V = get_USV(norm_X)
U.shape, S.shape, V.shape
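# What get_USV gives back, shown on synthetic data (a sketch using NumPy's svd,
# which behaves like the scipy call above): because the covariance matrix is
# symmetric, U is orthogonal and its columns are the principal directions, with
# the singular values in S sorted in descending order.

```python
import numpy as np

rng = np.random.RandomState(0)
X_demo = rng.randn(200, 3) @ np.diag([3.0, 1.0, 0.3])  # three anisotropic features
Xn_demo = (X_demo - X_demo.mean(axis=0)) / X_demo.std(axis=0)

cov = (Xn_demo.T @ Xn_demo) / len(Xn_demo)
U_d, S_d, V_d = np.linalg.svd(cov)

assert np.allclose(U_d @ U_d.T, np.eye(3), atol=1e-8)  # orthonormal columns
assert np.all(np.diff(S_d) <= 0)                       # descending singular values
```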
means = mean
# +
# "...output the top principal component (eigenvector) found,
# and you should expect to see an output of about [-0.707 -0.707]"
print('Top principal component is ',U[:,0])
#Quick plot, now including the principal component
plt.figure(figsize=(7,5))
plot = plt.scatter(X[:,0], X[:,1], s=30, facecolors='none', edgecolors='b')
plt.title("Example Dataset: PCA Eigenvectors Shown",fontsize=18)
plt.xlabel('x1',fontsize=18)
plt.ylabel('x2',fontsize=18)
plt.grid(True)
#To draw the principal component, you draw them starting
#at the mean of the data
plt.plot([means[0], means[0] + 1.5*S[0]*U[0,0]],
[means[1], means[1] + 1.5*S[0]*U[1,0]],
color='red',linewidth=3,
label='First Principal Component')
plt.plot([means[0], means[0] + 1.5*S[1]*U[0,1]],
[means[1], means[1] + 1.5*S[1]*U[1,1]],
color='fuchsia',linewidth=3,
label='Second Principal Component')
leg = plt.legend(loc=4)
# -
# ## 4 Project the Data
def project_data(X, U, k):
Ureduced = U[:, :k]
return X @ Ureduced
z = project_data(norm_X, U, 1)
z[0]
# ## 5 Recover the Data
def recover_data(z, U, k):
Ureduced = U[:, :k]
return z @ Ureduced.T
re_X = recover_data(z, U, 1)
re_X[0]
norm_X[0]
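# The round trip recover_data(project_data(X, U, k), U, k) is exactly the
# orthogonal projection X @ Ur @ Ur.T onto the span of the top-k components,
# where Ur = U[:, :k]. A sketch on synthetic data (not the course dataset):

```python
import numpy as np

rng = np.random.RandomState(1)
Xd = rng.randn(50, 4)
U_d, S_d, V_d = np.linalg.svd((Xd.T @ Xd) / len(Xd))

k = 2
Ur = U_d[:, :k]
zd = Xd @ Ur          # project_data
Xr = zd @ Ur.T        # recover_data

assert zd.shape == (50, k)
assert np.allclose(Xr, Xd @ Ur @ Ur.T)
# the recovered points live in a k-dimensional subspace of the original space
assert np.linalg.matrix_rank(Xr) == k
```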
# ## 6 Visualization
# +
#Quick plot, now drawing projected points to the original points
plt.figure(figsize=(7,5))
plot = plt.scatter(norm_X[:,0], norm_X[:,1], s=30, facecolors='none',
edgecolors='b',label='Original Data Points')
plot = plt.scatter(re_X[:,0], re_X[:,1], s=30, facecolors='none',
edgecolors='r',label='PCA Reduced Data Points')
plt.title("Example Dataset: Reduced Dimension Points Shown",fontsize=14)
plt.xlabel('x1 [Feature Normalized]',fontsize=14)
plt.ylabel('x2 [Feature Normalized]',fontsize=14)
plt.grid(True)
for x in range(norm_X.shape[0]):
plt.plot([norm_X[x,0],re_X[x,0]],[norm_X[x,1],re_X[x,1]],'k--')
leg = plt.legend(loc=4)
#Force square axes to make projections look better
dummy = plt.xlim((-2.5,2.5))
dummy = plt.ylim((-2.5,2.5))
# -
# ## Image Compression
mat = loadmat('./data/ex7faces.mat')
X = mat['X']
X.shape
import scipy.misc
from matplotlib import cm
# +
def getDatumImg(row):
"""
Function that is handed a single np array with shape 1x1024,
creates an image object from it, and returns it
"""
width, height = 32, 32
square = row.reshape(width,height)
return square.T
def displayData(myX, mynrows = 10, myncols = 10):
"""
Function that picks the first 100 rows from X, creates an image from each,
then stitches them together into a 10x10 grid of images, and shows it.
"""
width, height = 32, 32
nrows, ncols = mynrows, myncols
big_picture = np.zeros((height*nrows,width*ncols))
irow, icol = 0, 0
for idx in range(nrows*ncols):
if icol == ncols:
irow += 1
icol = 0
iimg = getDatumImg(myX[idx])
big_picture[irow*height:irow*height+iimg.shape[0],icol*width:icol*width+iimg.shape[1]] = iimg
icol += 1
fig = plt.figure(figsize=(10,10))
#img = scipy.misc.toimage( big_picture )
img = big_picture
plt.imshow(img,cmap = cm.Greys_r)
# -
displayData(X)
# Feature normalize
means, stds, X_norm = features_normalize(X)
# Run SVD
U, S, V = get_USV(X_norm)
# Visualize the top 36 eigenvectors found
# "Eigenfaces" lol
displayData(U[:,:36].T,mynrows=6,myncols=6)
z = project_data(X_norm, U, 36)
z.shape
# Attempt to recover the original data
X_rec = recover_data(z, U, 36)
# Plot the dimension-reduced data
displayData(X_rec)
from skimage.io import imread
im = imread("./data/bird_small.png")
im = im / 255
plt.imshow(im)
im = im.reshape((-1, 3))
A = im
means, stds, A_norm = features_normalize(A)
# Run SVD
U, S, V = get_USV(A_norm)
z = project_data(A_norm, U, 2)
# randomly pick k samples from the data as the initial centroids
def random_init(data, k):
import time
r = np.random.RandomState(int(time.time()))
return data[r.randint(0, len(data), k)]
def dist(X, centroids):
d = X.reshape(-1, 1, X.shape[-1]) - centroids
d = (d * d).sum(axis=2)
return d
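# dist relies on NumPy broadcasting: reshaping X to (n, 1, d) against centroids
# of shape (k, d) yields an (n, k, d) difference array, so summing squares over
# the last axis gives every sample-to-centroid squared distance at once. A tiny
# sketch with made-up points:

```python
import numpy as np

pts = np.array([[0.0, 0.0],
                [3.0, 4.0]])
cents = np.array([[0.0, 0.0],
                  [3.0, 0.0]])

# (2, 1, 2) - (2, 2) broadcasts to (2, 2, 2); summing axis=2 gives (2, 2)
diff = pts.reshape(-1, 1, pts.shape[-1]) - cents
sq_dist = (diff * diff).sum(axis=2)

# row i, column j = squared distance from sample i to centroid j
assert sq_dist.shape == (2, 2)
assert np.allclose(sq_dist, [[0.0, 9.0],
                             [25.0, 16.0]])
```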
def K_means(X, k, centroids=None, epoches=10):
if centroids is None:
centroids = random_init(X, k)
centroids_history = [centroids]
cost = []
m = len(X)
c = None
for i in range(epoches):
# find the nearest centroid for each sample
d = dist(X, centroids)
# c[n] is the index of the centroid closest to sample n
c = d.argmin(axis=1)
# md is each sample's squared distance to its nearest centroid
md = d.min(axis=1)
# K-means cost for this iteration
cost.append(md.sum() / m)
# update the centroid positions
new_centroids = np.empty_like(centroids)
for j in range(k):
# select all samples assigned to cluster j
kX = X[c == j]
# move the centroid to the cluster mean
new_centroids[j] = kX.mean(axis=0)
centroids_history.append(new_centroids)
centroids = new_centroids
return c, centroids_history, cost
def best_KMeans(X, k, times=10, epoches=10):
best_c = None
best_hist = None
best_cost = None
min_cost = float('inf')
for i in range(times):
c, hist, cost = K_means(X, k, None, epoches)
if cost[-1] < min_cost:
min_cost = cost[-1]
best_cost = cost
best_hist = hist
best_c = c
return best_c, best_hist, best_cost
c, hist, cost = best_KMeans(A, 16, 10, 5)
c
# +
# Make the 2D plot of the PCA-projected pixels, colored by K-means cluster
subX = []
for x in range(len(np.unique(c))):
subX.append(np.array([z[i] for i in range(z.shape[0]) if c[i] == x]))
fig = plt.figure(figsize=(8,8))
for x in range(len(subX)):
newX = subX[x]
plt.plot(newX[:,0],newX[:,1],'.',alpha=0.3)
plt.xlabel('z1',fontsize=14)
plt.ylabel('z2',fontsize=14)
plt.title('PCA Projection Plot',fontsize=16)
plt.grid(True)
# -
| StudyNotesOfML/2 PCA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3-azureml
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
# # Hypertuning Using Hyperdrive
# + gather={"logged": 1612427930765}
# Import Dependencies
import logging
import os
import csv
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import datasets
import pkg_resources
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
# -
# ## Dataset
# + gather={"logged": 1612427943862}
#
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
experiment_name = 'housing-reg-3'
experiment=Experiment(ws, experiment_name)
experiment
# + gather={"logged": 1612423021868}
## upload the local file to a datastore on the cloud
# get the datastore to upload prepared data
datastore = ws.get_default_datastore()
# upload the local file from src_dir to the target_path in datastore
datastore.upload(src_dir='data', target_path='data')
# create a dataset referencing the cloud location
dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, ('data/housing_train.csv'))])
# + gather={"logged": 1612423032326}
#register the dataset
dataset = dataset.register(workspace=ws,
name='Housing Dataset',
description='House Price training data')
# -
# ## Aml-Compute
# + gather={"logged": 1612427957090}
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Create compute cluster
# max_nodes should be no greater than 4.
# choose a name for your cluster
cluster_name = "housing-compute"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS3_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it uses the scale settings for the cluster
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=30)
# use get_status() to get a detailed status for the current cluster.
print(compute_target.get_status().serialize())
# -
# ## HyperDrive Configuration
# + gather={"logged": 1612427964810}
from azureml.widgets import RunDetails
from azureml.core import ScriptRunConfig
from azureml.train.hyperdrive.run import PrimaryMetricGoal
from azureml.train.hyperdrive.policy import BanditPolicy
from azureml.train.hyperdrive.sampling import RandomParameterSampling
from azureml.train.hyperdrive.runconfig import HyperDriveConfig
from azureml.train.hyperdrive.parameter_expressions import choice
import os
from azureml.core import Environment
# TODO: Create an early termination policy. This is not required if you are using Bayesian sampling.
early_termination_policy = BanditPolicy(slack_factor = 0.1, evaluation_interval=1, delay_evaluation=5)
#TODO: Create the different params that you will be using during training
param_sampling = RandomParameterSampling({
'--alpha': choice(0.1,0.2,0.3,0.4),
'--max_iter': choice(10,100,1000)
})
# set up the hyperdrive environment
env = Environment.from_conda_specification(
name='l_env',
file_path='./l_env.yml'
)
#TODO: Create your estimator and hyperdrive config
#Estimators are deprecated with the 1.19.0 release of the Python SDK.
#https://docs.microsoft.com/en-us/azure/machine-learning/how-to-migrate-from-estimators-to-scriptrunconfig
src = ScriptRunConfig(source_directory='.',
script='train.py',
compute_target = compute_target,
environment=env)
hyperdrive_run_config = HyperDriveConfig(run_config=src,
hyperparameter_sampling=param_sampling,
policy=early_termination_policy,
primary_metric_name="root_mean_squared_error",
primary_metric_goal=PrimaryMetricGoal.MINIMIZE,
max_total_runs=20,
max_concurrent_runs=4)
# + gather={"logged": 1612427978681} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
#submit your experiment
hyperdrive_run=experiment.submit(hyperdrive_run_config, show_output=True)
# + gather={"logged": 1612427985444} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
from azureml.widgets import RunDetails
RunDetails(hyperdrive_run).show()
# + gather={"logged": 1612428611007} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
hyperdrive_run.wait_for_completion(show_output=False) # specify True for a verbose log
# + gather={"logged": 1612432002777} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
best_run = hyperdrive_run.get_best_run_by_primary_metric()
best_run_metrics = best_run.get_metrics()
parameter_values = best_run.get_details()['runDefinition']['arguments']
print('Best Run Id:',best_run.id)
print('\n Root Mean Squared Error', best_run_metrics['root_mean_squared_error'])
print (parameter_values)
# + gather={"logged": 1612432295548} jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
from azureml.core import Model
from azureml.core.resource_configuration import ResourceConfiguration
model = best_run.register_model(model_name='housing-reg',
model_path='outputs/model.joblib',
model_framework=Model.Framework.SCIKITLEARN,
model_framework_version='0.19.1',
resource_configuration=ResourceConfiguration(cpu=1, memory_in_gb=0.5))
| project_files/housing-prediction-using-hyperdrive.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CaBi ML fitting champion
#
# 5/26: This is the best performing model so far.
# * At this point, I've found that dc_pop is more predictive than the dock/station variables, and that cabi_active_members_day_key and daylight_hours are more predictive than cabi_active_members_monthly
# * Now we can try tweaking other things
#
# ## 0. Data load, shaping, and split
# * Read in data from AWS
# * Check for high pairwise correlation
# * Encode time variable (day_of_year) as cyclical
# * Split into Xtrain, Xtest, ytrain, ytest based on date
# * Specify feature and target columns
# +
# Read in data from AWS
from util_functions import *
import numpy as np
import pandas as pd
import time
start_time = time.perf_counter()
set_env_path()
conn, cur = aws_connect()
# fullquery contains all of the variables within consideration
fullquery = """
SELECT
EXTRACT(DOY FROM date) as day_of_year,
date,
daylight_hours,
apparenttemperaturehigh,
apparenttemperaturelow,
cloudcover,
dewpoint,
humidity,
precipaccumulation,
precipintensitymax,
precipprobability,
rain,
snow,
visibility,
windspeed,
us_holiday,
nats_single,
nats_double,
dc_bike_event,
dc_pop,
cabi_bikes_avail,
cabi_stations_alx,
cabi_stations_arl,
cabi_stations_ffx,
cabi_stations_mcn,
cabi_stations_mcs,
cabi_stations_wdc,
cabi_docks_alx,
cabi_docks_arl,
cabi_docks_ffx,
cabi_docks_mcn,
cabi_docks_mcs,
cabi_docks_wdc,
cabi_stations_tot,
cabi_docks_tot,
cabi_dur_empty_wdc,
cabi_dur_full_wdc,
cabi_dur_empty_arl,
cabi_dur_full_arl,
cabi_dur_full_alx,
cabi_dur_empty_alx,
cabi_dur_empty_mcs,
cabi_dur_full_mcs,
cabi_dur_full_mcn,
cabi_dur_empty_mcn,
cabi_dur_full_ffx,
cabi_dur_empty_ffx,
cabi_dur_empty_tot,
cabi_dur_full_tot,
cabi_active_members_day_key,
cabi_active_members_monthly,
cabi_active_members_annual,
cabi_trips_wdc_to_wdc,
cabi_trips_wdc_to_wdc_casual
from final_db"""
query = """
SELECT
EXTRACT(DOY FROM date) as day_of_year,
date,
daylight_hours,
apparenttemperaturehigh,
cloudcover,
humidity,
precipaccumulation,
precipintensitymax,
precipprobability,
rain,
snow,
visibility,
windspeed,
us_holiday,
nats_single,
nats_double,
dc_bike_event,
dc_pop,
cabi_dur_empty_arl,
cabi_dur_full_arl,
cabi_dur_full_alx,
cabi_dur_empty_alx,
cabi_dur_empty_mcs,
cabi_dur_full_mcs,
cabi_dur_full_mcn,
cabi_dur_empty_mcn,
cabi_trips_wdc_to_wdc,
cabi_trips_wdc_to_wdc_casual
from final_db"""
pd.options.display.max_rows = None
pd.options.display.max_columns = None
df = pd.read_sql(query, con=conn)
# Setting date to index for easier splitting
df.set_index(df.date, drop=True, inplace=True)
df.index = pd.to_datetime(df.index)
print("We have {} instances and {} features".format(*df.shape))
# +
# Summary statistics
df.describe(percentiles=[.5]).round(3).transpose()
# +
def print_highly_correlated(df, features, threshold=0.75):
"""
Prints highly correlated feature pairs in df.
"""
corr_df = df[features].corr()
# Select pairs above threshold
correlated_features = np.where(np.abs(corr_df) > threshold)
# Avoid duplication
correlated_features = [(corr_df.iloc[x,y], x, y) for x, y in zip(*correlated_features) if x != y and x < y]
# Sort by abs(correlation)
s_corr_list = sorted(correlated_features, key=lambda x: -abs(x[0]))
print("There are {} feature pairs with pairwise correlation above {}".format(len(s_corr_list), threshold))
for v, i, j in s_corr_list:
cols = df[features].columns
print("{} and {} = {:0.3f}".format(corr_df.index[i], corr_df.columns[j], v))
print_highly_correlated(df, df.columns)
# -
# Encode day_of_year as cyclical
df['sin_day_of_year'] = np.sin(2*np.pi*df.day_of_year/365)
df['cos_day_of_year'] = np.cos(2*np.pi*df.day_of_year/365)
# +
# %matplotlib inline
df.sample(100).plot.scatter('sin_day_of_year','cos_day_of_year').set_aspect('equal')
# -
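# The point of the sin/cos encoding: New Year's Eve (day 365) and New Year's
# Day (day 1) end up adjacent on the unit circle, whereas the raw day_of_year
# puts them 364 apart. A quick check (a sketch mirroring the encoding above):

```python
import numpy as np

def encode_day(day):
    # map day_of_year onto the unit circle, as in the cell above
    return np.array([np.sin(2 * np.pi * day / 365),
                     np.cos(2 * np.pi * day / 365)])

dist_wraparound = np.linalg.norm(encode_day(365) - encode_day(1))
dist_half_year = np.linalg.norm(encode_day(1) - encode_day(183))

# adjacent calendar days stay close across the year boundary...
assert dist_wraparound < 0.05
# ...while days half a year apart sit nearly a diameter (distance ~2) apart
assert dist_half_year > 1.9
```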
# * Split into Xtrain, Xtest, ytrain, ytest based on date
# * Training dates = 2013-01-01 to 2016-12-31
# * Test dates = 2017-01-01 to 2017-09-08
# * New data (coincides with beginning of dockless pilot) = 2017-09-09 to present
# +
# Train test split
# This can be tweaked, but we use 5-fold cross-validation to pick the model so that shouldn't change
train = df.loc['2013-01-01':'2016-12-31']
test = df.loc['2017-01-01':'2017-09-08']
print(train.shape, test.shape)
tr = train.shape[0]
te = test.shape[0]
trpct = 100 * tr/(tr+te)
tepct = 100 * te/(tr+te)
print("{:0.1f}% of the data is in the training set and {:0.1f}% is in the test set".format(trpct, tepct))
# +
# Specify columns to keep and drop for X and y
drop_cols = ['date', 'day_of_year']
y_cols = ['cabi_trips_wdc_to_wdc', 'cabi_trips_wdc_to_wdc_casual']
feature_cols = [col for col in df.columns if (col not in y_cols) & (col not in drop_cols)]
# X y split
Xtrain_raw = train[feature_cols]
# Our target variable here is all DC to DC trips
ytrain = train[y_cols[0]]
Xtest_raw = test[feature_cols]
ytest = test[y_cols[0]]
print(Xtrain_raw.shape, ytrain.shape, Xtest_raw.shape, ytest.shape)
# -
# ### 1. Preprocessing
#
# We want to use PolynomialFeatures and StandardScaler in a Pipeline, but we only want to scale continuous features.
#
# Here, I do the polynomial transformation first and then feed it through a pipeline because I wasn't able to get it all working in one pipeline.
#
# * Use PolynomialFeatures to create quadratic and interaction terms
# * Convert back to DataFrame
# * Drop redundant variables
# * Use Pipeline and FeatureUnion to selectively scale/ignore certain variables
# * Fit and transform using pipeline to get final Xtrain and Xtest
# +
# Imports and custom classes
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import PolynomialFeatures, StandardScaler, MinMaxScaler
from sklearn.base import BaseEstimator, TransformerMixin
class Columns(BaseEstimator, TransformerMixin):
"""
This is a custom transformer for splitting the data into subsets for FeatureUnion.
"""
def __init__(self, names=None):
self.names = names
def fit(self, X, y=None, **fit_params):
return self
def transform(self, X):
return X[self.names]
# +
# Use PolynomialFeatures to create quadratic and interaction terms
# Should ultimately be part of a Pipeline, but I had issues because
# PF returns an array and Columns requires a df
pf = PolynomialFeatures(2, include_bias=False)
Xtrain_pf_array = pf.fit_transform(Xtrain_raw)
Xtest_pf_array = pf.transform(Xtest_raw)
# Get feature names
Xtrain_cols = pf.get_feature_names(Xtrain_raw.columns)
# Convert arrays to dfs with the new pf column names
Xtrain_pf = pd.DataFrame(Xtrain_pf_array, columns=Xtrain_cols)
Xtest_pf = pd.DataFrame(Xtest_pf_array, columns=Xtrain_cols)
print(Xtrain_pf.shape, Xtest_pf.shape)
# +
# A lot of these variables are redundant, especially squared dummy variables
# All of these variables listed next are 'binary' but only some are meaningful
bin_vars = [col for col in Xtrain_pf.columns if Xtrain_pf[col].nunique() == 2]
bin_vars
# +
# Dropping squared dummies and nonsensical interaction terms
# This part can be expanded. There's a lot of noise after PF
to_drop = [
'rain^2', 'snow^2', 'us_holiday^2', 'nats_single^2', 'nats_double^2',
'dc_bike_event^2', 'sin_day_of_year^2', 'cos_day_of_year^2',
'sin_day_of_year cos_day_of_year'
]
Xtrain_pf2 = Xtrain_pf.drop(labels=to_drop, axis=1)
Xtest_pf2 = Xtest_pf.drop(labels=to_drop, axis=1)
print(Xtrain_pf2.shape, Xtest_pf2.shape)
# +
# Defining binary and continuous variables
# We have normal 0,1 binary variables, binary variables outside 0,1 that were created by PF, and continuous variables
# We want to ignore the 0,1s, MinMaxScale the non 0,1 binary variables, and StandardScale the continuous variables
binary = ['rain', 'snow', 'us_holiday', 'nats_single', 'nats_double', 'dc_bike_event']
binarypf = [col for col in Xtrain_pf2.columns if (Xtrain_pf2[col].nunique() == 2) & (col not in binary)]
cont = [col for col in Xtrain_pf2.columns if (col not in binary) & (col not in binarypf)]
# FeatureUnion in our pipeline shifts the ordering of the variables so we need to save the ordering here
cols = binary + binarypf + cont
pipeline = Pipeline([
('features', FeatureUnion([
('binary', Pipeline([
('bincols', Columns(names=binary))
])),
('binarypf', Pipeline([
('binpfcols', Columns(names=binarypf)),
('minmax', MinMaxScaler())
])),
('continuous', Pipeline([
('contcols', Columns(names=cont)),
('scaler', StandardScaler())
]))
]))
])
# +
# Fit and transform to create our final Xtrain and Xtest
pipeline.fit(Xtrain_pf2)
Xtrain_scaled = pipeline.transform(Xtrain_pf2)
Xtest_scaled = pipeline.transform(Xtest_pf2)
# Put everything back into dfs
Xtrain = pd.DataFrame(Xtrain_scaled, columns=cols)
Xtest = pd.DataFrame(Xtest_scaled, columns=cols)
print(Xtrain.shape, Xtest.shape)
# -
Xtrain.describe(percentiles=[.5]).round(3).transpose()
# +
# Appending train and test to get full dataset for cross-validation
Xfull = Xtrain.append(Xtest)
yfull = ytrain.append(ytest)
print(Xfull.shape, yfull.shape)
# -
# ### 2. Model Fitting
from sklearn.metrics import mean_squared_error as mse
from sklearn.metrics import mean_absolute_error as mae
from sklearn.metrics import median_absolute_error as medae
from sklearn.metrics import explained_variance_score as evs
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_score
# +
from sklearn.model_selection import KFold
def score_model(model, alpha=False):
"""
Fits a model using the training set, predicts using the test set, and then calculates
and reports goodness of fit metrics and alpha if specified and available.
"""
model.fit(Xtrain, ytrain)
yhat = model.predict(Xtest)
r2 = r2_score(ytest, yhat)
me = mse(ytest, yhat)
ae = mae(ytest, yhat)
mede = medae(ytest, yhat)
ev = evs(ytest, yhat)
if alpha == True:
print("Results from {}: \nr2={:0.3f} \nMSE={:0.3f} \
\nMAE={:0.3f} \nMEDAE={:0.3f} \nEVS={:0.3f} \nalpha={:0.3f}".format(model, r2, me,
ae, mede, ev, model.alpha_))
else:
print("Results from {}: \nr2={:0.3f} \nMSE={:0.3f} \
\nMAE={:0.3f} \nMEDAE={:0.3f} \nEVS={:0.3f}".format(model, r2, me, ae, mede, ev))
def cv_score(model, cv=5):
"""
    Evaluates a model by k-fold cross-validation (cv folds, default 5) and prints the scores' mean and 2*stdev.
    Shuffles before splitting, with random_state=7 for reproducibility.
"""
kf = KFold(n_splits=cv, shuffle=True, random_state=7)
scores = cross_val_score(model, Xfull, yfull, cv=kf)
print(scores)
    print("R^2: {:0.3f} (+/- {:0.3f})".format(scores.mean(), scores.std() * 2))
# +
'''Elastic Net'''
from sklearn.linear_model import ElasticNetCV
t = time.perf_counter()
# Alphas to search over
# Our alpha is usually in the low double digits
# This sets our search space to 250 steps between 10^0=1 and 10^2=100
alphas = np.logspace(0, 2, 250)
# Suggested l1_ratio from docs
l1_ratio = [.1, .5, .7, .9, .95, .99, 1]
en = ElasticNetCV(l1_ratio=l1_ratio, alphas=alphas, fit_intercept=True, normalize=False)
score_model(en, alpha=True)
print("L1 ratio=",en.l1_ratio_)
elapsed_time = (time.perf_counter() - t)/60
print("This cell took {:0.2f} minutes to run".format(elapsed_time))
# +
'''Lasso'''
from sklearn.linear_model import LassoCV
t = time.perf_counter()
lasso = LassoCV(alphas=alphas, fit_intercept=True, normalize=False)
score_model(lasso, alpha=True)
elapsed_time = (time.perf_counter() - t)/60
print("This cell took {:0.2f} minutes to run".format(elapsed_time))
# +
# Which variables were selected?
# Put coefficients and variable names in df
lassodf = pd.DataFrame(lasso.coef_, index=Xtrain.columns)
# Select nonzeros (copy to avoid SettingWithCopyWarning)
results = lassodf[(lassodf.T != 0).any()].copy()
# Sort by magnitude
results['sorted'] = results[0].abs()
results.sort_values(by='sorted', inplace=True, ascending=False)
print("Lasso chooses {} variables".format(len(results)))
print(results)
# +
'''Ridge'''
from sklearn.linear_model import RidgeCV
t = time.perf_counter()
rr = RidgeCV(alphas=alphas, fit_intercept=True, normalize=False)
score_model(rr, alpha=True)
cv_score(rr)
elapsed_time = (time.perf_counter() - t)/60
print("This cell took {:0.2f} minutes to run".format(elapsed_time))
# +
'''RF'''
from sklearn.ensemble import RandomForestRegressor
t = time.perf_counter()
rf = RandomForestRegressor()
score_model(rf)
cv_score(rf)
elapsed_time = (time.perf_counter() - t)/60
print("This cell took {:0.2f} minutes to run".format(elapsed_time))
# +
t = time.perf_counter()
cv_score(lasso, cv=5)
elapsed_time = (time.perf_counter() - t)/60
print("This cell took {:0.2f} minutes to run".format(elapsed_time))
# -
end_time = (time.perf_counter() - start_time)/60
print("This notebook took {:0.2f} minutes to run".format(end_time))
# To do:
# * No polynomials, 3 polynomials
# * How to interpret the coefficients?
# * Modify train/test split size
# Source notebook: machine_learning/.ipynb_checkpoints/cabi_ml_fitting_champion-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import pickle
import os
import pandas
import pynumdiff
import scipy.fftpack
from IPython.display import display,SVG
import figurefirst
fifi = figurefirst
data = np.load('ski_gyro_z_data.npy')
_t_ = data[0,:]
_gyro_ = data[1,:]
t = np.arange(0, np.max(_t_), np.median(np.diff(_t_)))
gyro = np.interp(t, _t_, _gyro_)
plt.plot(t, gyro)
def get_gamma(dt, freq, timeseries_length=None):
log_gamma = -5.1 + -1.59*np.log(freq) + -0.72*np.log(dt)
return np.exp(log_gamma)
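# As a sanity check on the heuristic above, gamma can be computed by hand. The dt and cutoff below are illustrative stand-ins (100 Hz sampling, the 0.2 Hz cutoff used later in this notebook), not values read from the data:

```python
import math

def get_gamma(dt, freq):
    # Same empirical heuristic as above: maps the sampling interval dt and a
    # spectral cutoff frequency to a tvgamma regularization weight.
    log_gamma = -5.1 - 1.59 * math.log(freq) - 0.72 * math.log(dt)
    return math.exp(log_gamma)

# Illustrative values: dt = 0.01 s (100 Hz), cutoff = 0.2 Hz.
gamma = get_gamma(0.01, 0.2)
print(round(gamma, 2))  # ≈ 2.17
```

Higher cutoff frequencies shrink gamma (less smoothing), since the frequency coefficient is negative.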
figure_layout = 'fig_7_gyro.svg'
cutoff_freq = 2e-1
# # Data
# +
layout = fifi.svg_to_axes.FigureLayout(figure_layout, autogenlayers=True,
make_mplfigures=True, hide_layers=[])
ax = layout.axes[('data', 'data')]
ax.plot(t, gyro, '.', color='blue', markersize=1, zorder=-10, markeredgecolor='none', markerfacecolor='blue')
ax.fill_between([60, 65], -4, 4, edgecolor='none', facecolor='cornflowerblue', alpha=0.2, zorder=-20)
ax.set_rasterization_zorder(0)
ax.set_xlim(0, 90)
ax.set_ylim(-4, 4)
fifi.mpl_functions.adjust_spines(ax, ['left', 'bottom'],
yticks = [-4, -2, 0, 2, 4],
xticks = [0, 30, 60, 90],
tick_length=2.5,
spine_locations={'left': 4, 'bottom': 4})
ax = layout.axes[('data', 'data_zoom')]
ax.plot(t, gyro, '.', color='blue', markersize=1, zorder=-10, markeredgecolor='none', markerfacecolor='blue')
ax.fill_between([60, 65], -4, 4, edgecolor='none', facecolor='cornflowerblue', alpha=0.2, zorder=-20)
ax.set_rasterization_zorder(0)
ax.set_xlim(60, 65)
ax.set_ylim(-4, 4)
fifi.mpl_functions.adjust_spines(ax, ['left', 'bottom'],
yticks = [-4, -2, 0, 2, 4],
xticks = [60, 65],
tick_length=2.5,
spine_locations={'left': 4, 'bottom': 4})
fifi.mpl_functions.set_fontsize(ax, 6)
layout.append_figure_to_layer(layout.figures['data'], 'data', cleartarget=True)
layout.write_svg(figure_layout)
# -
# # Spectra
def plot_power_spectra(x, t, cutoff_freq=None, ax=None):
if ax is None:
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_yscale('log')
ax.set_xscale('log')
yf = scipy.fftpack.fft(x)
N = len(t)
dt = np.mean(np.diff(t))
    xf = np.linspace(0.0, 1.0/(2.0*dt), N//2)
P = 2.0/N * np.abs(yf[:N//2])
ax.plot(xf, P, color='black', zorder=-10)
if cutoff_freq is not None:
ax.vlines(cutoff_freq, 1e-6, 1e1, color='red')
# +
layout = fifi.svg_to_axes.FigureLayout(figure_layout, autogenlayers=True,
make_mplfigures=True, hide_layers=[])
ax = layout.axes[('spectra', 'spectra')]
plot_power_spectra(gyro, t, cutoff_freq=cutoff_freq, ax=ax)
ax.set_ylim(1e-6, 1e0)
ax.set_xlim(1e-4, 1e1)
fifi.mpl_functions.adjust_spines(ax, ['left', 'bottom'],
xticks=[1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1],
yticks=[1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1e0],
tick_length=2.5,
spine_locations={'left': 4, 'bottom': 4})
fifi.mpl_functions.set_fontsize(ax, 6)
layout.append_figure_to_layer(layout.figures['spectra'], 'spectra', cleartarget=True)
layout.write_svg(figure_layout)
# -
dt = np.mean(np.diff(t))
print('dt: ', dt)
idx = np.where( (t>60)*(t<65) )[0]
data_zoom = gyro[idx]
tvgamma = get_gamma(dt, cutoff_freq)
print(tvgamma)
method = 'savgoldiff'
method_parent = 'linear_model'
params, v = pynumdiff.optimize.__dict__[method_parent].__dict__[method](data_zoom, dt, tvgamma=tvgamma)
x_smooth, xdot_smooth = pynumdiff.__dict__[method_parent].__dict__[method](gyro, dt, params)
# +
layout = fifi.svg_to_axes.FigureLayout(figure_layout, autogenlayers=True,
make_mplfigures=True, hide_layers=[])
ax = layout.axes[('smooth', 'pos')]
ax.fill_between([20, 30], -4, 4, edgecolor='none', facecolor='gray', alpha=0.2, zorder=-20)
ax.plot(t, gyro, '.', color='blue', markersize=1, zorder=-10, markeredgecolor='none', markerfacecolor='blue')
ax.set_rasterization_zorder(0)
ax.plot(t, x_smooth, color='red')
ax.set_xlim(0, 90)
ax.set_ylim(-4, 4)
fifi.mpl_functions.adjust_spines(ax, ['left', 'bottom'],
yticks = [-4,-2,0,2,4],
xticks = [0,30,60,90],
tick_length=2.5,
spine_locations={'left': 4, 'bottom': 4})
ax.set_xticklabels([])
ax = layout.axes[('smooth', 'vel')]
ax.plot(t, xdot_smooth, color='red')
ax.fill_between([20,30], -15, 15, edgecolor='none', facecolor='gray', alpha=0.2, zorder=-20)
ax.set_xlim(0, 90)
ax.set_ylim(-10, 10)
fifi.mpl_functions.adjust_spines(ax, ['left', 'bottom'],
yticks = [-15, 0, 15],
xticks = [0,30,60,90],
tick_length=2.5,
spine_locations={'left': 4, 'bottom': 4})
fifi.mpl_functions.set_fontsize(ax, 6)
layout.append_figure_to_layer(layout.figures['smooth'], 'smooth', cleartarget=True)
layout.write_svg(figure_layout)
# +
layout = fifi.svg_to_axes.FigureLayout(figure_layout, autogenlayers=True,
make_mplfigures=True, hide_layers=[])
ax = layout.axes[('smooth_zoom', 'pos')]
ax.plot(t, gyro, '.', color='blue', markersize=1, zorder=-10, markeredgecolor='none', markerfacecolor='blue')
ax.set_rasterization_zorder(0)
ax.plot(t, x_smooth, color='red')
ax.fill_between([20,30], -4, 4, edgecolor='none', facecolor='gray', alpha=0.2, zorder=-20)
ax.set_xlim(20, 30)
ax.set_ylim(-4, 4)
fifi.mpl_functions.adjust_spines(ax, ['left', 'bottom'],
yticks = [-4, -2, 0, 2, 4],
xticks = [20, 25, 30],
tick_length=2.5,
spine_locations={'left': 4, 'bottom': 4})
ax = layout.axes[('smooth_zoom', 'vel')]
ax.plot(t, xdot_smooth, color='red')
ax.fill_between([20, 30], -15, 15, edgecolor='none', facecolor='gray', alpha=0.2, zorder=-20)
# other methods
if 0:
method = 'butterdiff'
method_parent = 'smooth_finite_difference'
params, v = pynumdiff.optimize.__dict__[method_parent].__dict__[method](data_zoom, dt, tvgamma=tvgamma)
    x_smooth, xdot_smooth = pynumdiff.__dict__[method_parent].__dict__[method](gyro, dt, params)
ax.plot(t, xdot_smooth, color='purple', linewidth=0.3)
method = 'constant_acceleration'
method_parent = 'kalman_smooth'
params, v = pynumdiff.optimize.__dict__[method_parent].__dict__[method](data_zoom, dt, tvgamma=tvgamma)
    x_smooth, xdot_smooth = pynumdiff.__dict__[method_parent].__dict__[method](gyro, dt, params)
ax.plot(t, xdot_smooth, color='blue', linewidth=0.3)
method = 'jerk'
method_parent = 'total_variation_regularization'
params, v = pynumdiff.optimize.__dict__[method_parent].__dict__[method](data_zoom, dt, tvgamma=tvgamma)
    x_smooth, xdot_smooth = pynumdiff.__dict__[method_parent].__dict__[method](gyro, dt, params)
ax.plot(t, xdot_smooth, color='green', linewidth=0.3)
ax.set_xlim(20, 30)
ax.set_ylim(-10, 10)
fifi.mpl_functions.adjust_spines(ax, ['left', 'bottom'],
yticks = [-15, 0, 15],
xticks = [20, 25, 30],
tick_length=2.5,
spine_locations={'left': 4, 'bottom': 4})
fifi.mpl_functions.set_fontsize(ax, 6)
layout.append_figure_to_layer(layout.figures['smooth_zoom'], 'smooth_zoom', cleartarget=True)
layout.write_svg(figure_layout)
# -
# Source notebook: notebooks/paper_figures/fig_7_ski/make_fig_7_gyro.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Importing and prepping data
import pandas as pd
import numpy as np
import diff_classifier.aws as aws
import diff_classifier.pca as pca
# +
features = []
remote_folder = 'Tissue_Studies/09_11_18_Regional'
bucket = 'ccurtis.data'
vids = 15
types = ['PEG', 'PS']
pups = [2, 3]
slices = [1, 2, 3]
counter = 0
for typ in types:
for pup in pups:
for slic in slices:
for num in range(1, vids+1):
try:
#to_track.append('100x_0_4_1_2_gel_{}_bulk_vid_{}'.format(vis, num))
#to_track.append('{}_P{}_S{}_XY{}'.format(typ, pup, slic, '%02d' % num))
filename = 'features_{}_P{}_S{}_XY{}.csv'.format(typ, pup, slic, '%02d' % num)
aws.download_s3('{}/{}'.format(remote_folder, filename), filename, bucket_name=bucket)
fstats = pd.read_csv(filename, encoding = "ISO-8859-1", index_col='Unnamed: 0')
fstats['Particle Type'] = pd.Series(fstats.shape[0]*[typ], index=fstats.index)
fstats['Video Number'] = pd.Series(fstats.shape[0]*[num], index=fstats.index)
fstats['Slices'] = pd.Series(fstats.shape[0]*[str(slic)], index=fstats.index)
fstats['Pup'] = pd.Series(fstats.shape[0]*[str(pup)], index=fstats.index)
if num in range(1, 6):
fstats['Region'] = pd.Series(fstats.shape[0]*['Cortex'], index=fstats.index)
fstats['Region and Type'] = pd.Series(fstats.shape[0]*['{}_Cortex'.format(typ)], index=fstats.index)
elif num in range(6, 11):
fstats['Region'] = pd.Series(fstats.shape[0]*['Hippocampus'], index=fstats.index)
fstats['Region and Type'] = pd.Series(fstats.shape[0]*['{}_Hippocampus'.format(typ)], index=fstats.index)
else:
fstats['Region'] = pd.Series(fstats.shape[0]*['Thalamus'], index=fstats.index)
fstats['Region and Type'] = pd.Series(fstats.shape[0]*['{}_Thalamus'.format(typ)], index=fstats.index)
#print(num)
print(filename)
counter = counter + 1
if counter == 1:
fstats_tot = fstats
else:
fstats_tot = fstats_tot.append(fstats, ignore_index=True)
                except Exception:
print('skipped filename: {}'.format(filename))
# -
fstats_tot['LogDeff1'] = np.log(fstats_tot.Deff1).replace([np.inf, -np.inf], np.nan)
# +
axes = fstats_tot.hist(column='LogDeff1', by='Region and Type', layout=(6, 1), bins=100, sharex=True, sharey=True,
figsize=(10, 8), edgecolor='k')
means = []
types2 = ['PEG_Cortex', 'PEG_Hippocampus', 'PEG_Thalamus', 'PS_Cortex', 'PS_Hippocampus', 'PS_Thalamus']
for ax, typ in zip(axes.ravel(), types2):
ax.set_ylim([0,8000])
#ax.set_xscale("log", nonposx='clip')
ax.set_xlim([-7.5,2.5])
means.append(fstats_tot[fstats_tot['Region and Type']==typ]['LogDeff1'].median())
ax.axvline(fstats_tot[fstats_tot['Region and Type']==typ]['LogDeff1'].median(), color='k', linestyle='dashed', linewidth=3)
# -
meanD = np.array(means)
meanD.sort()
Dbins = meanD[0:-1] + np.diff(meanD)/2
print(Dbins)
import matplotlib.pyplot as plt
Dbins = [-10, -0.842, -0.037, 0.516, 0.891, 1.29, 10]
bins = np.linspace(-10, 10, 300)
fig, ax = plt.subplots(figsize=(10, 4))
for i in range(6):
    fstats_tot[(fstats_tot['Particle Type']=='PEG') & (Dbins[i] < fstats_tot['LogDeff1']) & (fstats_tot['LogDeff1'] < Dbins[i+1])].hist(column='LogDeff1', bins=bins, figsize=(4, 8), edgecolor='k', ax=ax)
ax.set_xlim([-7.5, 2.5])
from sklearn.metrics import classification_report
y_pred2 = list(pd.cut(fstats_tot.LogDeff1, bins=Dbins, labels=types2).astype(str))
y_true = fstats_tot['Region and Type'].tolist()
print(classification_report(y_true, y_pred2, digits=4))
fstats_tot['LogMeanDeff1'] = np.log(fstats_tot['Mean Deff1']).replace([np.inf, -np.inf], np.nan)
meanD = np.array(means)
meanD.sort()
Dbins = meanD[0:-1] + np.diff(meanD)/2
print(Dbins)
import matplotlib.pyplot as plt
Dbins = [-10, -0.0765, 0.424, 0.77, 1.091, 1.426, 10]
bins = np.linspace(-10, 10, 200)
fig, axes = plt.subplots(nrows=6, figsize=(12, 18))
counter = 0
means = []
typereg = ['PS_Cortex', 'PS_Hippocampus', 'PS_Thalamus', 'PEG_Cortex', 'PEG_Hippocampus', 'PEG_Thalamus']
for ax in axes:
means.append(fstats_tot[fstats_tot['Region and Type']==typereg[counter]]['LogMeanDeff1'].median())
for i in range(6):
fstats_tot[(fstats_tot['Region and Type']==typereg[counter]) & (Dbins[i] < fstats_tot['LogMeanDeff1']) & (fstats_tot['LogMeanDeff1'] < Dbins[i+1])].hist(column='LogMeanDeff1', bins=bins, figsize=(12,3), edgecolor='k', ax=ax, )
ax.set_xlim([-7.5, 3.5])
ax.set_ylim([0, 10000])
ax.axvline(fstats_tot[fstats_tot['Region and Type']==typereg[counter]]['LogMeanDeff1'].median(), color='k', linestyle='dashed', linewidth=3)
ax.set_title(typereg[counter])
if counter == 5:
ax.set_xlabel(r'$log(D_{eff})$')
counter = counter + 1
types = ['PS_Hippocampus', 'PS_Cortex', 'PS_Thalamus', 'PEG_Hippocampus', 'PEG_Cortex', 'PEG_Thalamus']
y_pred2 = list(pd.cut(fstats_tot.LogMeanDeff1, bins=Dbins, labels=types).astype(str))
y_true = fstats_tot['Region and Type'].tolist()
print(classification_report(y_true, y_pred2, digits=4))
fstats_tot['LogDeff1'] = np.log(fstats_tot['Deff1']).replace([np.inf, -np.inf], np.nan)
meanD = np.array(means)
meanD.sort()
Dbins = meanD[0:-1] + np.diff(meanD)/2
print(Dbins)
Dbins = [-10, -0.8422, -0.0368, 0.5158, 0.8907, 1.2865, 10]
bins = np.linspace(-10, 10, 200)
fig, axes = plt.subplots(nrows=6, figsize=(12, 18))
counter = 0
means = []
typereg = ['PS_Cortex', 'PS_Hippocampus', 'PS_Thalamus', 'PEG_Cortex', 'PEG_Hippocampus', 'PEG_Thalamus']
for ax in axes:
means.append(fstats_tot[fstats_tot['Region and Type']==typereg[counter]]['LogDeff1'].median())
for i in range(6):
fstats_tot[(fstats_tot['Region and Type']==typereg[counter]) & (Dbins[i] < fstats_tot['LogDeff1']) & (fstats_tot['LogDeff1'] < Dbins[i+1])].hist(column='LogDeff1', bins=bins, figsize=(12,3), edgecolor='k', ax=ax, )
ax.set_xlim([-7.5, 3.5])
ax.set_ylim([0, 10000])
ax.axvline(fstats_tot[fstats_tot['Region and Type']==typereg[counter]]['LogDeff1'].median(), color='k', linestyle='dashed', linewidth=3)
ax.set_title(typereg[counter])
if counter == 5:
ax.set_xlabel(r'$log(D_{eff})$')
counter = counter + 1
types = ['PS_Hippocampus', 'PS_Cortex', 'PS_Thalamus', 'PEG_Hippocampus', 'PEG_Cortex', 'PEG_Thalamus']
y_pred2 = list(pd.cut(fstats_tot.LogDeff1, bins=Dbins, labels=types).astype(str))
y_true = fstats_tot['Region and Type'].tolist()
print(classification_report(y_true, y_pred2, digits=4))
import matplotlib.pyplot as plt
Dbins = [-10, 0.891, 1.29, 10]
bins = np.linspace(-10, 10, 300)
fig, ax = plt.subplots(figsize=(10, 4))
for i in range(3):
fstats_tot[(fstats_tot['Particle Type']=='PEG') & (Dbins[i] < fstats_tot['LogDeff1']) & (fstats_tot['LogDeff1'] < Dbins[i+1])].hist(column='LogDeff1', bins=bins, figsize=(4, 8), edgecolor='k', ax=ax)
ax.set_xlim([-7.5, 2.5])
y_pred2 = list(pd.cut(fstats_tot[fstats_tot['Particle Type']=='PEG'].LogDeff1, bins=Dbins, labels=['Hippocampus', 'Cortex', 'Thalamus']).astype(str))
y_true = fstats_tot[fstats_tot['Particle Type']=='PEG']['Region'].tolist()
print(classification_report(y_true, y_pred2, digits=4))
import matplotlib.pyplot as plt
Dbins = [-10, -0.842, -0.037, 10]
bins = np.linspace(-10, 10, 300)
fig, ax = plt.subplots(figsize=(10, 4))
for i in range(3):
fstats_tot[(fstats_tot['Particle Type']=='PS') & (Dbins[i] < fstats_tot['LogDeff1']) & (fstats_tot['LogDeff1'] < Dbins[i+1])].hist(column='LogDeff1', bins=bins, figsize=(4, 8), edgecolor='k', ax=ax)
ax.set_xlim([-7.5, 2.5])
y_pred2 = list(pd.cut(fstats_tot[fstats_tot['Particle Type']=='PS'].LogDeff1, bins=Dbins, labels=['Hippocampus', 'Cortex', 'Thalamus']).astype(str))
y_true = fstats_tot[fstats_tot['Particle Type']=='PS']['Region'].tolist()
print(classification_report(y_true, y_pred2, digits=4))
Dbins
fstats_tot.to_csv('features.csv')
#with equal sample sizes for each particle type
import random
counter = 0
#mws = ['10k_PEG', '5k_PEG', '1k_PEG', 'PS_COOH']
mws = ['PEG_Cortex', 'PEG_Hippocampus', 'PEG_Thalamus', 'PS_Cortex', 'PS_Hippocampus', 'PS_Thalamus']
for mw in mws:
fstats_type = fstats_tot[fstats_tot['Region and Type']==mw].reset_index(drop=True)
print(fstats_type.shape)
subset = np.sort(np.array(random.sample(range(fstats_type.shape[0]), 34000)))
if counter == 0:
fstats_sub = fstats_type.loc[subset, :].reset_index(drop=True)
else:
fstats_sub = fstats_sub.append(fstats_type.loc[subset, :].reset_index(drop=True), ignore_index=True)
counter = counter + 1
#fstats = pd.read_csv(filename, encoding = "ISO-8859-1", index_col='Unnamed: 0')
nonnum = ['Particle Type', 'Video Number', 'Track_ID', 'Slices', 'Pup', 'Region', 'Region and Type',
'Mean Mean_Intensity', 'Std Mean_Intensity', 'X', 'Y',
'Mean X', 'Mean Y', 'Std X', 'Std Y']
fstats_num = fstats_sub.drop(nonnum, axis=1)
fstats_raw = fstats_num.values
#fstats
mws = ['PEG_Cortex', 'PEG_Hippocampus', 'PEG_Thalamus', 'PS_Cortex', 'PS_Hippocampus', 'PS_Thalamus']
for mw in mws:
print(fstats_tot[fstats_tot['Region and Type'] == mw].shape)
# ## PCA analysis
# The pca.pca_analysis function provides a self-contained PCA analysis of the input trajectory features dataset. It includes options for imputing NaN values (filling with column averages or dropping them) and for scaling features. Read the docstring for more information.
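# The gist of that pipeline can be sketched with plain scikit-learn. The column names below are made up for illustration, and diff_classifier's actual imputation and scaling details may differ:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Toy feature table with a NaN, standing in for the trajectory features.
df = pd.DataFrame({
    'alpha':      [0.9, 1.1, np.nan, 0.8, 1.3],
    'asymmetry1': [0.2, 0.4, 0.1, 0.5, 0.3],
    'elongation': [0.6, 0.7, 0.9, 0.5, 0.8],
})

X = SimpleImputer(strategy='mean').fit_transform(df)  # fill NaNs with column means
X = StandardScaler().fit_transform(X)                 # zero mean, unit variance
pca_model = PCA(n_components=2)
scores = pca_model.fit_transform(X)                   # rows -> principal-component scores
print(scores.shape)
print(pca_model.explained_variance_ratio_.sum())
```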
pcadataset = pca.pca_analysis(fstats_tot, dropcols=nonnum, n_components=16)
fstats_num.columns
pcadataset.components.to_csv('components.csv')
aws.upload_s3('components.csv', '{}/components.csv'.format(remote_folder), bucket_name=bucket)
pcadataset.prcomps
# The pca.kmo function calculates the Kaiser-Meyer-Olkin statistic, a measure of sampling adequacy. Check the docstring for more information.
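# For reference, KMO compares squared pairwise correlations against squared partial correlations (computed from the inverse correlation matrix). A from-scratch sketch of the statistic, not pca.kmo's actual implementation:

```python
import numpy as np

def kmo_statistic(X):
    # X: (n_samples, n_features) array of scaled features.
    corr = np.corrcoef(X, rowvar=False)
    inv = np.linalg.inv(corr)
    # Partial correlations from the inverse correlation matrix.
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(corr, 0)
    np.fill_diagonal(partial, 0)
    r2 = (corr ** 2).sum()
    p2 = (partial ** 2).sum()
    return r2 / (r2 + p2)  # closer to 1 => more adequate sampling

# Synthetic data: four features all driven by one common factor.
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 1))
X = base + 0.3 * rng.normal(size=(500, 4))
print(round(kmo_statistic(X), 3))
```

With a strong common factor like this, the pairwise correlations dominate the partials and the KMO comes out well above the usual 0.5 adequacy threshold.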
fstats_tot.shape
for test in fstats_tot.columns:
print(test)
kmostat = pca.kmo(pcadataset.scaled)
# ## Visualization
# Users can then compare average principal component values between subgroups of the data. In this case, all particles were taken from the same sample, so there are no experimental subgroups. I chose to compare short trajectories to long trajectories, as I would expect differences between the two groups.
import numpy as np
ncomp = 16
dicti = {}
#test = np.exp(np.nanmean(np.log(pcadataset.final[pcadataset.final['Particle Size']==200].as_matrix()), axis=0))[-6:]
#test1 = np.exp(np.nanmean(np.log(pcadataset.final[pcadataset.final['Particle Size']==500].as_matrix()), axis=0))[-6:]
dicti[0] = np.nanmean(pcadataset.final[pcadataset.final['Region and Type']=='PEG_Cortex'].values[:, -ncomp:], axis=0)
dicti[1] = np.nanmean(pcadataset.final[pcadataset.final['Region and Type']=='PEG_Hippocampus'].values[:, -ncomp:], axis=0)
dicti[2] = np.nanmean(pcadataset.final[pcadataset.final['Region and Type']=='PEG_Thalamus'].values[:, -ncomp:], axis=0)
dicti[3] = np.nanmean(pcadataset.final[pcadataset.final['Region and Type']=='PS_Cortex'].values[:, -ncomp:], axis=0)
dicti[4] = np.nanmean(pcadataset.final[pcadataset.final['Region and Type']=='PS_Hippocampus'].values[:, -ncomp:], axis=0)
dicti[5] = np.nanmean(pcadataset.final[pcadataset.final['Region and Type']=='PS_Thalamus'].values[:, -ncomp:], axis=0)
pca.plot_pca(dicti, savefig=True, labels=['PEG_Cortex', 'PEG_Hippocampus', 'PEG_Thalamus', 'PS_Cortex', 'PS_Hippocampus',
        'PS_Thalamus'], rticks=np.linspace(-5, 3, 9))
# The variable pcadataset.prcomps shows the user the major contributions to each of the new principal components. When observing the graph above, users can see that there are some differences between short trajectories and long trajectories in component 0 (asymmetry1 being the major contributor) and component 1 (elongation being the major contributor).
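# With a plain sklearn PCA, the same kind of "major contributor" summary can be read off the components_ matrix; the feature names here are invented for illustration, not pca.prcomps itself:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
features = ['asymmetry1', 'elongation', 'alpha', 'MSD_ratio']
X = rng.normal(size=(200, 4))
X[:, 1] += 2 * X[:, 0]  # make 'elongation' the dominant source of variance

model = PCA(n_components=2).fit(X)
# Loadings: rows = features, columns = components.
loadings = pd.DataFrame(model.components_.T, index=features,
                        columns=['PC0', 'PC1'])
# The top absolute loading per component is its 'major contributor'.
top = loadings.abs().idxmax()
print(top)
```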
pcadataset.prcomps
lvals = ['PEG_Cortex', 'PEG_Hippocampus', 'PEG_Thalamus']
feats = pca.feature_violin(pcadataset.final, label='Region and Type', lvals=lvals, fsubset=16, yrange=[-12, 12])
lvals = ['PS_Cortex', 'PS_Hippocampus', 'PS_Thalamus']
feats = pca.feature_violin(pcadataset.final, label='Region and Type', lvals=lvals, fsubset=16, yrange=[-12, 12])
lvals = ['PEG_Cortex', 'PEG_Hippocampus', 'PEG_Thalamus', 'PS_Cortex', 'PS_Hippocampus', 'PS_Thalamus']
#lvals = ['PS_Cortex', 'PS_Hippocampus', 'PS_Thalamus']
fstats1 = pca.feature_plot_2D(pcadataset.final,
label='Region and Type', lvals=lvals, randcount=300, yrange=[-6, 6],
xrange=[-4, 4])
lvals = ['PEG_Cortex', 'PEG_Hippocampus', 'PEG_Thalamus', 'PS_Cortex', 'PS_Hippocampus', 'PS_Thalamus']
#lvals = ['PS_Cortex', 'PS_Hippocampus', 'PS_Thalamus']
fstats1 = pca.feature_plot_3D(pcadataset.final,
label='Region and Type', lvals=lvals, randcount=300, yrange=[-6, 6],
xrange=[-4, 4], alpha=0.45)
xr = 12
lvals = ['PEG', 'PS']
#lvals = ['PS_Cortex', 'PS_Hippocampus', 'PS_Thalamus']
fstats1 = pca.feature_plot_3D(pcadataset.final,
label='Particle Type', lvals=lvals, randcount=400, ylim=[-xr, xr],
xlim=[-xr, xr], zlim=[-xr, xr], alpha=0.45)
lvals = ['Cortex', 'Hippocampus', 'Thalamus']
#lvals = ['PS_Cortex', 'PS_Hippocampus', 'PS_Thalamus']
fstats1 = pca.feature_plot_3D(pcadataset.final,
label='Region', lvals=lvals, randcount=400, ylim=[-xr, xr],
xlim=[-xr, xr], zlim=[-xr, xr], alpha=0.45)
# +
ncomp = 16
trainp = np.array([])
testp = np.array([])
lvals = ['PEG_Cortex', 'PEG_Hippocampus', 'PEG_Thalamus', 'PS_Cortex', 'PS_Hippocampus', 'PS_Thalamus']
for i in range(0, 20):
KNNmod, X, y = pca.build_model(pcadataset.final, 'Region and Type', lvals, equal_sampling=True,
tsize=800, input_cols=ncomp, model='MLP', NNhidden_layer=(8, 6))
trainp = np.append(trainp, pca.predict_model(KNNmod, X, y))
X2 = pcadataset.final.values[:, -ncomp:]
y2 = pcadataset.final['Region and Type'].values
testp = np.append(testp, pca.predict_model(KNNmod, X2, y2))
print('Run {}: {}'.format(i, testp[i]))
# -
print('{} +/- {}'.format(np.mean(trainp), np.std(trainp)))
print('{} +/- {}'.format(np.mean(testp), np.std(testp)))
# +
#ncomp = 8
trainp = np.array([])
testp = np.array([])
lvals = ['PEG', 'PS']
for i in range(0, 20):
KNNmod, X, y = pca.build_model(pcadataset.final, 'Particle Type', lvals, equal_sampling=True,
tsize=800, input_cols=ncomp, model='MLP', NNhidden_layer=(8, 6))
trainp = np.append(trainp, pca.predict_model(KNNmod, X, y))
X2 = pcadataset.final.values[:, -ncomp:]
y2 = pcadataset.final['Particle Type'].values
testp = np.append(testp, pca.predict_model(KNNmod, X2, y2))
print('Run {}: {}'.format(i, testp[i]))
# -
print('{} +/- {}'.format(np.mean(trainp), np.std(trainp)))
print('{} +/- {}'.format(np.mean(testp), np.std(testp)))
# +
#ncomp = 8
trainp = np.array([])
testp = np.array([])
lvals = ['Cortex', 'Hippocampus', 'Thalamus']
for i in range(0, 20):
KNNmod, X, y = pca.build_model(pcadataset.final, 'Region', lvals, equal_sampling=True,
tsize=800, input_cols=ncomp, model='MLP', NNhidden_layer=(8, 6))
trainp = np.append(trainp, pca.predict_model(KNNmod, X, y))
X2 = pcadataset.final.values[:, -ncomp:]
y2 = pcadataset.final['Region'].values
testp = np.append(testp, pca.predict_model(KNNmod, X2, y2))
print('Run {}: {}'.format(i, testp[i]))
# -
print('{} +/- {}'.format(np.mean(trainp), np.std(trainp)))
print('{} +/- {}'.format(np.mean(testp), np.std(testp)))
# +
#ncomp = 8
trainp = np.array([])
testp = np.array([])
lvals = ['Cortex', 'Hippocampus', 'Thalamus']
for i in range(0, 20):
KNNmod, X, y = pca.build_model(pcadataset.final,
'Region', lvals, equal_sampling=True,
tsize=900, input_cols=ncomp, model='MLP', NNhidden_layer=(6, 2))
trainp = np.append(trainp, pca.predict_model(KNNmod, X, y))
X2 = pcadataset.final[pcadataset.final['Particle Type']=='PEG'].values[:, -ncomp:]
y2 = pcadataset.final[pcadataset.final['Particle Type']=='PEG']['Region'].values
testp = np.append(testp, pca.predict_model(KNNmod, X2, y2))
print('Run {}: {}'.format(i, testp[i]))
# -
print('{} +/- {}'.format(np.mean(trainp), np.std(trainp)))
print('{} +/- {}'.format(np.mean(testp), np.std(testp)))
# +
#ncomp = 8
trainp = np.array([])
testp = np.array([])
lvals = ['Cortex', 'Hippocampus', 'Thalamus']
for i in range(0, 20):
KNNmod, X, y = pca.build_model(pcadataset.final,
'Region', lvals, equal_sampling=True,
tsize=900, input_cols=ncomp, model='MLP', NNhidden_layer=(6, 2))
trainp = np.append(trainp, pca.predict_model(KNNmod, X, y))
X2 = pcadataset.final[pcadataset.final['Particle Type']=='PS'].values[:, -ncomp:]
y2 = pcadataset.final[pcadataset.final['Particle Type']=='PS']['Region'].values
testp = np.append(testp, pca.predict_model(KNNmod, X2, y2))
print('Run {}: {}'.format(i, testp[i]))
# -
print('{} +/- {}'.format(np.mean(trainp), np.std(trainp)))
print('{} +/- {}'.format(np.mean(testp), np.std(testp)))
# ## Neural Network
from sklearn.model_selection import GridSearchCV, cross_val_score, KFold, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
# +
ncomp = 16
featofvar = 'Region and Type'
test = pcadataset.final.values[:, -ncomp:]
y = pcadataset.final[featofvar].values
X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.9)
num_trials = 2
model = MLPClassifier(hidden_layer_sizes=(500, ), solver='sgd', verbose=True, max_iter=100, tol=0.001)
scores = np.zeros(num_trials)
gridps = [{'alpha': [0.001, 0.01, 0.5], 'batch_size': [10, 50, 100, 200], 'learning_rate_init': [0.001, 0.005, 0.01]}]
print('# Tuning hyper-parameters for precision')
clf = GridSearchCV(estimator=model, param_grid=gridps, cv=5, scoring='precision_macro')
clf.fit(X_train, y_train)
# -
print('Best parameter set found on the development set:')
print()
print(clf.best_params_)
print()
print("Grid scores on development set:")
print()
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std*2, params))
print()
print("Detailed classification report:")
print()
print("The model is trained on the full development set.")
print("The scores are computed on the full evaluation set.")
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
print()
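The grid-search report above iterates over `cv_results_` by hand. The same summary can be pulled into a ranked table with pandas; the sketch below uses a synthetic dataset and a deliberately tiny grid (the estimator, grid values, and data are illustrative only, not the ones used in this notebook).

```python
import pandas as pd
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in data and a two-point grid, for illustration only.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)
search = GridSearchCV(MLPClassifier(max_iter=300, random_state=0),
                      {'alpha': [0.001, 0.01]}, cv=3,
                      scoring='precision_macro')
search.fit(X, y)

# cv_results_ is a dict of arrays, so it converts directly to a DataFrame;
# sorting by rank_test_score puts the best parameter set first.
summary = (pd.DataFrame(search.cv_results_)
           [['params', 'mean_test_score', 'std_test_score', 'rank_test_score']]
           .sort_values('rank_test_score'))
print(summary.to_string(index=False))
```

`best_params_` gives only the winner; the ranked table also shows how close the runner-up grid points were.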
ncomp = 16
featofvar = 'Region and Type'
test = pcadataset.final.values[:, -ncomp:]
y = pcadataset.final[featofvar].values
for run in range(1):
X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=500, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
pcadataset.final['Pup and Slice'] = pcadataset.final['Pup'].map(str) + pcadataset.final['Slices']
pcadataset.final = pcadataset.final[['Pup and Slice'] + list(pcadataset.final.columns[:-1])]
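The composite key built above relies on `.map(str)` to cast the numeric `Pup` column before concatenating it with the string `Slices` column. A minimal sketch on a toy frame (the column values here are made up; only the construction mirrors the cell above):

```python
import pandas as pd

# 'Pup' is numeric, 'Slices' is a string column; direct `+` on the raw
# columns would raise a TypeError, hence the .map(str) cast first.
df = pd.DataFrame({'Pup': [2, 2, 3], 'Slices': ['1', '2', '3']})
df['Pup and Slice'] = df['Pup'].map(str) + df['Slices']
print(df['Pup and Slice'].tolist())
```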
from sklearn.model_selection import GridSearchCV, cross_val_score, KFold, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
# +
ncomp = 16
featofvar = 'Region and Type'
X_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['21', '22', '23', '31', '32'])].values[:, -ncomp:]
X_test = pcadataset.final[pcadataset.final['Pup and Slice']=='33'].values[:, -ncomp:]
y_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['21', '22', '23', '31', '32'])][featofvar].values
y_test = pcadataset.final[pcadataset.final['Pup and Slice']=='33'][featofvar].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
ncomp = 16
featofvar = 'Region and Type'
X_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['33', '22', '23', '31', '32'])].values[:, -ncomp:]
X_test = pcadataset.final[pcadataset.final['Pup and Slice']=='21'].values[:, -ncomp:]
y_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['33', '22', '23', '31', '32'])][featofvar].values
y_test = pcadataset.final[pcadataset.final['Pup and Slice']=='21'][featofvar].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
ncomp = 16
featofvar = 'Region and Type'
X_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['33', '21', '23', '31', '32'])].values[:, -ncomp:]
X_test = pcadataset.final[pcadataset.final['Pup and Slice']=='22'].values[:, -ncomp:]
y_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['33', '21', '23', '31', '32'])][featofvar].values
y_test = pcadataset.final[pcadataset.final['Pup and Slice']=='22'][featofvar].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
ncomp = 16
featofvar = 'Region and Type'
X_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['33', '21', '22', '31', '32'])].values[:, -ncomp:]
X_test = pcadataset.final[pcadataset.final['Pup and Slice']=='23'].values[:, -ncomp:]
y_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['33', '21', '22', '31', '32'])][featofvar].values
y_test = pcadataset.final[pcadataset.final['Pup and Slice']=='23'][featofvar].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
ncomp = 16
featofvar = 'Region and Type'
X_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['33', '21', '22', '23', '32'])].values[:, -ncomp:]
X_test = pcadataset.final[pcadataset.final['Pup and Slice']=='31'].values[:, -ncomp:]
y_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['33', '21', '22', '23', '32'])][featofvar].values
y_test = pcadataset.final[pcadataset.final['Pup and Slice']=='31'][featofvar].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
ncomp = 16
featofvar = 'Region and Type'
X_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['33', '21', '22', '31', '23'])].values[:, -ncomp:]
X_test = pcadataset.final[pcadataset.final['Pup and Slice']=='32'].values[:, -ncomp:]
y_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['33', '21', '22', '31', '23'])][featofvar].values
y_test = pcadataset.final[pcadataset.final['Pup and Slice']=='32'][featofvar].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# -
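The cells above and below repeat the same train/evaluate pattern, changing only which `Pup and Slice` value is held out. That leave-one-slice-out loop can be consolidated into one helper. This is a self-contained sketch, not the notebook's code: the synthetic DataFrame below only mimics the layout of `pcadataset.final` (a `'Pup and Slice'` key, a label column, feature columns at the end), and the tiny MLP settings are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def leave_one_slice_out(df, label_col, ncomp, slices, **mlp_kwargs):
    """Hold out each slice in turn; train on the rest and score the held-out one.

    Assumes the last `ncomp` columns of `df` are the numeric features,
    matching the `values[:, -ncomp:]` convention used in this notebook.
    """
    results = {}
    for held_out in slices:
        train_mask = df['Pup and Slice'].isin([s for s in slices if s != held_out])
        test_mask = df['Pup and Slice'] == held_out
        X_train = df[train_mask].values[:, -ncomp:].astype(float)
        X_test = df[test_mask].values[:, -ncomp:].astype(float)
        y_train = df[train_mask][label_col].values
        y_test = df[test_mask][label_col].values
        clf = MLPClassifier(**mlp_kwargs)
        clf.fit(X_train, y_train)
        results[held_out] = accuracy_score(y_test, clf.predict(X_test))
    return results

# Synthetic stand-in for pcadataset.final: two well-separated classes.
rng = np.random.default_rng(0)
n = 240
labels = rng.choice(['Cortex', 'Thalamus'], size=n)
feats = rng.normal(size=(n, 4)) + (labels == 'Cortex')[:, None] * 3.0
df = pd.DataFrame({'Pup and Slice': rng.choice(['21', '22', '23'], size=n),
                   'Region': labels})
df[['PC0', 'PC1', 'PC2', 'PC3']] = feats

scores = leave_one_slice_out(df, 'Region', 4, ['21', '22', '23'],
                             hidden_layer_sizes=(20,), max_iter=500,
                             random_state=0)
for s, acc in scores.items():
    print('Held-out slice {}: accuracy {:.3f}'.format(s, acc))
```

An extra particle-type filter (as in the PS- and PEG-only cells later) can be applied to `df` before calling the helper, so each repeated cell reduces to a one-line call.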
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22', '23', '31', '32', '33']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['21']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22', '23', '31', '32', '33']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['21']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '31', '32', '33']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '31', '32', '33']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22', '21', '31', '32', '33']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['23']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22', '21', '31', '32', '33']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['23']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22', '23', '21', '32', '33']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['31']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22', '23', '21', '32', '33']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['31']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22', '23', '31', '21', '33']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['32']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22', '23', '31', '21', '33']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['32']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22', '23', '31', '32', '21']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['33']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22', '23', '31', '32', '21']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['33']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# -
# +
import matplotlib.pyplot as plt
fig, ax1 = plt.subplots()
ax1.plot(clf.loss_curve_, linewidth=4)
#ax1.set_xlim([0, 60])
#ax1.set_ylim([0.04, 0.18])
ax1.set_xlabel('Epoch')
ax1.set_ylabel('Training loss')
ax2 = ax1.twinx()
ax2.plot(clf.validation_scores_, linewidth=4, c='g')
#ax2.set_ylim([0.65, 0.75])
ax2.set_ylabel('Validation score')
# +
X_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['22', '23', '31', '33'])].values[:, -ncomp:]
X_test = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['32', '21'])].values[:, -ncomp:]
y_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['22', '23', '31', '33'])][featofvar].values
y_test = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['32', '21'])][featofvar].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['21', '23', '32', '33'])].values[:, -ncomp:]
X_test = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['31', '22'])].values[:, -ncomp:]
y_train = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['21', '23', '32', '33'])][featofvar].values
y_test = pcadataset.final[pcadataset.final['Pup and Slice'].isin(['31', '22'])][featofvar].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '32', '33']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['31', '22']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '32', '33']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['31', '22']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '32', '33', '31']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '32', '33', '31']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['22']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '22', '33', '31']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['32']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '22', '33', '31']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['32']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '22', '33', '32']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['31']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '22', '33', '32']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final['Pup and Slice'].isin(['31']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '22', '33', '32']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['31']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '22', '33', '32']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['31']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '22', '33', '31']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['32']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['21', '23', '22', '33', '31']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['32']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['21', '32', '22', '33', '31']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['21', '32', '22', '33', '31']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23', '32', '22', '33', '31']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['21']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23', '32', '22', '33', '31']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['21']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23', '32', '21', '33', '31']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['22']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23', '32', '21', '33', '31']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['22']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23', '32', '21', '22', '31']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['33']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23', '32', '21', '22', '31']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['33']))]['Region'].values
for run in range(1):
#X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
early_stopping=True, validation_fraction=0.1)
clf.fit(X_train, y_train)
print('Training Results')
y_true1, y_pred1 = y_train, clf.predict(X_train)
print(classification_report(y_true1, y_pred1, digits=4))
print('Test Results')
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23', '32', '21', '31']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['33', '22']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23', '32', '21', '31']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['33', '22']))]['Region'].values
for run in range(1):
    #X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
    clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
                        alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
                        early_stopping=True, validation_fraction=0.1)
    clf.fit(X_train, y_train)
    print('Training Results')
    y_true1, y_pred1 = y_train, clf.predict(X_train)
    print(classification_report(y_true1, y_pred1, digits=4))
    print('Test Results')
    print()
    y_true, y_pred = y_test, clf.predict(X_test)
    print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['22', '33', '21', '31']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23', '32']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['22', '33', '21', '31']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23', '32']))]['Region'].values
for run in range(1):
    #X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
    clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
                        alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
                        early_stopping=True, validation_fraction=0.1)
    clf.fit(X_train, y_train)
    print('Training Results')
    y_true1, y_pred1 = y_train, clf.predict(X_train)
    print(classification_report(y_true1, y_pred1, digits=4))
    print('Test Results')
    print()
    y_true, y_pred = y_test, clf.predict(X_test)
    print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['22', '33', '21']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23', '32', '31']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['22', '33', '21']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['23', '32', '31']))]['Region'].values
for run in range(1):
    #X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
    clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
                        alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
                        early_stopping=True, validation_fraction=0.1)
    clf.fit(X_train, y_train)
    print('Training Results')
    y_true1, y_pred1 = y_train, clf.predict(X_train)
    print(classification_report(y_true1, y_pred1, digits=4))
    print('Test Results')
    print()
    y_true, y_pred = y_test, clf.predict(X_test)
    print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['22', '33', '21']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['32', '31']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['22', '33', '21']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['32', '31']))]['Region'].values
for run in range(1):
    #X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
    clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
                        alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
                        early_stopping=True, validation_fraction=0.1)
    clf.fit(X_train, y_train)
    print('Training Results')
    y_true1, y_pred1 = y_train, clf.predict(X_train)
    print(classification_report(y_true1, y_pred1, digits=4))
    print('Test Results')
    print()
    y_true, y_pred = y_test, clf.predict(X_test)
    print(classification_report(y_true, y_pred, digits=4))
# +
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['32', '33', '21', '23']))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['22', '31']))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['32', '33', '21', '23']))]['Region'].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final['Pup and Slice'].isin(['22', '31']))]['Region'].values
for run in range(1):
    #X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
    clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=300, tol=0.00001,
                        alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
                        early_stopping=True, validation_fraction=0.1)
    clf.fit(X_train, y_train)
    print('Training Results')
    y_true1, y_pred1 = y_train, clf.predict(X_train)
    print(classification_report(y_true1, y_pred1, digits=4))
    print('Test Results')
    print()
    y_true, y_pred = y_test, clf.predict(X_test)
    print(classification_report(y_true, y_pred, digits=4))
# -
# +
import matplotlib.pyplot as plt
fig, ax1 = plt.subplots()
ax1.plot(clf.loss_curve_, linewidth=4)
#ax1.set_xlim([0, 60])
#ax1.set_ylim([0.04, 0.18])
ax1.set_ylabel('Loss Curve')
ax2 = ax1.twinx()
ax2.plot(clf.validation_scores_, linewidth=4, c='g')
ax2.set_ylim([0.65, 0.75])
ax2.set_ylabel('Validation Scores')
# -
ncomp = 16
featofvar = 'Region'
test = pcadataset.final[pcadataset.final['Particle Type'] == 'PEG'].values[:, -ncomp:]
y = pcadataset.final[pcadataset.final['Particle Type'] == 'PEG'][featofvar].values
for run in range(1):
    X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
    clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=500, tol=0.00001,
                        alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
                        early_stopping=True, validation_fraction=0.1)
    clf.fit(X_train, y_train)
    print('Training Results')
    y_true1, y_pred1 = y_train, clf.predict(X_train)
    print(classification_report(y_true1, y_pred1, digits=4))
    print('Test Results')
    print()
    y_true, y_pred = y_test, clf.predict(X_test)
    print(classification_report(y_true, y_pred, digits=4))
# +
import matplotlib.pyplot as plt
fig, ax1 = plt.subplots()
ax1.plot(clf.loss_curve_, linewidth=4)
#ax1.set_xlim([0, 60])
#ax1.set_ylim([0.04, 0.18])
ax1.set_ylabel('Loss Curve')
ax2 = ax1.twinx()
ax2.plot(clf.validation_scores_, linewidth=4, c='g')
#ax2.set_ylim([0.65, 0.75])
ax2.set_ylabel('Validation Scores')
# -
ncomp = 16
featofvar = 'Region'
test = pcadataset.final[pcadataset.final['Particle Type'] == 'PS'].values[:, -ncomp:]
y = pcadataset.final[pcadataset.final['Particle Type'] == 'PS'][featofvar].values
for run in range(1):
    X_train, X_test, y_train, y_test = train_test_split(test, y, test_size=0.4)
    clf = MLPClassifier(hidden_layer_sizes=(600, ), solver='sgd', verbose=True, max_iter=500, tol=0.00001,
                        alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
                        early_stopping=True, validation_fraction=0.1)
    clf.fit(X_train, y_train)
    print('Training Results')
    y_true1, y_pred1 = y_train, clf.predict(X_train)
    print(classification_report(y_true1, y_pred1, digits=4))
    print('Test Results')
    print()
    y_true, y_pred = y_test, clf.predict(X_test)
    print(classification_report(y_true, y_pred, digits=4))
# +
fig, ax1 = plt.subplots()
ax1.plot(clf.loss_curve_, linewidth=4)
#ax1.set_xlim([0, 60])
#ax1.set_ylim([0.04, 0.18])
ax1.set_ylabel('Loss Curve')
ax2 = ax1.twinx()
ax2.plot(clf.validation_scores_, linewidth=4, c='g')
#ax2.set_ylim([0.65, 0.75])
ax2.set_ylabel('Validation Scores')
# -
pcadataset.final.hist(column=0, by='Region and Type', sharex=True, bins=np.linspace(-12, 12, 100),
                      figsize=(9, 7), grid=False, layout=(6, 1), sharey=True)
# +
y_true2 = fstats_tot['Region and Type'].values
labels3 = ['PEG_Cortex', 'PEG_']
size3 = np.random.rand(len(y_true2))
y_pred2 = list(pd.cut(size3, bins=[0, 0.1667, 0.3333, 0.5, 0.66667, 0.83333, 10], labels=mws).astype(str))
print(classification_report(y_true2, y_pred2, digits=4))
# -
0.75/0.16667
# ## Alternate binning
# +
bins = list(range(0, 2048+1, 256))
pcadataset.final['binx'] = pd.cut(pcadataset.final.X, bins, labels=[0, 1, 2, 3, 4, 5, 6, 7])
pcadataset.final['biny'] = pd.cut(pcadataset.final.Y, bins, labels=[0, 1, 2, 3, 4, 5, 6, 7])
# pd.cut returns categorical labels, which don't support arithmetic directly,
# so cast to float before combining (NaNs from out-of-range points survive the cast)
pcadataset.final['bins'] = 8*pcadataset.final['binx'].astype(float) + pcadataset.final['biny'].astype(float)
pcadataset.final = pcadataset.final[np.isfinite(pcadataset.final.bins)]
pcadataset.final.bins = pcadataset.final.bins.astype(int)
cols = pcadataset.final.columns.tolist()
cols = cols[-3:] + cols[:-3]
pcadataset.final = pcadataset.final[cols]
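As a self-contained illustration of the binning scheme above (toy coordinates, not the real dataset): `pd.cut` assigns each coordinate to one of 8 equal-width bins over [0, 2048], and `8*binx + biny` collapses the pair into a single tile id in 0..63.

```python
import pandas as pd

# Toy points standing in for particle (X, Y) positions on a 2048x2048 frame.
bins = list(range(0, 2048 + 1, 256))
pts = pd.DataFrame({'X': [10, 500, 2000], 'Y': [300, 1100, 2040]})

# labels=False returns integer bin codes directly, avoiding categorical dtype.
binx = pd.cut(pts.X, bins, labels=False)
biny = pd.cut(pts.Y, bins, labels=False)
tile = 8 * binx + biny   # single tile id per point, 0..63
print(list(tile))        # [1, 12, 63]
```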
def checkerboard(size):
    rows = int(size/2)
    checks = list(range(0, size*size, size+1))
    for i in range(1, rows):
        ssize = size - 2*i
        for j in range(0, ssize):
            checks.append(2*i + (size+1)*j)
    for i in range(1, rows):
        ssize = size - 2*i
        for j in range(0, ssize):
            checks.append(size*size - 1 - (2*i + (size+1)*j))
    checks.sort()
    return checks
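As a quick sanity check on the helper above (an illustrative aside, restating the function so the cell runs on its own): `checkerboard(8)` picks exactly half of the 64 bins, and the selection is symmetric about the center of the grid.

```python
# Self-contained check of the checkerboard() bin-selection helper.
def checkerboard(size):
    rows = int(size/2)
    checks = list(range(0, size*size, size+1))
    for i in range(1, rows):
        ssize = size - 2*i
        for j in range(0, ssize):
            checks.append(2*i + (size+1)*j)
    for i in range(1, rows):
        ssize = size - 2*i
        for j in range(0, ssize):
            checks.append(size*size - 1 - (2*i + (size+1)*j))
    checks.sort()
    return checks

picked = checkerboard(8)
print(len(picked))                              # 32: exactly half of the 64 bins
print(all(0 <= b < 64 for b in picked))         # every index is a valid bin id
print(all((63 - b) in picked for b in picked))  # symmetric about the grid center
```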
# +
featofvar = 'Region and Type'
ncomp = 16
training = [1, 3, 5, 7] + checkerboard(8)
X_train = pcadataset.final[pcadataset.final.bins.isin(training)].values[:, -ncomp:]
X_test = pcadataset.final[~pcadataset.final.bins.isin(training)].values[:, -ncomp:]
y_train = pcadataset.final[pcadataset.final.bins.isin(training)][featofvar].values
y_test = pcadataset.final[~pcadataset.final.bins.isin(training)][featofvar].values
for run in range(1):
    clf = MLPClassifier(hidden_layer_sizes=(900, ), solver='sgd', verbose=True, max_iter=500, tol=0.00001,
                        alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
                        early_stopping=True, validation_fraction=0.15)
    clf.fit(X_train, y_train)
    print('Training Results')
    y_true1, y_pred1 = y_train, clf.predict(X_train)
    print(classification_report(y_true1, y_pred1, digits=4))
    print('Test Results')
    print()
    y_true, y_pred = y_test, clf.predict(X_test)
    print(classification_report(y_true, y_pred, digits=4))
# +
featofvar = 'Region'
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final.bins.isin(training))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (~pcadataset.final.bins.isin(training))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (pcadataset.final.bins.isin(training))][featofvar].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PEG') & (~pcadataset.final.bins.isin(training))][featofvar].values
for run in range(1):
    clf = MLPClassifier(hidden_layer_sizes=(900, ), solver='sgd', verbose=True, max_iter=500, tol=0.00001,
                        alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
                        early_stopping=True, validation_fraction=0.15)
    clf.fit(X_train, y_train)
    print('Training Results')
    y_true1, y_pred1 = y_train, clf.predict(X_train)
    print(classification_report(y_true1, y_pred1, digits=4))
    print('Test Results')
    print()
    y_true, y_pred = y_test, clf.predict(X_test)
    print(classification_report(y_true, y_pred, digits=4))
# +
featofvar = 'Region'
X_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final.bins.isin(training))].values[:, -ncomp:]
X_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (~pcadataset.final.bins.isin(training))].values[:, -ncomp:]
y_train = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (pcadataset.final.bins.isin(training))][featofvar].values
y_test = pcadataset.final[(pcadataset.final['Particle Type'] == 'PS') & (~pcadataset.final.bins.isin(training))][featofvar].values
for run in range(1):
    clf = MLPClassifier(hidden_layer_sizes=(900, ), solver='sgd', verbose=True, max_iter=500, tol=0.00001,
                        alpha=0.001, batch_size=50, learning_rate_init=0.005, learning_rate='adaptive',
                        early_stopping=True, validation_fraction=0.15)
    clf.fit(X_train, y_train)
    print('Training Results')
    y_true1, y_pred1 = y_train, clf.predict(X_train)
    print(classification_report(y_true1, y_pred1, digits=4))
    print('Test Results')
    print()
    y_true, y_pred = y_test, clf.predict(X_test)
    print(classification_report(y_true, y_pred, digits=4))
# -
| notebooks/development/12_10_18_Regional_PCA_NN_alt.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# In this chapter we will make some "grids". A grid is a concept in math and computing, formed from horizontal lines called rows and vertical lines called columns.
#
# Why should we bother with grids? Grids are used to define all kinds of designs, e.g. website layouts, network diagrams, etc.
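Before reaching for a library, a grid can be sketched in plain Python as a list of rows, where each row is a list of column values (a minimal illustration, separate from ipythonblocks):

```python
# A 3x4 grid as a list of rows; each row is a list of column values.
rows, cols = 3, 4
grid = [[0 for c in range(cols)] for r in range(rows)]

grid[1][2] = 9          # set the cell at row 1, column 2
print(len(grid))        # number of rows: 3
print(len(grid[0]))     # number of columns: 4
print(grid[1][2])       # 9
```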
# !pip install -q ipythonblocks
from ipythonblocks import BlockGrid
# "BlockGrid" takes the size and color of the grid. Let us say we want to create a 10x10 grid and color it green. BlockGrid can be called just like print can - so we can call help on it.
help(BlockGrid)
grid = BlockGrid(10, 10, fill=(123, 234, 123))
grid
# A grid has rows and columns. We can use these rows and columns to get different parts of the grid. Now let's look at one small part - what do you think grid[0,0] should give us?
grid[0,0]
# Try changing colors - a color is composed of RGB, or Red, Green, and Blue, values.
grid[0,0].red = 100
grid[0,0].green = 15
grid[0,0].blue = 15
grid[0,0]
# Try different color values now and see what you get.
| notebooks/python3/grids.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/mkirby1995/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments/blob/master/DS_Unit_1_Sprint_Challenge_4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="NooAiTdnafkz" colab_type="text"
# # Data Science Unit 1 Sprint Challenge 4
#
# ## Exploring Data, Testing Hypotheses
#
# In this sprint challenge you will look at a dataset of people being approved or rejected for credit.
#
# https://archive.ics.uci.edu/ml/datasets/Credit+Approval
#
# Data Set Information: This file concerns credit card applications. All attribute names and values have been changed to meaningless symbols to protect confidentiality of the data. This dataset is interesting because there is a good mix of attributes -- continuous, nominal with small numbers of values, and nominal with larger numbers of values. There are also a few missing values.
#
# Attribute Information:
# - A1: b, a.
# - A2: continuous.
# - A3: continuous.
# - A4: u, y, l, t.
# - A5: g, p, gg.
# - A6: c, d, cc, i, j, k, m, r, q, w, x, e, aa, ff.
# - A7: v, h, bb, j, n, z, dd, ff, o.
# - A8: continuous.
# - A9: t, f.
# - A10: t, f.
# - A11: continuous.
# - A12: t, f.
# - A13: g, p, s.
# - A14: continuous.
# - A15: continuous.
# - A16: +,- (class attribute)
#
# Yes, most of that doesn't mean anything. A16 (the class attribute) is the most interesting, as it separates the 307 approved cases from the 383 rejected cases. The remaining variables have been obfuscated for privacy - a challenge you may have to deal with in your data science career.
#
# Sprint challenges are evaluated based on satisfactory completion of each part. It is suggested you work through it in order, getting each aspect reasonably working, before trying to deeply explore, iterate, or refine any given step. Once you get to the end, if you want to go back and improve things, go for it!
# + [markdown] id="5wch6ksCbJtZ" colab_type="text"
# ## Part 1 - Load and validate the data
#
# - Load the data as a `pandas` data frame.
# - Validate that it has the appropriate number of observations (you can check the raw file, and also read the dataset description from UCI).
# - UCI says there should be missing data - check, and if necessary change the data so pandas recognizes it as na
# - Make sure that the loaded features are of the types described above (continuous values should be treated as float), and correct as necessary
#
# This is review, but skills that you'll use at the start of any data exploration. Further, you may have to do some investigation to figure out which file to load from - that is part of the puzzle.
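One way to handle the missing-data step above in a single pass (a sketch on toy data, not the solution cells below): `pandas.read_csv` accepts `na_values`, so the `'?'` markers the UCI file uses can be mapped to NaN at load time, which also lets numeric columns come in as floats automatically.

```python
import io
import pandas as pd

# Toy CSV standing in for the UCI file, which marks missing values with '?'.
sample = io.StringIO("A1,A2\nb,30.83\n?,58.67\na,?\n")
df_demo = pd.read_csv(sample, na_values='?')

print(df_demo.isna().sum().sum())   # 2: both '?' cells became NaN
print(df_demo['A2'].dtype)          # float64: numeric once '?' reads as NaN
```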
# + [markdown] id="MoyTqr_BRUcl" colab_type="text"
# **Import and Validate**
# + id="lTKEwiiNI6zM" colab_type="code" colab={}
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import chisquare
from scipy import stats
# + id="Q79xDLckzibS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 222} outputId="e7955844-0ba6-4d5e-fee9-ac281c7184bf"
csv = 'https://archive.ics.uci.edu/ml/machine-learning-databases/credit-screening/crx.data'
names = ['A1', 'A2','A3','A4','A5','A6','A7','A8','A9','A10','A11','A12','A13','A14','A15','Approved']
df = pd.read_csv(csv, header=None, names=names)
print('DataFrame shape:',df.shape)
df.head()
# + [markdown] id="aFQ-Q235Ra3I" colab_type="text"
#
#
# ---
#
#
# + [markdown] id="WQmDWNXcRgz1" colab_type="text"
# **Encoding and Typecasting**
# + id="TU8Pdwv-IzrD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 323} outputId="592a9382-87cf-4208-84be-a581206e431b"
df.dtypes
# + id="xTljZaSbHu8q" colab_type="code" colab={}
cont_columns = ['A2','A14']
for _ in cont_columns:
    df[_] = df[_].replace({'?': np.NaN})
    df[_] = df[_].astype(float)
# + id="Og_nRf_8JgQ5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 323} outputId="cf01cf12-df1b-4245-bce2-61d07cc39223"
df.dtypes
# + id="PehKUM_aJzqL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 173} outputId="f5d2601b-c19c-4e72-cc23-df667e9ef558"
df.describe(exclude='number')
# + id="5o6R7r0NKri7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="ac1d10b4-fff8-460f-a581-030917b1162f"
df.describe()
# + id="IN3VuFyqMeKB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="0de786f7-831e-4b66-ab92-04759076e65a"
df['Approved'].value_counts()
# + id="aSkrj5hHLIbF" colab_type="code" colab={}
df['A1'] = df['A1'].replace({'?': np.NaN})
df['A4'] = df['A4'].replace({'?': np.NaN})
df['A5'] = df['A5'].replace({'?': np.NaN})
df['A6'] = df['A6'].replace({'?': np.NaN})
df['A7'] = df['A7'].replace({'?': np.NaN})
df['A9'] = df['A9'].replace({'t': 1, 'f': 0})
df['A10'] = df['A10'].replace({'t': 1, 'f': 0})
df['A12'] = df['A12'].replace({'t': 1, 'f': 0})
df['Approved'] = df['Approved'].replace({'+': 1, '-': 0})
# + [markdown] id="NDVYcpfHRtsH" colab_type="text"
#
#
# ---
#
#
# + [markdown] id="aEfvQTJoRuj0" colab_type="text"
# **Create Approved and Rejected DataFrames**
# + id="skshjkm9NlFw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="6defdd7e-cd3f-4555-ff4b-0d583d70b667"
df = df.sort_values(by = 'Approved')
df.tail()
# + id="HDvs1tRGNz3z" colab_type="code" colab={}
approved = df.iloc[383:]
rejected = df.iloc[:383]
# + id="aXSKdN9wN_U-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 222} outputId="75b3e0bc-6004-4bea-b50b-c5219c90f883"
print(approved.shape)
approved.head()
# + id="ypVEPUxQPbTc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 53} outputId="16a682ea-e0ab-4560-810c-3aad5ab3ed21"
approved['Approved'].value_counts()
# + id="C5LSllF0N7pQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 222} outputId="ff2bc9fa-512b-42c4-911c-8f634f9151ca"
print(rejected.shape)
rejected.head()
# + id="WKE7n6FMPf_8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 53} outputId="39240451-face-4bff-e481-a7711b47b57f"
rejected['Approved'].value_counts()
# + id="Gd_8iYK5QkHN" colab_type="code" colab={}
numeric_cols = ['A2','A3','A8','A11','A14','A15', 'A9', 'A10', 'A12']
# + [markdown] id="Kuu6WhoIRygt" colab_type="text"
#
#
# ---
#
#
# + [markdown] id="G7rLytbrO38L" colab_type="text"
# ## Part 2 - Exploring data, Testing hypotheses
#
# The only thing we really know about this data is that A16 is the class label. Besides that, we have 6 continuous (float) features and 9 categorical features.
#
# Explore the data: you can use whatever approach (tables, utility functions, visualizations) to get an impression of the distributions and relationships of the variables. In general, your goal is to understand how the features are different when grouped by the two class labels (`+` and `-`).
#
# For the 6 continuous features, how are they different when split between the two class labels? Choose two features to run t-tests (again split by class label) - specifically, select one feature that is *extremely* different between the classes, and another feature that is notably less different (though perhaps still "statistically significantly" different). You may have to explore more than two features to do this.
#
# For the categorical features, explore by creating "cross tabs" (aka [contingency tables](https://en.wikipedia.org/wiki/Contingency_table)) between them and the class label, and apply the Chi-squared test to them. [pandas.crosstab](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.crosstab.html) can create contingency tables, and [scipy.stats.chi2_contingency](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2_contingency.html) can calculate the Chi-squared statistic for them.
#
# There are 9 categorical features - as with the t-test, try to find one where the Chi-squared test returns an extreme result (rejecting the null that the data are independent), and one where it is less extreme.
#
# **NOTE** - "less extreme" just means smaller test statistic/larger p-value. Even the least extreme differences may be strongly statistically significant.
#
# Your *main* goal is the hypothesis tests, so don't spend too much time on the exploration/visualization piece. That is just a means to an end - use simple visualizations, such as boxplots or a scatter matrix (both built in to pandas), to get a feel for the overall distribution of the variables.
#
# This is challenging, so manage your time and aim for a baseline of at least running two t-tests and two Chi-squared tests before polishing. And don't forget to answer the questions in part 3, even if your results in this part aren't what you want them to be.
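A minimal sketch of the Chi-squared workflow described above, pairing `pandas.crosstab` with `scipy.stats.chi2_contingency` (toy data, not the credit dataset):

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Toy class labels and one toy categorical feature.
label = pd.Series(['+', '+', '-', '-', '+', '-', '+', '-'])
feature = pd.Series(['t', 't', 'f', 'f', 't', 'f', 'f', 't'])

# Contingency table: rows are class labels, columns are feature levels.
table = pd.crosstab(label, feature)
chi2, p, dof, expected = chi2_contingency(table)

print(dof)              # (rows-1) * (cols-1) = 1 for a 2x2 table
print(0.0 <= p <= 1.0)  # True
```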
# + [markdown] id="BZyWOkL2ROaI" colab_type="text"
# **Scatter Matricies**
# + id="jqxpbOMAPn-j" colab_type="code" colab={}
sns.pairplot(df, kind='reg', plot_kws={'line_kws':{'color':'red'}, 'scatter_kws': {'alpha': 0.1}})
# + id="_nqcgc0yzm68" colab_type="code" colab={}
sns.pairplot(approved, kind='reg', plot_kws={'line_kws':{'color':'red'}, 'scatter_kws': {'alpha': 0.1}})
# + id="FFO58MY-Ppjj" colab_type="code" colab={}
sns.pairplot(rejected, kind='reg', plot_kws={'line_kws':{'color':'red'}, 'scatter_kws': {'alpha': 0.1}})
# + [markdown] id="5alXpWczRNr3" colab_type="text"
#
#
# ---
#
#
# + [markdown] id="bCFFVL1_UPA9" colab_type="text"
# **Explore Means**
# + id="t75zfLrpUQ4n" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 215} outputId="4a1e03b8-f66b-4fe9-a2ca-abb426cf3e96"
approved.describe().T['mean']
# + id="v8dIbywwUTAe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 215} outputId="24155511-026b-4547-a14e-1359e94c90ff"
rejected.describe().T['mean']
# + id="7OJV_of7Ue38" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 197} outputId="82d8238c-e3ba-4100-94fc-98e981bba399"
# Take means directly by column name so the labels below line up with the
# values (describe() orders columns by DataFrame position, not numeric_cols order)
a = approved[numeric_cols].mean()
r = rejected[numeric_cols].mean()
print('Difference in means')
for col in numeric_cols:
    print(col, a[col] - r[col])
# + [markdown] id="tHUqT9SAUQ--" colab_type="text"
#
#
# ---
#
#
# + [markdown] id="lTdVDIzCRLPZ" colab_type="text"
# **T-Tests**
# + id="6a7KgIE0QE3I" colab_type="code" colab={}
from scipy import stats
def double_t_test(issue):
    """Two-sided test for the null hypothesis that two independent
    samples have identical average values."""
    # Test function from scipy
    two_sided_test = stats.ttest_ind
    # Sample A: Approved
    App = approved[issue]
    # Sample B: Rejected
    Rej = rejected[issue]
    # Run the t-test, ignoring NaNs
    stat = two_sided_test(App, Rej, nan_policy='omit')
    return stat
# + id="_IctVxdwQaRJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 827} outputId="119ef2a8-cced-4dd8-f552-2a20d89c8338"
t_test_results = []
for _ in numeric_cols:
    t_stat, p_value = double_t_test(_)
    t_test_results.append(t_stat)
    if p_value < .05:
        print('\n', _, '\nt statistic', t_stat, '\np-value', p_value, '\nReject Null')
    else:
        print('\n', _, '\nt statistic', t_stat, '\np-value', p_value, '\nFail to Reject Null')
# + [markdown] id="SF2EqWWSRJ6b" colab_type="text"
#
#
# ---
#
#
# + [markdown] id="nnrwQ6SET1Fb" colab_type="text"
# **Plot t-test Results**
# + id="uW7Rr4dpT3qT" colab_type="code" colab={}
Acc = []
Rej = []
for i in t_test_results:
    if i >= 0:
        Acc.append(i)
    else:
        Acc.append(0)
for i in t_test_results:
    if i < 0:
        Rej.append(i)
    else:
        Rej.append(0)
# + id="XUFGPdVyWyk2" colab_type="code" colab={}
from pylab import rcParams
rcParams['figure.figsize'] = 19, 10
# + id="xknZbtmXWvoO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 592} outputId="04a80f02-c7cb-4d5a-948e-8d65eb059072"
N = len(numeric_cols)
import matplotlib.pyplot as plt
# the x locations for the groups
ind = np.arange(N)
# the width of the bars: can also be len(x) sequence
width = 0.8
# Bars for surplus
p1 = plt.bar(numeric_cols, Acc, width, color='#85CB33')
# Bars for deficit
p2 = plt.bar(numeric_cols, Rej, width, color ='#3B341F')
plt.ylabel('+ = Accepted, - = Rejected')
plt.xticks(numeric_cols)
plt.yticks(np.arange(-5, 30, 5))
plt.grid(b=True, which='major', axis='x',color='black', linestyle=':', linewidth=1, alpha=.3)
plt.show()
# + [markdown] id="xGGg1rozT3w4" colab_type="text"
#
#
# ---
#
#
# + [markdown] id="N_5jHXnzT6CL" colab_type="text"
# **Chi-Squared Tests**
# + id="RiLcXtoyaJgW" colab_type="code" colab={}
cat_cols= ['A1', 'A4', 'A5', 'A6', 'A7', 'A9', 'A10', 'A12', 'A13']
# + id="zLAdKDDWXcCJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 665} outputId="de6ba609-a7eb-4972-ba80-0630b9eaa2b0"
for _ in cat_cols:
    table = pd.crosstab(df['Approved'], df[_])
    # Note: chisquare with axis=None treats the whole table as one
    # goodness-of-fit sample; scipy.stats.chi2_contingency is the usual
    # test of independence on a contingency table.
    chi_stat, p_value = chisquare(table, axis=None)
    print('\n', _,
          '\nchi statistic', chi_stat,
          '\np-value', p_value)
# + [markdown] id="gERGVV5-T8gw" colab_type="text"
#
#
# ---
#
#
# + [markdown] id="hT8DqSzVg2u2" colab_type="text"
# **Visualize Chi-Squared Results**
# + [markdown] id="i3WU2YGVkOCD" colab_type="text"
# A7
# + id="g3kQRCpdkLiL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="6cbd45fa-ff31-4b36-8f03-d2fc8589d528"
# Calculate our contingency table with margins
a7_cont = pd.crosstab(
    df['Approved'],
    df['A7'],
    normalize='columns')
a7_cont
# + id="MCCbV9O0kNlF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 327} outputId="3f611deb-595c-40c5-cf67-bdb13726d095"
#Assigns the frequency values
ap = a7_cont.iloc[1][0:9].values
re = a7_cont.iloc[0][0:9].values
#Plots the bar chart
fig = plt.figure(figsize=(10, 5))
sns.set(font_scale=1.8)
categories = ['bb', 'dd', 'ff', 'h', 'j', 'n', 'o', 'v', 'z']
p1 = plt.bar(categories, re, 0.55, color='#3B341F')
p2 = plt.bar(categories, ap, 0.55, bottom=re, color='#85CB33')
plt.legend((p2[0], p1[0]), ('Approved', 'Rejected'))
plt.ylabel('Count')
plt.show()
# + [markdown] id="3eCOvS7pk55A" colab_type="text"
#
#
# ---
#
#
# + [markdown] id="9J1t3R1Sk633" colab_type="text"
# A4
# + id="yS64obw9k7Ez" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="695e3d49-cc5f-4acf-89d4-aaedb0dd3895"
# Calculate our contingency table with margins
a4_cont = pd.crosstab(
    df['Approved'],
    df['A4'],
    normalize='columns')
a4_cont
# + id="Q1EKQQHMk7Ng" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 327} outputId="63c46276-2d55-436c-aae8-6023f25a5f8e"
#Assigns the frequency values
ap = a4_cont.iloc[1][0:3].values
re = a4_cont.iloc[0][0:3].values
#Plots the bar chart
fig = plt.figure(figsize=(10, 5))
sns.set(font_scale=1.8)
categories = ['l', 'u', 'y']  # crosstab columns in alphabetical order
p1 = plt.bar(categories, re, 0.55, color='#3B341F')
p2 = plt.bar(categories, ap, 0.55, bottom=re, color='#85CB33')
plt.legend((p2[0], p1[0]), ('Approved', 'Rejected'))
plt.ylabel('Count')
plt.show()
# + [markdown] id="JjH7bZFHk7Zk" colab_type="text"
#
#
# ---
#
#
# + [markdown] id="QRXllgcikFp-" colab_type="text"
# A13
# + id="jCw_4kTZhmIB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="4d1cb5eb-5935-48a8-ec0a-5b9e0bab5c7f"
# Calculate our contingency table with margins
a13_cont = pd.crosstab(
    df['Approved'],
    df['A13'],
    normalize='columns')
a13_cont
# + id="9_RxghCTg21n" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 327} outputId="46bf3e54-204d-438f-ef9a-714158423744"
#Assigns the frequency values
ap = a13_cont.iloc[1][0:3].values
re = a13_cont.iloc[0][0:3].values
#Plots the bar chart
fig = plt.figure(figsize=(10, 5))
categories = ['g', 'p', 's']  # crosstab columns in alphabetical order
p1 = plt.bar(categories, re, 0.55, color='#3B341F')
p2 = plt.bar(categories, ap, 0.55, bottom=re, color='#85CB33')
plt.legend((p2[0], p1[0]), ('Approved', 'Rejected'))
plt.ylabel('Count')
plt.show()
# + [markdown] id="UNNpWntXg29C" colab_type="text"
#
#
# ---
#
#
# + [markdown] id="ZM8JckA2bgnp" colab_type="text"
# ## Part 3 - Analysis and Interpretation
#
# Now that you've looked at the data, answer the following questions:
#
# - Interpret and explain the two t-tests you ran - what do they tell you about the relationships between the continuous features you selected and the class labels?
# - Interpret and explain the two Chi-squared tests you ran - what do they tell you about the relationships between the categorical features you selected and the class labels?
# - What was the most challenging part of this sprint challenge?
#
# Answer with text, but feel free to intersperse example code/results or refer to it from earlier.
# + [markdown] id="LIozLDNG2Uhu" colab_type="text"
# **t-test Interpretation**
# + [markdown] id="UhGP4ksBZVpo" colab_type="text"
# A9 value = 't' is correlated with approval
# + [markdown] id="PB_aO78QZXDH" colab_type="text"
#
#
# ---
#
#
# + [markdown] id="FAoCD0VyZXi1" colab_type="text"
# **Chi Squared Interpretation**
# + [markdown] id="9l__ENelZeAy" colab_type="text"
# A13 value = 'p' correlated with rejection
#
# A13 value = 's' correlated with approval
#
# A7 = 'ff' correlated with rejection
#
# A7 = 'z' correlated with approval
#
# A4 = 'y' correlated with rejection
# + [markdown] id="1fDYrr_XZeEZ" colab_type="text"
#
#
# ---
#
#
# + [markdown] id="Y5CODzKBZfhL" colab_type="text"
# **Most Challenging Part of Sprint Challenge**
# + [markdown] id="XajpLHtsZjl1" colab_type="text"
# For me the most challenging part of this sprint challenge is the interpretation. I feel fairly confident with cleaning my data, implementing the tests, and visualizing the results, but I still have to think quite hard about what my test results mean in the context of the problem.
# + [markdown] id="q8bG9hTtZjjb" colab_type="text"
#
#
# ---
#
#
| DS_Unit_1_Sprint_Challenge_4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import random
def diamond(width, height, h):
    if h < height//2:
        inset = int(((height//2-h) * (width//2))/(height//2))
    else:
        inset = int(((h-height//2) * (width//2))/(height//2))
    row = (
        [random.randint(0, int(i*256/inset)) for i in range(inset)]
        + [0xff for j in range(width-2*inset)]
        # cap at 255 so every value fits in a byte (randint's upper bound is inclusive)
        + [random.randint(0, min(255, int((inset-k)*256/inset))) for k in range(inset)]
    )
    return row
def ascii_art(width, height, scaled_b_w) -> None:
grayscale = "$@B%8&WM#*oahkbdpqwmZO0QLCJUYXzcft/\\|()1{}[]?-_+~<>i!lI;:,\"^`'. "
for r in range(height):
gray_char = lambda b: int(len(grayscale) * (b / 256))
gray = map(gray_char, scaled_b_w[r * width : (r + 1) * width])
text = "".join(grayscale[g] for g in gray)
print(text)
# +
width = height = 32
image = bytes(
b
for h in range(height)
for b in diamond(width, height, h)
)
ascii_art(width, height, image)
# -
| ch_14/tests/TestImage.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (Data Science)
# language: python
# name: python_ds
# ---
# STEP 1: Import package code
from plantcv import plantcv as pcv
import matplotlib
# +
# STEP 2: Set global variables
matplotlib.rcParams["figure.figsize"] = [8,8]
nir_img = "img1_nir.png"
rgb_img = "img2_rgb.png"
# -
# STEP 3: Read RGB and NIR images
im_rgb, path_rgb, filename_rgb = pcv.readimage(filename=rgb_img, mode="rgb")
im_nir, path_nir, filename_nir = pcv.readimage(filename=nir_img, mode="native")
# +
# # STEP 4: Fuse two images
# wv_nir = [800.0] # 800nm is a representative wavelength of near-infrared
# wv_rgb = [480.0, 550.0, 670.0] # 480nm, 550nm, and 670nm are representative wavelengths for blue, green, and red, respectively
# fused_img = pcv.image_fusion(nir_img, rgb_img, wv_nir, wv_rgb, array_type="nir-vis_fusion", filename="fused_im")
# -
| data-fusion/plantcv-data-fusion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: jax-md
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
from mpl_toolkits import mplot3d
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
import matplotlib.pylab as pylab
params = {'axes.labelsize': '20', 'font.weight' : 10}
plt.rcParams.update(params)
plt.rcParams["font.family"] = "serif"  # e.g. Times New Roman
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
fig = plt.figure()
ax1 = plt.axes(projection='3d')
ax1.set_xlabel('gauge invariant')
ax1.set_ylabel(r'$pure ~gauge$', fontweight=900)
ax1.set_zlabel('density', fontweight=500)
ax1.xaxis.set_ticklabels([])
ax1.yaxis.set_ticklabels([])
ax1.zaxis.set_ticklabels([])
for line in ax1.xaxis.get_ticklines():
line.set_visible(False)
for line in ax1.yaxis.get_ticklines():
line.set_visible(False)
for line in ax1.zaxis.get_ticklines():
line.set_visible(False)
ax1.w_xaxis.pane.fill = False
ax1.w_yaxis.pane.fill = False
ax1.w_zaxis.pane.fill = False
ax1.w_xaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
ax1.w_yaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
ax1.w_zaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
def f(x, y):
return (np.exp(-(x-y)**2) + np.exp(-(x+y)**2))
def g(x, y):
return np.exp(-y**2) * (np.exp(-(x-y)**2) + np.exp(-(x+y)**2))
def norm(y):
return 1./np.exp(-y**2)
x = np.linspace(-3, 3, 30)
y = np.linspace(-3, 3, 30)
X, Y = np.meshgrid(x, y)
Z1 = f(X, Y)
Z2 = g(X, Y)
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
fig = plt.figure()
fig = plt.figure(figsize=plt.figaspect(.5))
ax1 = plt.axes(projection='3d')
norm = plt.Normalize(Z2.min(), Z2.max())
colors = cm.viridis(norm(Z2))
rcount, ccount, _ = colors.shape
#ax.contour3D(X, Y, Z, 30, cmap='binary')
rcount=1
ax1.plot_surface(X, Y, Z2, rcount=ccount, facecolors=colors, ccount=ccount, shade=False, alpha=.3)
#ax1.plot_wireframe(X, Y, Z2, rstride=150, cstride=100, color='grey',lw=1)
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_zlabel('density')
#ax1.xaxis.set_ticklabels([])
#ax1.yaxis.set_ticklabels([])
#ax1.xaxis.set_ticklabels([])
#ax1.yaxis.set_ticklabels([])
ax1.zaxis.set_ticklabels([])
for line in ax1.xaxis.get_ticklines():
line.set_visible(False)
for line in ax1.yaxis.get_ticklines():
line.set_visible(False)
for line in ax1.zaxis.get_ticklines():
line.set_visible(False)
#ax1.w_xaxis.pane.fill = False
#ax1.w_yaxis.pane.fill = False
#ax1.w_zaxis.pane.fill = False
ax1.w_xaxis.set_pane_color((0.0, 0.0, 0.0, 0.10))
ax1.w_yaxis.set_pane_color((0.0, 0.0, 0.0, .05))
#ax1.w_zaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
#ax1.zaxis._axinfo['label']['space_factor'] = .1
ax1.zaxis.labelpad=-10
ax1.xaxis.labelpad=10
ax1.yaxis.labelpad=10
#ax1.grid(b=None)
#plt.axis('off')
ax1.grid(False)
ax1.view_init(40, -60)
plt.savefig('./assets/schematic_p_xy.png')
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
xline = 1.75+0.*x
yline = 1*y
zline = g(xline,yline)
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
fig = plt.figure()
fig = plt.figure(figsize=plt.figaspect(.5))
ax1 = plt.axes(projection='3d')
norm = plt.Normalize(Z2.min(), Z2.max())
colors = cm.viridis(norm(Z2))
rcount, ccount, _ = colors.shape
#ax.contour3D(X, Y, Z, 30, cmap='binary')
rcount=1
ax1.plot_surface(X, Y, Z2, rcount=rcount, facecolors=colors, ccount=ccount, shade=False, alpha=.3)#, label='p(X,Y)')
ax1.plot3D(xline, yline, zline, 'blue', label='p(X=x,Y)')
ax1.plot3D(xline, yline, 3*zline, 'red',label='p(Y|X=x)')
#ax1.plot_wireframe(X, Y, Z2, rstride=150, cstride=100, color='grey',lw=1)
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_zlabel('density')
#ax1.xaxis.set_ticklabels([])
#ax1.yaxis.set_ticklabels([])
ax1.zaxis.set_ticklabels([])
for line in ax1.xaxis.get_ticklines():
line.set_visible(False)
for line in ax1.yaxis.get_ticklines():
line.set_visible(False)
for line in ax1.zaxis.get_ticklines():
line.set_visible(False)
#ax1.w_xaxis.pane.fill = False
#ax1.w_yaxis.pane.fill = False
#ax1.w_zaxis.pane.fill = False
ax1.w_xaxis.set_pane_color((0.0, 0.0, 0.0, 0.10))
ax1.w_yaxis.set_pane_color((0.0, 0.0, 0.0, .05))
#ax1.w_zaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
#ax1.zaxis._axinfo['label']['space_factor'] = .1
ax1.zaxis.labelpad=-10
ax1.xaxis.labelpad=10
ax1.yaxis.labelpad=10
#ax1.grid(b=None)
#plt.axis('off')
ax1.grid(False)
ax1.view_init(40, -60)
plt.legend()
plt.savefig('./assets/schematic_p_y_given_x.png')
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
xline = 1.*x
yline = -1.15+0*y
zline = g(xline,yline)
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
fig = plt.figure()
fig = plt.figure(figsize=plt.figaspect(.5))
ax1 = plt.axes(projection='3d')
norm = plt.Normalize(Z2.min(), Z2.max())
colors = cm.viridis(norm(Z2))
rcount, ccount, _ = colors.shape
#ax.contour3D(X, Y, Z, 30, cmap='binary')
rcount=1
ax1.plot_surface(X, Y, Z2, rcount=ccount, facecolors=colors, ccount=rcount, shade=False, alpha=.3)#, label='p(X,Y)')
ax1.plot3D(xline, yline, zline, 'blue', label='p(X,Y=y)')
ax1.plot3D(xline, yline, 6*zline, 'red',label='p(X|Y=y)')
#ax1.plot_wireframe(X, Y, Z2, rstride=150, cstride=100, color='grey',lw=1)
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_zlabel('density')
#ax1.xaxis.set_ticklabels([])
#ax1.yaxis.set_ticklabels([])
ax1.zaxis.set_ticklabels([])
for line in ax1.xaxis.get_ticklines():
line.set_visible(False)
for line in ax1.yaxis.get_ticklines():
line.set_visible(False)
for line in ax1.zaxis.get_ticklines():
line.set_visible(False)
#ax1.w_xaxis.pane.fill = False
#ax1.w_yaxis.pane.fill = False
#ax1.w_zaxis.pane.fill = False
ax1.w_xaxis.set_pane_color((0.0, 0.0, 0.0, 0.10))
ax1.w_yaxis.set_pane_color((0.0, 0.0, 0.0, .05))
#ax1.w_zaxis.set_pane_color((0.0, 0.0, 0.0, 0.0))
#ax1.zaxis._axinfo['label']['space_factor'] = .1
ax1.zaxis.labelpad=-10
ax1.xaxis.labelpad=10
ax1.yaxis.labelpad=10
#ax1.grid(b=None)
#plt.axis('off')
ax1.grid(False)
ax1.view_init(40, -60)
plt.legend()
plt.savefig('./assets/schematic_p_x_given_y.png')
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
| book/correlation_schematic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# [Table of contents](../toc.ipynb)
#
# # Yet another Python course?
#
# As said, Python is so popular that there are plenty of online courses, tutorials, and books available. Here is a short list of great free Python learning resources:
#
# * A free online tutorial for Python beginners on [tutorialspoint.com](https://www.tutorialspoint.com/python/index.htm)
# * Book: [A Whirlwind Tour of Python](https://jakevdp.github.io/WhirlwindTourOfPython/) [[VanderPlas2016]](./references.bib)
# * IPython cloud service [Python anywhere](https://www.pythonanywhere.com/try-ipython/)
# * Interactive set of Python tutorials on [learnpython.org](https://www.learnpython.org/)
# * Python data science cheat sheets on [datacamp.com](https://www.datacamp.com/community/data-science-cheatsheets) [[PyCheatSheats]](./references.bib)
# * Official Python Documentation on [docs.python.org](https://docs.python.org/3/tutorial/index.html)
#
# There is much more!
#
# There are also plenty of commercial tutorials for learning Python, e.g. on
# * [Datacamp.org](https://www.datacamp.com/courses/intro-to-python-for-data-science)
# * [Udacity.com](https://www.udacity.com/course/introduction-to-python--ud1110)
# * [Coursera.org](https://www.coursera.org/learn/python)
# * ...
#
# and some of these Python introduction courses are free as well.
#
# The following sections on Python syntax and semantics are partly adapted from *A Whirlwind Tour of Python* [[VanderPlas2016]](./references.bib), which is under CC0 license.
# + [markdown] slideshow={"slide_type": "slide"}
# # Python syntax and semantics reference
#
# Please find overall syntax and semantics reference in [The Python Language Reference](https://docs.python.org/3/reference/index.html) [[PyReference]](./references.bib), and all functionality of Python in the [The Python Standard Library reference](https://docs.python.org/3/library/index.html#library-index) [[PyStandardLib]](./references.bib).
#
# However, we will briefly review the syntax herein to get you started.
#
# The syntax is the structure of a language, or how to write it. The semantics is the meaning of the language or how it is interpreted.
# + [markdown] slideshow={"slide_type": "slide"}
# # Python syntax
#
# Python was designed for high readability. Hence, parentheses, semicolons, and the like are rarely used, and keywords are mostly easy-to-read English words. As a result, Python code often looks like pseudo code.
#
# ## Keywords
#
# Python has reserved words which cannot be used as variable or identifier names (names of functions or classes).
#
# `and as assert async await break class continue def del elif else except False finally for from global if import in is lambda None nonlocal not or pass raise return True try while with yield`
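# The standard library's `keyword` module can be used to check whether a name is reserved, as in this short sketch:

```python
import keyword

# Reserved words cannot be used as variable, function, or class names.
print(keyword.iskeyword("lambda"))  # True
print(keyword.iskeyword("my_var"))  # False
print("class" in keyword.kwlist)    # True: kwlist holds all reserved words
```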
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Easy to read
#
# Let us compare an example of a for loop in C and Python.
#
# ```C
# /* for loop execution in C*/
# for( a = 10; a < 20; a = a + 1 ){
# printf("value of a: %d\n", a);
# }
# ```
#
# ```python
# # same for loop in Python
# for a in range(10, 20):
# print("value of a", a)
# ```
#
# * Both code blocks use indentation to make the code easy to read. However, in C, indentation is merely good programming style and is not required by the compiler; the code block is encapsulated with curly braces `{}`. In contrast, Python requires that code blocks are indented.
# * Python simply uses the end of a line to terminate a statement; C requires a semicolon.
# * Python uses fewer special symbols (braces, semicolons, ...).
# * The `a in range(10, 20)` reads more like natural language or pseudo code than `( a = 10; a < 20; a = a + 1 )`.
# + slideshow={"slide_type": "subslide"}
# Now, lets run the Python loop
for a in range(10, 20):
print("value of a:", a)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Indentation
#
# Hence, indentation matters! Code blocks are introduced by a colon `:` and indented by (commonly) four spaces. However, you are free to use any consistent indentation.
#
# The two versions of the for loop below produce different results because of indentation: in the second version, `print(y)` is not indented and is therefore executed only once, after the loop.
# + slideshow={"slide_type": "fragment"}
y = 0
for i in range (0, 3):
y += i
print(y)
# + slideshow={"slide_type": "fragment"}
y = 0
for i in range (0, 3):
y += i
print(y)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Comments
#
# In-line comments are marked with a hash `#`. Multi-line comments are not supported by Python, but you can use triple-quoted strings (`"""` or `'''`) instead.
# + slideshow={"slide_type": "fragment"}
""" Here a comment
with line break."""
''' Here another comment
over two lines.'''
some_var = 1
# and here a comment in one line
some_other_var = 2 # and here another comment
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Line continuation
#
# If a line of code is too long, put a `\` at the end as a line wrap, or encapsulate the expression in parentheses with indentation.
# + slideshow={"slide_type": "fragment"}
# Here some long equation
complex = 1 + 4 - 8 +\
2 - 7
# + slideshow={"slide_type": "fragment"}
# Or the alternative version with braces
complex = (1 + 4 - 8 +
2 - 7)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Multiple statements
#
# You can either enter one statement after the other or use a semicolon to add them in one line.
# However, multiple statements in one line are discouraged by Python style guides.
# + slideshow={"slide_type": "fragment"}
a = [1, 2, 3]
b = "my string"
# is the same as
a = [1, 2, 3]; b = "my string"
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Whitespace
#
# As said, whitespace is used for indentation. Within lines, whitespace does not make a difference.
# + slideshow={"slide_type": "fragment"}
my_var = 1 + 3
# this is interpreted in the same way
my_var = 1 + 3
print(my_var)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Exercise: Try syntax (10 minutes)
#
# <img src="../_static/exercise.png" alt="Exercise" width="75" align="left">
#
# Please try out what you have seen by yourself.
#
# To solve this task you have (at least) three options to run your code:
#
# * Write a Python file and call it through IPython with `%run yourfile.py`.
# * Use the interactive Jupyter Notebooks of this course on [Binder](https://mybinder.org/v2/gh/StephanRhode/py-algorithms-4-automotive-engineering/master).
# * Use the IPython cloud service [Python anywhere](https://www.pythonanywhere.com/try-ipython/). Note that you need the magic command `%cpaste` to be able to type multiple lines in IPython.
#
# Here the task:
#
# * Write a for loop which adds the loop index to a defined constant.
# * Try to include one line comments and multi line comments.
# * Use `\` or `()` for line continuation.
# * Play with indentation (also try to add wrong indentation) and whitespace.
# * Add the print command.
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Solution
#
# Please find one possible solution in [`solution_syntax.py`](solution_syntax.py) file.
# + slideshow={"slide_type": "fragment"}
# %run solution_syntax.py
| 01_basic-python/00_syntax.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Clustering and word2vec
# Based on the methodological elements and technical lessons presented in the lecture, this practical session asks you to:
# - perform a clustering of the bulletins for a decade of your choice and interpret the results
# - train a word2vec model on the full set of bulletins and explore the relations between vectors
#
# To do so, you will use various Python libraries seen in class, such as scikit-learn and gensim.
# #### Required libraries
# +
import collections
import os
import string
import sys
import pandas as pd
import nltk
nltk.download('punkt')
from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.tokenize import wordpunct_tokenize
from unidecode import unidecode
from pprint import pprint
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cosine
from gensim.models.phrases import Phrases, Phraser
from gensim.models import Word2Vec
# + [markdown] tags=[]
# ## 1. Clustering
# -
# #### Choose a decade and load the files
data_path = "../data/txt/"
DECADE = '1880'
files = [f for f in sorted(os.listdir(data_path)) if f"_{DECADE[:-1]}" in f]
# check the files
files[:5]
texts = [open(data_path + f).read() for f in files]
# explore one text
texts[9][:400]
# #### Vectorize the files
# Create a preprocessing function
def preprocessing(text, stem=True):
    """Tokenize text and remove punctuation."""
    # str.translate needs a translation table; build one that deletes punctuation
    text = text.translate(str.maketrans('', '', string.punctuation))
    tokens = word_tokenize(text)
    return tokens
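# Note that `str.translate` expects a translation table rather than a raw string of characters. A standalone sketch of building a punctuation-deleting table with `str.maketrans`:

```python
import string

# Map every ASCII punctuation character to None, i.e. delete it.
table = str.maketrans('', '', string.punctuation)
print("Hello, world!".translate(table))  # Hello world
```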
# instantiate the model
vectorizer = TfidfVectorizer(
tokenizer=preprocessing,
stop_words=stopwords.words('french'),
max_df=0.5,
min_df=0.1,
lowercase=True)
# build the matrix
# %time tfidf_vectors = vectorizer.fit_transform(texts)
# vector of the first document
pd.Series(
tfidf_vectors[0].toarray()[0],
index=vectorizer.get_feature_names_out()
).sort_values(ascending=False)
# convert to an array to run some checks
tfidf_array = tfidf_vectors.toarray()
# #### Apply a clustering algorithm to the documents' TF-IDF vectors
N_CLUSTERS = 4
# #### Instantiate the K-Means model with its arguments
km_model = KMeans(n_clusters=N_CLUSTERS, random_state = 42)
# #### Apply the clustering with the `fit_predict` function
clusters = km_model.fit_predict(tfidf_vectors)
# +
clustering = collections.defaultdict(list)
for idx, label in enumerate(clusters):
clustering[label].append(files[idx])
# -
# #### Reduce the vectors to 2 dimensions with the PCA algorithm
pca = PCA(n_components=2)
reduced_vectors = pca.fit_transform(tfidf_vectors.toarray())
# ### Generate the plot
# +
x_axis = reduced_vectors[:, 0]
y_axis = reduced_vectors[:, 1]
plt.figure(figsize=(10,10))
scatter = plt.scatter(x_axis, y_axis, s=100, c=clusters)
# Add the centroids
centroids = pca.transform(km_model.cluster_centers_)
plt.scatter(centroids[:, 0], centroids[:, 1], marker = "x", s=100, linewidths = 2, color='black')
# Add the legend
plt.legend(handles=scatter.legend_elements()[0], labels=set(clusters), title="Clusters")
# -
pprint(dict(clustering))
# Overall, the number of clusters seems appropriate. We notice that the Laeken files make up most of clusters [2] and [3]. However, these clusters are quite dispersed, so I decide to remove them from my selection in order to explore the Brussels clusters more finely.
# #### Selection of the Brussels files only
files = [f for f in sorted(os.listdir(data_path)) if f"Bxl_{DECADE[:-1]}" in f]
texts = [open(data_path + f).read() for f in files]
vectorizer = TfidfVectorizer(
tokenizer=preprocessing,
stop_words=stopwords.words('french'),
max_df=0.5,
min_df=0.1,
lowercase=True)
# %time tfidf_vectors = vectorizer.fit_transform(texts)
N_CLUSTERS = 4
km_model = KMeans(n_clusters=N_CLUSTERS, random_state = 42)
clusters = km_model.fit_predict(tfidf_vectors)
# +
clustering = collections.defaultdict(list)
for idx, label in enumerate(clusters):
clustering[label].append(files[idx])
# -
pca = PCA(n_components=2)
reduced_vectors = pca.fit_transform(tfidf_vectors.toarray())
# +
x_axis = reduced_vectors[:, 0]
y_axis = reduced_vectors[:, 1]
plt.figure(figsize=(10,10))
scatter = plt.scatter(x_axis, y_axis, s=100, c=clusters)
# Add the centroids
centroids = pca.transform(km_model.cluster_centers_)
plt.scatter(centroids[:, 0], centroids[:, 1], marker = "x", s=100, linewidths = 2, color='black')
# Add the legend
plt.legend(handles=scatter.legend_elements()[0], labels=set(clusters), title="Clusters")
# -
pprint(dict(clustering))
# We notice that cluster [1] is again highly dispersed. Looking at the names of the files in this cluster, we see that the years it covers all fall at the end of the decade, whereas the other clusters cover roughly the whole range of years. It could be interesting to build a word cloud for cluster [1] to explore its topics, since they seem to stand apart from the rest.
# ### Clustering a cluster
# #### Exploring cluster [1]
# +
files_1 = [f for f in sorted(os.listdir(data_path)) if f in clustering[1]]
for f in clustering[1]:
texts_1 = [open(data_path + f).read() for f in files_1]
texts_1[0][:400]
# +
def preprocessing(text, stem=True):
    """Tokenize text and remove punctuation."""
    # str.translate needs a translation table; build one that deletes punctuation
    text = text.translate(str.maketrans('', '', string.punctuation))
    tokens = word_tokenize(text)
    return tokens
vectorizer_1 = TfidfVectorizer(
tokenizer=preprocessing,
stop_words=stopwords.words('french'),
max_df=0.5,
min_df=0.1,
lowercase=True)
tfidf_vectors_1 = vectorizer_1.fit_transform(texts_1)
pd.Series(
tfidf_vectors_1[0].toarray()[0],
index=vectorizer_1.get_feature_names_out()
).sort_values(ascending=False)
# +
N_CLUSTERS = 4
# instantiate the KMeans model
km_model = KMeans(n_clusters=N_CLUSTERS, random_state = 42)
# apply the clustering
clusters_1 = km_model.fit_predict(tfidf_vectors_1)
clustering_1 = collections.defaultdict(list)
for idx, label in enumerate(clusters_1):
clustering_1[label].append(files_1[idx])
# reduce the dimensions
pca = PCA(n_components=2)
reduced_vectors_1 = pca.fit_transform(tfidf_vectors_1.toarray())
# generate the plot
x_axis = reduced_vectors_1[:, 0]
y_axis = reduced_vectors_1[:, 1]
plt.figure(figsize=(10,10))
scatter = plt.scatter(x_axis, y_axis, s=100, c=clusters_1)
# Add the centroids
centroids = pca.transform(km_model.cluster_centers_)
plt.scatter(centroids[:, 0], centroids[:, 1], marker = "x", s=100, linewidths = 2, color='black')
# Add the legend
plt.legend(handles=scatter.legend_elements()[0], labels=set(clusters_1), title="Clusters")
# -
pprint(dict(clustering_1))
# #### Exploring cluster [0]
# +
files_0 = [f for f in sorted(os.listdir(data_path)) if f in clustering[0]]
for f in clustering[0]:
texts_0 = [open(data_path + f).read() for f in files_0]
texts_0[0][:400]
# +
def preprocessing(text, stem=True):
    """Tokenize text and remove punctuation."""
    # str.translate needs a translation table; build one that deletes punctuation
    text = text.translate(str.maketrans('', '', string.punctuation))
    tokens = word_tokenize(text)
    return tokens
vectorizer_0 = TfidfVectorizer(
tokenizer=preprocessing,
stop_words=stopwords.words('french'),
max_df=0.5,
min_df=0.1,
lowercase=True)
tfidf_vectors_0 = vectorizer_0.fit_transform(texts_0)
pd.Series(
tfidf_vectors_0[0].toarray()[0],
index=vectorizer_0.get_feature_names_out()
).sort_values(ascending=False)
# +
N_CLUSTERS = 4
# instantiate the KMeans model
km_model = KMeans(n_clusters=N_CLUSTERS, random_state = 42)
# apply the clustering
clusters_0 = km_model.fit_predict(tfidf_vectors_0)
clustering_0 = collections.defaultdict(list)
for idx, label in enumerate(clusters_0):
clustering_0[label].append(files_0[idx])
# reduce the dimensions
pca = PCA(n_components=2)
reduced_vectors_0 = pca.fit_transform(tfidf_vectors_0.toarray())
# generate the plot
x_axis = reduced_vectors_0[:, 0]
y_axis = reduced_vectors_0[:, 1]
plt.figure(figsize=(10,10))
scatter = plt.scatter(x_axis, y_axis, s=100, c=clusters_0)
# Add the centroids
centroids = pca.transform(km_model.cluster_centers_)
plt.scatter(centroids[:, 0], centroids[:, 1], marker = "x", s=100, linewidths = 2, color='black')
# Add the legend
plt.legend(handles=scatter.legend_elements()[0], labels=set(clusters_0), title="Clusters")
# -
pprint(dict(clustering_0))
# + [markdown] tags=[]
# #### First observations
# -
# It is difficult to judge the relevance of the clusters over such a large number of files. For this decade (the 1880s), the dispersion is such that a number of files are bunched up on the left while the rest spread out much more widely. More thorough cleaning of the files would probably help "rebalance" the distribution and produce more clearly separated clusters.
#
# Nevertheless, this clustering technique is useful for grouping files in order to explore shared "topics" based on the word frequencies of the studied set. By grouping the files and processing them with more thorough cleaning functions, it will likely be easier to remove words that are frequent across all the "batches" and thus explore the content more finely. Stopword lists can be built per cluster, helping more informative content stand out.
#
# To validate the method, we would need an easier way to explore the contents, for instance by building word clouds or extracting keywords for each cluster.
# ## 2. Word2Vec
# Before improving the model by successively applying the bigram/trigram/quadrigram/pentagram functions, I first explored various characteristics that determine model quality. The table below shows the results obtained for the same queries, which are modeled on those presented in class.
#
# Model 1 is the one provided initially, applied to part of the corpus (the 'sents' file created with the course notebook). All the other models are based on <NAME>'s file, renamed 'sents_2'. I then varied the vector size (32, 100, or 300), the window size (5, 7, 10, 13, 20, 40), the minimum word frequency, the number of workers, and the number of epochs. This allowed me to compare the models on similar queries applied to all of them.
#
# My main observations are that the number of epochs seems to improve model performance whatever the vector size, while the number of workers seems to decrease precision. Vector size and window size increase the models' sensitivity; they seem to become more "sensitive", more "subtle", but perhaps less "precise". Thus, with a vector size of 100 and a window of 7, the model manages to understand that the pork butcher ("charcutier") is closer to the butcher ("boucher") than the other trades usually returned. Note that with a vector size of 300, the window has to be widened to 13 to reach such a result.
#
# I therefore opt a priori for models whose window size grows with the vector size. However, it remains to be seen whether a more extensive application of the Phraser functions will disturb these results. As for word frequency, I decide to lower the minimum to two, judging that word2vec needs as much information as possible for the analysis, while still excluding words that occur only once, i.e. hapaxes.
#
# All results are available in the module 4 repo, in the two forks of the word embeddings notebook.
# #### Loading the sentences
class MySentences(object):
    """Iterate over a file, yielding lowercased, unaccented token lists per line."""
def __init__(self, filename):
self.filename = filename
def __iter__(self):
for line in open(self.filename, encoding='utf-8', errors="backslashreplace"):
yield [unidecode(w.lower()) for w in wordpunct_tokenize(line)]
infile = f"../data/sents_2.txt"
sentences = MySentences(infile)
# #### Creating the bigrams
bigram_phrases = Phrases(sentences)
len(bigram_phrases.vocab.keys())
# %time bigram_phrases[sentences]
bigram_phraser = Phraser(phrases_model=bigram_phrases)
# %time bigram_phraser[sentences]
trigram_phrases = Phrases(bigram_phraser[sentences])
trigram_phraser = Phraser(phrases_model=trigram_phrases)
quadrigram_phrases = Phrases(trigram_phraser[sentences])
quadrigram_phraser = Phraser(phrases_model=quadrigram_phrases)
pentagram_phrases = Phrases(quadrigram_phraser[sentences])
pentagram_phraser = Phraser(phrases_model=pentagram_phrases)
corpus = list(pentagram_phraser[quadrigram_phraser[trigram_phraser[bigram_phraser[sentences]]]])
print(corpus[:10])
# #### Model 1 of TP3 (300-10)
# %%time
model = Word2Vec(
    corpus, # the corpus of ngrams we just created
    vector_size=300, # number of dimensions the word contexts are reduced to
    window=10, # size of the "context": here 10 words before and after the observed word
    min_count=2, # ignore words that appear fewer than 2 times in the corpus
    workers=4, # parallelize model training over 4 threads
    epochs=10 # number of passes of the neural network over the dataset to fit its parameters by gradient descent
)
# +
# outfile = f"../data/bulletins_tp3_1.model"
# model.save(outfile)
# -
model = Word2Vec.load("../data/bulletins_tp3_1.model")
# + [markdown] tags=[]
# #### Exploring the model
# -
model.wv.similarity("boucher", "boulanger")
model.wv.similarity("homme", "femme")
model.wv.similarity("voiture", "carrosse")
model.wv.similarity("voiture", "chien")
model.wv.most_similar("bruxelles", topn=10)
model.wv.most_similar("boucher", topn=10)
model.wv.most_similar("platonisme", topn=10)
model.wv.most_similar("urinoir", topn=10)
print(model.wv.most_similar(positive=['bruxelles', 'france'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'espagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'allemagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'italie'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'russie'], negative=['belgique']))
print(model.wv.most_similar(positive=['roi', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['fidelite', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['fidelite', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['urinoir', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['enfant', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['enfant', 'femme'], negative=['homme']))
# #### Model 2 of TP3 (32-10)
# %%time
model = Word2Vec(
    corpus, # the corpus of ngrams we just created
    vector_size=32, # number of dimensions the word contexts are reduced to
    window=10, # size of the "context": here 10 words before and after the observed word
    min_count=2, # ignore words that appear fewer than 2 times in the corpus
    workers=4, # parallelize model training over 4 threads
    epochs=10 # number of passes of the neural network over the dataset to fit its parameters by gradient descent
)
# +
# outfile = f"../data/bulletins_tp3_2.model"
# model.save(outfile)
# -
model = Word2Vec.load("../data/bulletins_tp3_2.model")
model.wv.similarity("boucher", "boulanger")
model.wv.similarity("homme", "femme")
model.wv.similarity("voiture", "carrosse")
model.wv.similarity("voiture", "chien")
model.wv.most_similar("bruxelles", topn=10)
model.wv.most_similar("boucher", topn=10)
model.wv.most_similar("platonisme", topn=10)
print(model.wv.most_similar(positive=['bruxelles', 'france'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'espagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'allemagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'italie'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'russie'], negative=['belgique']))
print(model.wv.most_similar(positive=['roi', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['fidelite', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['fidelite', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['urinoir', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['enfant', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['enfant', 'femme'], negative=['homme']))
# #### Model 3 from tp3 (100-10)
# %%time
model = Word2Vec(
    corpus,           # the corpus of n-grams built above
    vector_size=100,  # number of dimensions the word contexts are reduced to
    window=10,        # size of the "context": 10 words before and after the observed word
    min_count=2,      # ignore words appearing fewer than 2 times in the corpus
    workers=4,        # parallelize training across 4 threads
    epochs=10         # number of passes over the dataset to fit the parameters by gradient descent
)
# +
# outfile = f"../data/bulletins_tp3_3.model"
# model.save(outfile)
# -
model = Word2Vec.load("../data/bulletins_tp3_3.model")
model.wv.similarity("boucher", "boulanger")
model.wv.similarity("homme", "femme")
model.wv.similarity("homme", "individu")
model.wv.similarity("bon", "mechant")
model.wv.similarity("beau", "vilain")
model.wv.similarity("noir", "blanc")
model.wv.similarity("voiture", "carrosse")
model.wv.similarity("voiture", "chien")
model.wv.most_similar("bruxelles", topn=10)
model.wv.most_similar("boucher", topn=10)
model.wv.most_similar("platonisme", topn=10)
print(model.wv.most_similar(positive=['bruxelles', 'france'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'espagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'allemagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'italie'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'russie'], negative=['belgique']))
print(model.wv.most_similar(positive=['fidelite', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['fidelite', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['urinoir', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['enfant', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['enfant', 'femme'], negative=['homme']))
# #### Model 4 from tp3 (100-13)
# %%time
model = Word2Vec(
    corpus,           # the corpus of n-grams built above
    vector_size=100,  # number of dimensions the word contexts are reduced to
    window=13,        # size of the "context": 13 words before and after the observed word
    min_count=2,      # ignore words appearing fewer than 2 times in the corpus
    workers=4,        # parallelize training across 4 threads
    epochs=10         # number of passes over the dataset to fit the parameters by gradient descent
)
# +
# outfile = f"../data/bulletins_tp3_4.model"
# model.save(outfile)
# -
model = Word2Vec.load("../data/bulletins_tp3_4.model")
model.wv.similarity("boucher", "boulanger")
model.wv.similarity("homme", "femme")
model.wv.similarity("voiture", "carrosse")
model.wv.similarity("voiture", "chien")
model.wv.most_similar("bruxelles", topn=10)
model.wv.most_similar("boucher", topn=10)
model.wv.most_similar("platonisme", topn=10)
print(model.wv.most_similar(positive=['bruxelles', 'france'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'espagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'allemagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'italie'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'russie'], negative=['belgique']))
print(model.wv.most_similar(positive=['fidelite', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['fidelite', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['urinoir', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['enfant', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['enfant', 'femme'], negative=['homme']))
# #### Model 5 from tp3 (100-7)
# %%time
model = Word2Vec(
    corpus,           # the corpus of n-grams built above
    vector_size=100,  # number of dimensions the word contexts are reduced to
    window=7,         # size of the "context": 7 words before and after the observed word
    min_count=2,      # ignore words appearing fewer than 2 times in the corpus
    workers=4,        # parallelize training across 4 threads
    epochs=10         # number of passes over the dataset to fit the parameters by gradient descent
)
# +
# outfile = f"../data/bulletins_tp3_5.model"
# model.save(outfile)
# -
model = Word2Vec.load("../data/bulletins_tp3_5.model")
model.wv.similarity("boucher", "boulanger")
model.wv.similarity("homme", "femme")
model.wv.similarity("voiture", "carrosse")
model.wv.similarity("voiture", "chien")
model.wv.most_similar("bruxelles", topn=10)
model.wv.most_similar("boucher", topn=10)
model.wv.most_similar("platonisme", topn=10)
print(model.wv.most_similar(positive=['bruxelles', 'france'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'espagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'allemagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'italie'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'russie'], negative=['belgique']))
print(model.wv.most_similar(positive=['fidelite', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['fidelite', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['urinoir', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['enfant', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['enfant', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['roi', 'femme'], negative=['homme']))
model.wv.most_similar("reine", topn=10)
# #### Model 6 from tp3 (32-7)
# %%time
model = Word2Vec(
    corpus,          # the corpus of n-grams built above
    vector_size=32,  # number of dimensions the word contexts are reduced to
    window=7,        # size of the "context": 7 words before and after the observed word
    min_count=2,     # ignore words appearing fewer than 2 times in the corpus
    workers=4,       # parallelize training across 4 threads
    epochs=10        # number of passes over the dataset to fit the parameters by gradient descent
)
outfile = "../data/bulletins_tp3_6.model"
model.save(outfile)
model = Word2Vec.load("../data/bulletins_tp3_6.model")
model.wv.similarity("boucher", "boulanger")
model.wv.similarity("homme", "femme")
model.wv.similarity("voiture", "carrosse")
model.wv.most_similar("bruxelles", topn=10)
model.wv.most_similar("boucher", topn=10)
model.wv.most_similar("platonisme", topn=10)
model.wv.most_similar("urinoir", topn=10)
print(model.wv.most_similar(positive=['bruxelles', 'france'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'espagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'allemagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'italie'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'russie'], negative=['belgique']))
print(model.wv.most_similar(positive=['fidelite', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['fidelite', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['urinoir', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['enfant', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['enfant', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['roi', 'femme'], negative=['homme']))
# + [markdown] tags=[]
# #### Model 7 from tp3 (32-13)
# -
# %%time
model = Word2Vec(
    corpus,          # the corpus of n-grams built above
    vector_size=32,  # number of dimensions the word contexts are reduced to
    window=13,       # size of the "context": 13 words before and after the observed word
    min_count=2,     # ignore words appearing fewer than 2 times in the corpus
    workers=4,       # parallelize training across 4 threads
    epochs=10        # number of passes over the dataset to fit the parameters by gradient descent
)
outfile = "../data/bulletins_tp3_7.model"
model.save(outfile)
model = Word2Vec.load("../data/bulletins_tp3_7.model")
model.wv.similarity("boucher", "boulanger")
model.wv.similarity("homme", "femme")
model.wv.similarity("voiture", "carrosse")
model.wv.most_similar("bruxelles", topn=10)
model.wv.most_similar("boucher", topn=10)
model.wv.most_similar("platonisme", topn=10)
model.wv.most_similar("urinoir", topn=10)
print(model.wv.most_similar(positive=['bruxelles', 'france'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'espagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'allemagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'italie'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'russie'], negative=['belgique']))
print(model.wv.most_similar(positive=['fidelite', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['fidelite', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['urinoir', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['enfant', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['enfant', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['roi', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['reine', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['fidelite', 'joie'], negative=['obeissance']))
# #### Model 8 from tp3 (300-13)
# %%time
model = Word2Vec(
    corpus,           # the corpus of n-grams built above
    vector_size=300,  # number of dimensions the word contexts are reduced to
    window=13,        # size of the "context": 13 words before and after the observed word
    min_count=2,      # ignore words appearing fewer than 2 times in the corpus
    workers=4,        # parallelize training across 4 threads
    epochs=10         # number of passes over the dataset to fit the parameters by gradient descent
)
outfile = "../data/bulletins_tp3_8.model"
model.save(outfile)
model = Word2Vec.load("../data/bulletins_tp3_8.model")
model.wv.similarity("boucher", "boulanger")
model.wv.similarity("homme", "femme")
model.wv.similarity("voiture", "carrosse")
model.wv.most_similar("bruxelles", topn=10)
model.wv.most_similar("boucher", topn=10)
model.wv.most_similar("platonisme", topn=10)
model.wv.most_similar("urinoir", topn=10)
print(model.wv.most_similar(positive=['bruxelles', 'france'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'espagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'allemagne'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'italie'], negative=['belgique']))
print(model.wv.most_similar(positive=['bruxelles', 'russie'], negative=['belgique']))
print(model.wv.most_similar(positive=['fidelite', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['fidelite', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['urinoir', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['enfant', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['enfant', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['roi', 'femme'], negative=['homme']))
print(model.wv.most_similar(positive=['reine', 'homme'], negative=['femme']))
print(model.wv.most_similar(positive=['fidelite', 'joie'], negative=['obeissance']))
# #### Analysis of the models based on pentagrams
# 
# We notice that, for the models built on the five-level iterative Phrasers, the models seem to become more subtle: the word 'charcutier' climbs faster in the results, and model tp3_3 even identifies the capital of France, Paris. Note that model tp3_1 identifies 'aristotélisme' as close to 'platonisme', but it seems to lose precision on the complex query testing the capital.
#
# I therefore decide to explore these two models in more depth with the 'similarity' and 'most_similar' functions.
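The analogy queries used throughout this notebook come down to simple vector arithmetic over the embeddings. A simplified sketch of what `most_similar(positive=…, negative=…)` computes, using hand-made toy vectors rather than the trained models (the vocabulary and 3-d vectors below are made up purely for illustration; gensim additionally unit-normalizes the vectors before combining them):

```python
import numpy as np

# Toy embedding table (made-up 3-d vectors, not the trained model).
vectors = {
    "bruxelles": np.array([1.0, 0.1, 0.0]),
    "belgique":  np.array([0.9, 0.0, 0.1]),
    "france":    np.array([0.0, 0.9, 0.1]),
    "paris":     np.array([0.1, 1.0, 0.0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def most_similar(positive, negative, topn=1):
    # Sum the positive vectors, subtract the negative ones,
    # then rank the remaining vocabulary by cosine similarity.
    query = sum(vectors[w] for w in positive) - sum(vectors[w] for w in negative)
    exclude = set(positive) | set(negative)
    scores = [(w, cosine(query, v)) for w, v in vectors.items() if w not in exclude]
    return sorted(scores, key=lambda p: -p[1])[:topn]

print(most_similar(positive=["bruxelles", "france"], negative=["belgique"]))
```

With these toy vectors, `bruxelles + france - belgique` lands exactly on the `paris` vector, which mirrors the "capital" analogy tested above.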
# #### Loading the two models
model_100 = Word2Vec.load("../data/bulletins_tp3_3.model")
model_300 = Word2Vec.load("../data/bulletins_tp3_1.model")
# #### Exploring the 'similarity' function
model_100.wv.similarity("pompier", "officier")
model_300.wv.similarity("pompier", "officier")
model_100.wv.similarity("maladie", "convalescence")
model_300.wv.similarity("maladie", "convalescence")
model_100.wv.similarity("maladie", "traitement")
model_300.wv.similarity("maladie", "traitement")
model_100.wv.similarity("ville", "campagne")
model_300.wv.similarity("ville", "campagne")
model_100.wv.similarity("blanc", "noir")
model_300.wv.similarity("blanc", "noir")
# We notice that the similarity scores are ambiguous: they can be high or low for words that are a priori opposites yet close in nature (e.g. colour, for the black/white opposition). The score is therefore better used to check the model's grasp of words considered close in French. In my view, complex queries are a better test of the distance between unrelated words.
# #### Exploring the 'most_similar' function
# I use this function on frequent words that came up in tp2, in order to explore their "meaning" within our corpus.
model_100.wv.most_similar("assitance", topn=10)
model_300.wv.most_similar("assitance", topn=10)
model_100.wv.most_similar("entretien", topn=10)
model_300.wv.most_similar("entretien", topn=10)
model_100.wv.most_similar("cours", topn=10)
model_300.wv.most_similar("cours", topn=10)
# #### Complex queries
print(model_100.wv.most_similar(positive=['convalescence', 'credit'], negative=['maladie']))
print(model_300.wv.most_similar(positive=['convalescence', 'credit'], negative=['maladie']))
print(model_100.wv.most_similar(positive=['chien', 'enfant'], negative=['chat']))
print(model_300.wv.most_similar(positive=['chien', 'enfant'], negative=['chat']))
print(model_100.wv.most_similar(positive=['noir', 'enfant'], negative=['blanc']))
print(model_300.wv.most_similar(positive=['noir', 'enfant'], negative=['blanc']))
print(model_100.wv.most_similar(positive=['securite', 'campagne'], negative=['ville']))
print(model_300.wv.most_similar(positive=['securite', 'campagne'], negative=['ville']))
print(model_100.wv.most_similar(positive=['sante', 'campagne'], negative=['ville']))
print(model_300.wv.most_similar(positive=['sante', 'campagne'], negative=['ville']))
print(model_100.wv.most_similar(positive=['fidelite', 'joie'], negative=['obeissance']))
print(model_300.wv.most_similar(positive=['fidelite', 'joie'], negative=['obeissance']))
print(model_100.wv.most_similar(positive=['fidelite', 'joie'], negative=['infidelite']))
print(model_300.wv.most_similar(positive=['fidelite', 'joie'], negative=['infidelite']))
print(model_100.wv.most_similar(positive=['officier', 'femme'], negative=['soldat']))
print(model_300.wv.most_similar(positive=['officier', 'femme'], negative=['soldat']))
#
# For many queries, model_100 seems more precise and gives better results (e.g. noir+enfant-blanc, convalescence+crédit-maladie). Its top hits are often telling and nuanced (e.g. fidélité+joie-[obéissance or infidélité]), whereas model_300 seems less subtle.
#
# Recall also that model_100 correctly identified the capital of France, unlike model_300.
#
# I would therefore opt for model_100 if I had to choose one to use going forward.
| tp4/drafts/draft_tp3_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import scrapbook as sb
import smtplib
from email.mime.text import MIMEText
from email.utils import COMMASPACE
nb = sb.read_notebook('output/Transform.ipynb')
nb.scraps['HTML_Report'].data
GMAIL_USER = '<EMAIL>'
# + tags=["parameters"]
RECIPIENT_EMAIL = '<EMAIL>'
GMAIL_PWD = '<PASSWORD>'
# +
SUBJECT = 'Top 20 Ohio School Districts by Median Salary'
TO = RECIPIENT_EMAIL
BODY = nb.scraps['HTML_Report'].data + '\n\nBrought to you by a Python script!'
def sendEmail(sender, pwd, to, subject, message):
    recipient = to if isinstance(to, list) else [to]
msg = MIMEText(message)
msg['Subject'] = subject
msg['From'] = sender
msg['To'] = COMMASPACE.join(recipient)
server = smtplib.SMTP('smtp.gmail.com:587')
server.ehlo()
server.starttls()
try:
server.login(sender,pwd)
print('Successfully authenticated...')
except smtplib.SMTPAuthenticationError: # Check for authentication error
return " Authentication ERROR"
try:
server.sendmail(sender,recipient,msg.as_string())
print('Email sent!')
except smtplib.SMTPRecipientsRefused: # Check if recipient's email was accepted by the server
return "ERROR"
server.quit()
# -
sendEmail(GMAIL_USER, GMAIL_PWD, TO, SUBJECT, BODY)
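Since the scrap body is an HTML report, the plain-text `MIMEText` used above may render as raw markup in some mail clients. A small variant (the helper name is mine, not part of this pipeline) that marks the payload as HTML instead:

```python
from email.mime.text import MIMEText

def make_html_message(sender, recipient, subject, html_body):
    # The 'html' subtype tells mail clients to render the body as HTML.
    msg = MIMEText(html_body, 'html')
    msg['Subject'] = subject
    msg['From'] = sender
    msg['To'] = recipient
    return msg

msg = make_html_message('a@example.com', 'b@example.com', 'Report', '<h1>Top 20</h1>')
print(msg.get_content_type())  # text/html
```

The resulting message can be passed to `server.sendmail(...)` via `msg.as_string()` exactly as in `sendEmail` above.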
| jupyter_notebooks/papermill/etl/Load.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Agenda
# Install Matplotlib
# Explain np.meshgrid() with small examples
# Plot a linear function heatmap
# Add title and colorbar
# Plot a cos function heatmap
# -
import numpy as np
import matplotlib.pyplot as plt
# Explain np.meshgrid() with small examples
x = np.arange(3)
y = np.arange(4, 8)
print(x, y)
x2, y2 = np.meshgrid(x, y)
print(x2)
print()
print(y2)
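The printed grids follow a simple rule: under the default `indexing='xy'`, both outputs have shape `(len(y), len(x))`, each row of `x2` repeats `x`, and each column of `y2` repeats `y`. A self-contained sanity check of that rule:

```python
import numpy as np

x = np.arange(3)        # [0, 1, 2]
y = np.arange(4, 8)     # [4, 5, 6, 7]
x2, y2 = np.meshgrid(x, y)  # default indexing='xy'

# Both grids have shape (len(y), len(x)) = (4, 3).
print(x2.shape, y2.shape)

print(x2[0])     # each row of x2 repeats x: [0 1 2]
print(y2[:, 0])  # each column of y2 repeats y: [4 5 6 7]
```

(`indexing='ij'` would transpose both grids, which matters when mixing meshgrid output with matrix-style indexing.)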
z = 2 * x2 + 3 * y2
print(z)
# Plot a linear function heatmap
plt.imshow(z)
# Add title and colorbar
plt.title("Plot of 2x + 3y")
plt.colorbar()
# Plot a cos function heatmap
z2 = np.cos(x2) + np.cos(y2)
print(z2)
plt.imshow(z2)
plt.title("Plot cos(x2) + cos(y2)")
plt.colorbar()
# Save the figure
plt.savefig("Cos Plot.png")
| Week 1/Section 2/Statistical Processing and Graphical Sketches/Statistical Processing and Graphical Sketches.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ## PHYS 105A: Introduction to Scientific Computing
#
# # Minimization or Maximization
#
# <NAME>
# + [markdown] slideshow={"slide_type": "slide"}
# ## Problem Definition
#
# * We learned about root finding last time. This week we will study a related topic: minimization and maximization of functions.
#
# * The problem is similar, given a function $f(x)$, where $x$ may be a vector, we want to solve for the value of $x$ such that $f(x)$ is locally or globally a minimum or maximum.
#
# * Solving this seemingly simple problem turns out to be EXTREMELY powerful!
# + [markdown] slideshow={"slide_type": "slide"}
# ## Applications
#
# * In theoretical physics, the *Action Principle* states that: the path taken by the system is the one for which the action is stationary (no change) to first order.
#
# 
#
# * In experimental physics, the *measured values* of your experiment are the solution that [maximizes a likelihood function](https://github.com/uarizona-2021spring-phys105a/phys105a/blob/main/06/dataproc.ipynb).
#
# * In Very-Long-Baseline Interferometry, the [images you saw](https://eventhorizontelescope.org/blog/astronomers-image-magnetic-fields-edge-m87s-black-hole) are the maximum entropy images that fit the data.
#
# * In most production-ready [machine learning applications](https://playground.tensorflow.org/), the machine learning models are likewise minimizers of some loss function.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Optimization
#
# * In general, minimizing or maximizing a function falls into a field of mathematics called optimization.
#
# * Because so many problems can be cast as optimization problems, it is an extremely active field with many applications.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Methods
#
# * For one-dimension minimization problems, there are two major classes of methods:
#
# * Methods that require only evaluation of the function
#
# * Methods that require also evaluations of the derivative of the function.
#
# * The first class of methods is similar to the bisection method we learned last week for root finding. The second class is similar to the Newton-Raphson method that we skipped last time.
#
# * For multivariable problems, there is one more class of methods, in which the derivatives are approximated using finite differences.
#
# * Unlike in root finding, the methods that require derivatives are easier to understand, so we will learn them in this lecture.
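The finite-difference idea mentioned above can be sketched in a few lines: approximate each partial derivative by a central difference, so no analytic derivative is needed. (This helper is my own illustration, not part of the lecture code; the step size `h` is an assumed value that trades truncation error against round-off.)

```python
def grad_fd(f, x, h=1e-6):
    """Central-difference gradient of f at the point x (a list of floats)."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h   # nudge coordinate i up
        xm = list(x); xm[i] -= h   # nudge coordinate i down
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

# Example: f(x, y) = x**2 + 3*y has gradient (2x, 3).
f = lambda v: v[0]**2 + 3 * v[1]
print(grad_fd(f, [2.0, 1.0]))  # approximately [4.0, 3.0]
```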
# + [markdown] slideshow={"slide_type": "slide"}
# ## Graphical Search for Extrema
#
# * Just like looking for roots, it is always a good idea to plot a function.
#
# * We already know how to plot functions in python
#
# * In fact, let's use the same polynomial we used last time.
# + slideshow={"slide_type": "slide"}
import numpy as np
from matplotlib import pyplot as plt
def h(x):
return x**5 - 16*x**3 + 32*x + 4
x = np.linspace(-4, 4, 8001)
plt.plot(x, h(x))
plt.axhline(y=0, color='k', lw=0.5)
# The extrema are at:
#   Endpoints (global on [-4, 4]): minimum at x = -4, maximum at x = +4
#   Local minima:  x ≈ -0.85 and x ≈ 2.98
#   Local maxima:  x ≈ -2.98 and x ≈ 0.85
# + [markdown] slideshow={"slide_type": "slide"}
# ## Gradient Descent Method
#
# * There are many methods for solving the minimum once we have the derivative.
#
# * They differ in speed and complexity. One of the simplest and easiest to implement is gradient descent.
#
# * The idea is very simple:
#
# * Evaluate the function's derivative at a given point.
#
# * Step toward the "downhill" direction.
#
# * Repeat until the derivative is small enough that you are near a minimum.
# + slideshow={"slide_type": "slide"}
def minimize(f, f_x, x, alpha, acc=1e-3, nmax=1000):
for _ in range(nmax):
y = f(x)
y_x = f_x(x)
if abs(y_x) <= acc:
return x
x -= alpha * y_x
raise Exception("Too many iterations")
# + slideshow={"slide_type": "slide"}
# Let's test it
#def h(x):
# return x**5 - 16*x**3 + 32*x + 4
def h_x(x):
return 5 * x**4 - 48 * x**2 + 32
m0 = minimize(h, h_x, 0, 1e-3)
m1 = minimize(h, h_x, 2, 1e-3)
x = np.linspace(-4, 4, 8001)
plt.plot(x, h(x))
plt.axhline(y=0, color='k', lw=0.5)
plt.axvline(x=m0, color='r', lw=0.5)
plt.axvline(x=m1, color='r', lw=0.5)
# + slideshow={"slide_type": "slide"}
# Finding maximum is also easy
def maximize(f, f_x, x, alpha, **kwargs):
def nf(x):
return -f(x)
def nf_x(x):
        return -f_x(x)
return minimize(nf, nf_x, x, alpha, **kwargs)
# + slideshow={"slide_type": "slide"}
# Let's also test it
M0 = maximize(h, h_x, -2, 1e-3)
M1 = maximize(h, h_x, 0, 1e-3)
x = np.linspace(-4, 4, 8001)
plt.plot(x, h(x))
plt.axhline(y=0, color='k', lw=0.5)
plt.axvline(x=M0, color='r', lw=0.5)
plt.axvline(x=M1, color='r', lw=0.5)
# + [markdown] slideshow={"slide_type": "slide"}
# ## How Good is our Optimizer?
#
# * Within an hour, we implemented a gradient descent method. This is AWESOME because we can now solve problems that even the greatest mathematicians cannot solve by hand!
#
# * However, this is too good to be true. While gradient descent is a generic enough method, there are many traps that we may fall into that break our optimizer.
#
# * Instead of learning a more complicated optimization algorithm, let's try to break our gradient descent method and then fix it.
# + slideshow={"slide_type": "slide"}
# We use 1e-3 as the step size; what if we change it to a larger step?
m1 = minimize(h, h_x, 2, 1e-2)
# + slideshow={"slide_type": "slide"}
# What's going on?
# To understand why the algorithm breaks, let's modify our
# gradient descent method to output more information.
def minimize(f, f_x, x, alpha, acc=1e-3, nmax=1000):
l = np.array([x])
for _ in range(nmax):
y = f(x)
y_x = f_x(x)
if abs(y_x) <= acc:
return l
x -= alpha * y_x
l = np.append(l, x)
raise Exception("Too many iterations", l)
# + slideshow={"slide_type": "slide"}
try:
l1 = minimize(h, h_x, 2, 1e-2)
except Exception as e:
print('Failed')
l1 = e.args[1]
print(l1)
# + slideshow={"slide_type": "slide"}
n = 4
x = np.linspace(-4, 4, 8001)
plt.plot(x, h(x))
plt.plot(l1[:n], h(l1[:n]), 'o-')
plt.axhline(y=0, color='k', lw=0.5)
plt.xlim(2, 3.5)
plt.ylim(-100, -50)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Take Away
#
# * For simple gradient descent, the step size is important.
#
# * If a step is too big, you may end up jumping around the extremum without ever reaching the accuracy requirement.
#
# * Adjusting the step size to reach fast and accurate convergence is a form of *hyperparameter tuning*.
#
# * There are many algorithms to automatically adjust the step size.
#
# * However, for this course, we may simply adjust the step size by visually inspecting the results.
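The step-size effect is easy to see on the toy function f(x) = x², where f'(x) = 2x: each gradient step multiplies x by (1 - 2·alpha), so the iteration converges only for 0 < alpha < 1 and diverges beyond that. A minimal sketch (plain Python, independent of the minimizer defined above):

```python
def descend(x, alpha, steps=50):
    # Gradient descent on f(x) = x**2, whose derivative is 2*x.
    for _ in range(steps):
        x -= alpha * 2 * x
    return x

print(abs(descend(2.0, 0.1)))   # tiny: converges toward the minimum at 0
print(abs(descend(2.0, 1.5)))   # enormous: the iterate overshoots and diverges
```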
# + [markdown] slideshow={"slide_type": "slide"}
# ## Moving on to Multidimensional Problems
#
# * Because gradient descent is so simple, we can trivially generalize it to multiple variables.
#
# * This will enable us to solve much more interesting problems, including curve fitting and even physics problems that use the action principle.
#
# * The basic idea is that, instead of moving left or right in one dimension, the vector of partial derivatives gives you the gradient of the function.
#
# * For simplicity, we will move in one direction at a time.
# + slideshow={"slide_type": "slide"}
# Let's implement a two-dimension minimizer
def minimize(f, f_x, f_y, x, y, alpha, acc=1e-3, nmax=1000):
l = np.array([x, y])
for i in range(nmax):
z = f(x, y)
z_x = f_x(x, y)
z_y = f_y(x, y)
if z_x*z_x + z_y*z_y <= acc * acc:
return l
if i % 2 == 0:
x -= alpha * z_x
else:
y -= alpha * z_y
l = np.vstack((l, [x, y]))
raise Exception("Too many iterations", l)
# + slideshow={"slide_type": "slide"}
# Test our implementation
def g(x, y):
return (x - 2)**2 + (y - 1)**2
def g_x(x, y):
return 2 * (x - 2)
def g_y(x, y):
return 2 * (y - 1)
try:
l1 = minimize(g, g_x, g_y, 0, 0, 0.1)
except Exception as e:
print('Failed')
l1 = e.args[1]
print(l1[-1,:])
plt.plot(l1[:,0], l1[:,1])
# + [markdown] slideshow={"slide_type": "slide"}
# ## Summary
#
# * Optimization is an important field in mathematics with many applications!
#
# * Numerical optimizers can help us find extrema of complicated functions.
#
# * The gradient descent method is extremely simple but powerful!
# -
| 10/optimization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Problem Set 1 — Coding Part
#
# **Lecture:** Data Compression With Deep Probabilistic Models (Prof. Bamler, University of Tuebingen)
#
# - This notebook constitutes the coding part of Problem Set 1, published on 20 April 2021 and discussed on 26 April 2021.
# - Download the full problem set from the [course website](https://robamler.github.io/teaching/compress21/).
# ## Problem 1.2: Naive Symbol Code Implementation
#
# In this exercise, we'll implement a very naive but correct encoder and decoder for prefix-free symbol codes.
# We only care about correctness for now, not about computational efficiency.
#
# We represent bit strings (code words and the concatenated encoded message) as lists of boolean values, where `True` represents a "one"-bit and `False` represents a "zero" bit.
# Please be aware that this would be an extremely inefficient representation for a real application.
# We represent code books as dictionaries from symbols to bit strings (i.e., to lists of boolean values).
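To see why the list-of-bools representation is wasteful: each Python bool carries object overhead, while a real coder packs eight bits per byte. A minimal packing sketch (for illustration only; the exercises below deliberately keep the list representation):

```python
def pack_bits(bits):
    """Pack a list of bools into bytes, most significant bit first,
    zero-padding the final partial byte."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        group = bits[i:i + 8]
        byte = 0
        for bit in group:
            byte = (byte << 1) | int(bit)
        # Shift a partial final group so the padding lands in the low bits.
        byte <<= (8 - len(group)) % 8
        out.append(byte)
    return bytes(out)

print(pack_bits([True, False, True]).hex())  # 'a0'
```

Note that a real implementation would also have to record the bit length, since the padding bits are otherwise indistinguishable from data.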
# ### Sample Code Books
#
# Our decoding algorithm will only work with prefix codes.
# Let's define some sample prefix codes for our unit tests.
# +
# our example C^{(4)} from Problem 1.1
SAMPLE_CODEBOOK_MONOPOLY_C4 = {
2: [False, True, False],
3: [True, False],
4: [False, False],
5: [True, True],
6: [False, True, True]
}
# additional example (exercise: verify that this is a prefix code)
SAMPLE_CODEBOOK2 = {
'a': [True, False],
'b': [False],
'c': [True, True, False, False],
'd': [True, True, False, True],
'e': [True, True, True],
}
# -
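The "exercise" above — verifying that a codebook is prefix-free — can be automated: a code is prefix-free iff no code word is a prefix of a different code word. A small checker (my own sketch, not part of the problem set):

```python
def is_prefix_free(codebook):
    """Return True iff no code word is a prefix of a different code word."""
    words = list(codebook.values())
    for i, a in enumerate(words):
        for j, b in enumerate(words):
            # a is a prefix of b if b starts with all of a's bits.
            if i != j and len(a) <= len(b) and b[:len(a)] == a:
                return False
    return True

codebook2 = {
    'a': [True, False],
    'b': [False],
    'c': [True, True, False, False],
    'd': [True, True, False, True],
    'e': [True, True, True],
}
print(is_prefix_free(codebook2))                          # True
print(is_prefix_free({'a': [True], 'b': [True, False]}))  # False: 'a' prefixes 'b'
```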
# ### Encoder
#
# The encoder is very simple.
# Fill in the blank where it says "TODO" (a single line of code will do).
def encode(message, codebook):
"""Encodes a sequence of symbols using a prefix-free symbol code.
This is a very inefficient implementation for teaching purposes only.
Args:
message (list): The message you want to encode, as a list of symbols.
codebook (dict): A codebook for a prefix-free symbol code. Must be a
dictionary whose keys contain all symbols that appear in `message`
(and may contain additional keys). Each key must map to a list of
booleans, representing the code word as a sequence of bits. Must
specify a prefix-free code, i.e., no code word may be the prefix
of the code word for a different symbol.
Returns:
list: The encoded bit string as a list of bools.
"""
encoded = []
for symbol in message:
# TODO: look up code word for `symbol` in the `codebook` and append
# it to `encoded`
#
# PROPOSED SOLUTION:
encoded += codebook[symbol]
return encoded
# Now run these unit tests to verify your implementation:
assert encode([], SAMPLE_CODEBOOK_MONOPOLY_C4) == []
assert encode([], SAMPLE_CODEBOOK2) == []
assert (
encode([4, 3, 6, 4, 2], SAMPLE_CODEBOOK_MONOPOLY_C4)
== [False, False, True, False, False, True, True, False, False, False, True, False]
)
assert (
encode(['c', 'b', 'a', 'd', 'b', 'b', 'd', 'e'], SAMPLE_CODEBOOK2)
== [True, True, False, False, False, True, False, True, True, False,
True, False, False, True, True, False, True, True, True, True]
)
# ### Decoder
#
# The decoder is more complicated because it has to infer the boundaries between concatenated code words.
# To do this, we will use the assumption that the code book defines a *prefix-free* symbol code.
#
# We use a kind of brute-force implementation here.
# It is correct but very inefficient.
# We'll implement a more efficient method in the next problem set.
#
# Fill in the blanks where it says "TODO".
def decode(encoded, codebook):
"""Decodes a bitstring into a sequence of symbols using a prefix-free symbol code.
This is a very inefficient implementation for teaching purposes only.
Args:
encoded (list): The compressed bit string as a list of bools.
codebook (dict): A codebook for a prefix-free symbol code.
Returns:
list: The decoded message as a list of symbols.
"""
def is_prefix_of(prefix_candidate, codeword):
# TODO: Both `prefix_candidate` and `codeword` are lists of bools. Return
# `True` if `codeword` is at least as long as `prefix_candidate` and if
# `codeword` starts with `prefix_candidate`. Otherwise, return `False`.
#
# PROPOSED SOLUTION:
return (
len(codeword) >= len(prefix_candidate)
and codeword[:len(prefix_candidate)] == prefix_candidate
)
decoded = []
partial_codeword = []
candidate_symbols = list(codebook.keys())
for bit in encoded:
# TODO: apply a filter to `candidate_symbols`: only retain the ones
# whose code words start with `partial_codeword`.
#
# PROPOSED SOLUTION:
partial_codeword.append(bit)
candidate_symbols = [
symbol for symbol in candidate_symbols
if is_prefix_of(partial_codeword, codebook[symbol])
]
if len(candidate_symbols) == 0:
raise ValueError('Encountered invalid code word.')
elif len(candidate_symbols) == 1 and partial_codeword == codebook[candidate_symbols[0]]:
# TODO:
# - Append the decoded symbol to `decoded`.
# - Then reset `partial_codeword` and `candidate_symbols` to their initial values
# so that we can start decoding the next code word.
#
# PROPOSED SOLUTION:
decoded.append(candidate_symbols[0])
partial_codeword = []
candidate_symbols = list(codebook.keys())
assert partial_codeword == [], 'The compressed message ended in the middle of a code word.'
return decoded
# Now run these unit tests to verify your implementation:
assert decode([], SAMPLE_CODEBOOK_MONOPOLY_C4) == []
assert decode([], SAMPLE_CODEBOOK2) == []
assert decode(
[False, False, True, False, False, True, True, False, False, False, True, False],
SAMPLE_CODEBOOK_MONOPOLY_C4) == [4, 3, 6, 4, 2]
assert decode(
[True, True, False, False, False, True, False, True, True, False,
True, False, False, True, True, False, True, True, True, True],
SAMPLE_CODEBOOK2) == ['c', 'b', 'a', 'd', 'b', 'b', 'd', 'e']
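# The decoder's correctness hinges on the codebook actually being prefix-free. A small helper (a sketch, not part of the problem set) can check that property directly:

```python
def is_prefix_free(codebook):
    """Check that no code word is a prefix of another symbol's code word."""
    words = list(codebook.values())
    for i, w1 in enumerate(words):
        for j, w2 in enumerate(words):
            # w1 is a prefix of w2 if w2 starts with all of w1's bits.
            if i != j and w2[:len(w1)] == w1:
                return False
    return True

# 'a' -> 0 is a prefix of 'b' -> 01, so this codebook is not prefix-free:
assert not is_prefix_free({'a': [False], 'b': [False, True]})
# The code 0 / 10 / 11 is prefix-free:
assert is_prefix_free({'a': [False], 'b': [True, False], 'c': [True, True]})
```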
# ### Round-Trip Tests
#
# These entropy coding algorithms can contain very subtle errors that wouldn't show up in the minimal unit tests we've run so far.
# It is generally a good idea to implement more elaborate tests.
# This is easy to do now that you have both an encoder and a decoder: generate some long-ish sequence of random symbols.
# Then encode and decode it and verify that the decoder reconstructs the original message.
# Always record the random number seed so that, if you find an error, you can reproduce it and start debugging.
# #### Proposed Solution:
import numpy as np
def round_trip_test(codebook, seed):
rng = np.random.RandomState(seed) # Provide explicit random number seed to make tests reproducible.
alphabet = sorted(codebook.keys()) # Sort keys to make tests reproducible.
message = rng.choice(alphabet, 1000)
encoded = encode(message, codebook)
decoded = decode(encoded, codebook)
assert len(message) == len(decoded) # Start with simple check so that error messages are more interpretable.
assert (message == decoded).all()
round_trip_test(SAMPLE_CODEBOOK_MONOPOLY_C4, 123)
round_trip_test(SAMPLE_CODEBOOK2, 456)
# ## Problem 1.3: Binary Heap
#
# This exercise is a preparation for the next problem set, where we will implement the Huffman coding algorithm for constructing optimal symbol codes.
# Our implementation will use a *binary heap* (a tree-shaped data structure that comes in *min-heap* and *max-heap* variants and is the standard way to implement a *priority queue*).
#
# - (Re-)familiarize yourself with the concept of a binary heap (e.g., skim the [Wikipedia article](https://en.wikipedia.org/wiki/Binary_heap)).
# It's not so important for now how the heap is implemented; just make sure you understand what the `insert` and `pop` (or `extract`) operations do.
# - The following code plays around with the binary heap implementation in the Python standard library.
# Run it, read it, and make sure you understand what it does (this code has no particular purpose apart from verifying that we understand how the API works).
import numpy as np
import heapq
np.random.seed(123)
test_data = np.random.choice(10, size=20)
test_data
heap = []
for item in test_data:
heapq.heappush(heap, item)
heap
sorted_test_data = []
while heap != []:
sorted_test_data.append(heapq.heappop(heap))
sorted_test_data # Should print the items from `test_data` in sorted order.
assert set(test_data) == set(sorted_test_data)
assert sorted(sorted_test_data) == sorted_test_data
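# As a preview of the next problem set: Huffman coding repeatedly pops the two lowest-frequency entries and pushes a merged entry back. With `(frequency, payload)` tuples, `heapq` compares lexicographically, which gives exactly the min-heap-by-frequency behavior we need. The merge rule below is a simplified sketch, not the real algorithm:

```python
import heapq

# (frequency, payload) tuples: heapq orders them by frequency first.
heap = [(5, 'a'), (1, 'b'), (3, 'c'), (2, 'd')]
heapq.heapify(heap)

first = heapq.heappop(heap)   # smallest frequency
second = heapq.heappop(heap)  # second smallest
# Push a merged entry back (Huffman would build a tree node here).
heapq.heappush(heap, (first[0] + second[0], first[1] + second[1]))

assert first == (1, 'b')
assert second == (2, 'd')
assert sorted(heap) == [(3, 'bd'), (3, 'c'), (5, 'a')]
```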
| problems/problem-set-01-solutions/problem-set-01-solutions.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Simulation Experiments
# We simulate a greybox fuzzing campaign to understand the behavior of the discovery probability as more inputs are generated. In contrast to a blackbox fuzzer, a greybox fuzzer adds inputs to the corpus that discover a new species (e.g., that cover a new program branch).
#
# We simulate `Trials = 30` greybox campaigns, each of length `Length = 50000` generated test inputs, where the subject has `S = 50` species. We take data points at exponentially increasing intervals (`NRange`).
#
# Seeds are identified by their species (species are identified by a number between 1 and `S`). In order to derive the local species distribution for a seed, we assume that there exists some template distribution `Weights`. This captures that, irrespective of the current seed, some species are always abundant while some are always rare. The `Gravity` parameter determines how close the local distribution of a seed is to that template distribution.
#
# `Weights` is defined such that there are very few highly abundant species and a lot of very rare species. Specifically, species `i` is `Lambda = 1.3` times more likely than species `i + 1`.
#
# The initial corpus contains a single seed (`Corpus = c(5)`), and a very rare species is marked as "buggy" (`Bug = 40`).
# ## Initialize
# All the required libraries has to be installed to the R environment prior to running the workbook. You can install packages using the below R function.
#
# example: If you need to install "ggplot2",
# ```install.packages("ggplot2",dependencies=TRUE)```
#
# Installing multiple packages at once is also possible using a vector.
# example: ```install.packages(c("ggplot2","tidyr","grid"),dependencies=TRUE)```
#
# Use the ```install_version()``` function (from the `remotes` package) to install a specific version of a package.
# example: ```install_version("ggplot2", version = "3.1.1", repos = "http://cran.us.r-project.org")```
# Required packages
# +
library(ggplot2)
library(scales)
library(grid)
library(dplyr)
theme_set(theme_bw())
options(warn=-1)
# -
# ## Simulation Configuration
# +
# Configuration
S = 50 # number of species
Length = 50000 # number of test cases
Trials = 30 # number of experiment repetitions
NRange = c(1.2^seq(0,log2(Length)/log2(1.2)), Length) # data points
Lambda = 1.3 # probability decrease factor
Gravity = 100 # Impact of Weight on local distribution
Corpus = c(5) # We start with one initial seed that is neither too likely nor too unlikely if sampled from 1:S according to Weights.
Bug = 40 # id of buggy species (higher id means more difficult to find)
# Weights will be used to derive local distributions
# As Gravity approaches infty, a local distribution approaches Weights.
Weights = rep(0,S)
for (i in 1:S) {
if (i == 1)
Weights[i] = 0.5
else Weights[i] = Weights[i - 1] / Lambda
}
Weights = Weights / sum(Weights)
# Just some stats
print("10 smallest probabilities:")
print(tail(Weights, n=10))
print(paste("10 samples between 1 and ", S, ":", sep=""))
print(sample(1:S, 10, replace=TRUE, prob=Weights))
# -
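# The template distribution can also be written down directly. The following is a hedged NumPy re-sketch of the R code above (0-based indices; not used by the simulation), just to check the normalization and the `Lambda` ratio:

```python
import numpy as np

S = 50
Lambda = 1.3

# Species i is Lambda times more likely than species i + 1: geometric decay.
weights = Lambda ** -np.arange(S, dtype=float)
weights /= weights.sum()

assert abs(weights.sum() - 1.0) < 1e-9
# The defining ratio holds between every pair of consecutive species:
assert np.allclose(weights[:-1] / weights[1:], Lambda)
```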
# ## How we derive the local distribution for a seed.
local_distribution = function(seed) {
local_weights = rep(0,S)
local_weights[seed] = 0.5
# Decrease probability on the left and right of seed.
left = seed - 1
while (left >= 1) {
local_weights[left] = local_weights[left + 1] / Lambda
left = left - 1
}
right = seed + 1
while (right <= S) {
local_weights[right] = local_weights[right - 1] / Lambda
right = right + 1
}
# Add to Weights. The key idea is that some species (statements/branches)
# are always very likely or very unlikely, irrespective of the seed.
local_weights = local_weights / Gravity + Weights
# Normalize
return(local_weights / sum(local_weights))
}
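# The same mixture can be sketched in NumPy (0-based indices; a re-implementation for illustration, not used by the simulation). The local bump of height 0.5 decays by `Lambda` per step in both directions, is scaled down by `Gravity`, and is added to the template before normalizing:

```python
import numpy as np

S, Lambda, Gravity = 50, 1.3, 100

# Global template: few abundant species, many rare ones (as in `Weights` above).
template = Lambda ** -np.arange(S, dtype=float)
template /= template.sum()

def local_distribution(seed):
    # Probability peaks at the seed's species and decays by a factor of
    # Lambda per step in both directions, then gets pulled towards the
    # template; larger Gravity means a stronger pull.
    local = 0.5 * Lambda ** -np.abs(np.arange(S) - (seed - 1))
    mixed = local / Gravity + template
    return mixed / mixed.sum()

pi = local_distribution(5)
assert abs(pi.sum() - 1.0) < 1e-9
# The seed's own species is more likely than under a distant seed:
assert local_distribution(5)[4] > local_distribution(40)[4]
```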
# ## Power Schedule
# We start with a power schedule that chooses all seeds uniformly.
power_schedule = function(corpus) {
# Uniform distribution
return (rep(1/length(corpus), length(corpus)))
# Prefer later seeds
#weights = seq(1:length(corpus))
#return (weights / sum(weights))
}
print(power_schedule(Corpus))
print(power_schedule(c(1,2)))
# Let's Simulate
# +
data = data.frame("Run"=character(),"Fuzzer"=character(), "n"=integer(), "factor"=character(), "value"=numeric())
for (N in NRange) {
# Assuming there is only one initial seed
pi = local_distribution(Corpus[1])
data = rbind(data,
data.frame(Run=1, # We only need one run as there is no sampling
Fuzzer="Blackbox", n=N, factor="Discovery probability", value=sum(pi * (1-pi)^N)),
data.frame(Run=1, Fuzzer="Blackbox", n=N, factor="#Species discovered", value=sum(1-(1-pi)^N)),
data.frame(Run=1, Fuzzer="Blackbox", n=N, factor="Residual risk", value=pi[Bug]))
}
time_to_error = c()
for (run in 1:Trials) {
timestamps = list()
# Construct corpus
current_C = Corpus
for (N in 1:Length) {
seed = current_C[sample(1:length(current_C), 1, prob=power_schedule(current_C))]
species = sample(1:S, 1, prob=local_distribution(seed))
#print(current_C)
if (! species %in% current_C) {
current_C = c(current_C, species)
timestamps[[toString(N)]] = current_C
}
}
# Derive discovery probability values at time stamps
for (N in NRange) {
# Find current corpus at time N
current_C = Corpus
for (n in names(timestamps)) {
if (as.integer(n) >= N)
break
else
current_C = timestamps[[n]]
}
if (Bug %in% current_C) {
time_to_error = c(time_to_error, as.integer(n))
}
# Derive global distribution pi from local distributions and power schedule
pi = rep(0,S)
qt = power_schedule(current_C)
for (t in 1:length(current_C)) {
pi = pi + qt[t] * local_distribution(current_C[t])
}
# Compute discovery probability
discovery_probability = sum(unlist(lapply(sample(1:S, N*2, replace=TRUE, prob=pi), function(x) ifelse(x %in% current_C,0,1)))) / (N * 2)
# Append to data frame
data = rbind(data,
data.frame(Run=run, Fuzzer="Greybox", n=N, factor="Discovery probability",value=discovery_probability),
data.frame(Run=run, Fuzzer="Greybox", n=N, factor="#Species discovered", value=length(current_C)),
data.frame(Run=run, Fuzzer="Greybox", n=N, factor="Bug Probability", value=pi[Bug]))
}
}
# -
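# Aside: the blackbox value `sum(pi * (1-pi)^N)` computed above is the expected probability that input `N + 1` discovers a species unseen among the first `N` inputs. A quick Python sketch with a made-up three-species distribution (illustrative values only) checks this against simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up three-species distribution (values are illustrative only).
pi = np.array([0.5, 0.3, 0.2])
N = 10

# Analytic: expected probability that input N + 1 discovers a new species.
analytic = np.sum(pi * (1 - pi) ** N)

# Monte Carlo estimate: sample N + 1 inputs and check whether the last one
# is a species not seen among the first N.
trials = 100_000
draws = rng.choice(3, size=(trials, N + 1), p=pi)
seen = (draws[:, :N] == draws[:, [N]]).any(axis=1)
estimate = 1.0 - seen.mean()

assert abs(estimate - analytic) < 0.01
```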
print(paste("[Blackbox] Avg. time to error: ", 1/local_distribution(Corpus[1])[Bug]))
if (length(time_to_error) == 0) {
print("[Greybox] Error not found.")
} else {
print(paste("[Greybox] Avg. time to error: ", mean(time_to_error)))
}
summary(subset(data,factor=="Discovery probability"))
# Compute the average residual risk and number of discovered species over all runs.
# +
mean_data = data %>%
group_by(Fuzzer, factor, n) %>%
summarize(value = mean(value, na.rm = TRUE))
summary(mean_data)
# -
# ## Results
# +
ggplot(subset(data, factor=="Discovery probability"), aes(n, value)) +
geom_point(aes(shape=Fuzzer),color="gray") +
geom_line(data=subset(mean_data, factor=="Discovery probability"), aes(n, value, linetype=Fuzzer)) +
#geom_smooth(aes(linetype=Fuzzer), color ="black") +
scale_x_log10("Number of generated test inputs (n)",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
scale_y_log10("Discovery prob. Delta(n)",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
#facet_wrap(~ factor, ncol=1, scale="free") +
theme(axis.text.x = element_text(colour="black"), axis.text.y = element_text(colour="black"), legend.position = c(0.85, 0.75))
ggsave("../outputs/residual.pdf",scale=0.4)
ggplot(subset(data, factor=="#Species discovered"), aes(n, value / S)) +
geom_point(aes(shape=Fuzzer), color="gray") +
#geom_smooth(aes(linetype=Fuzzer), color ="black") +
geom_line(data=subset(mean_data, factor=="#Species discovered"), aes(n, value / S, linetype=Fuzzer)) +
scale_x_log10("Number of generated test inputs (n)",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
scale_y_continuous("Species coverage S(n)/S",
labels=percent) +
theme(axis.text.x = element_text(colour="black"), axis.text.y = element_text(colour="black"), legend.position = c(0.85, 0.25))
ggsave("../outputs/species.pdf",scale=0.4)
# -
# ## Goodness-Of-Fit of an Extrapolation by Orders of Magnitude
summary(lm(formula = log(subset(data, Fuzzer=="Blackbox" & factor=="Discovery probability" & value>0)$n) ~ log(subset(data, Fuzzer=="Blackbox" & factor=="Discovery probability" & value>0)$value)))
summary(lm(formula = log(subset(data, Fuzzer=="Greybox" & factor=="Discovery probability" & value>0)$n) ~ log(subset(data, Fuzzer=="Greybox" & factor=="Discovery probability" & value>0)$value)))
# +
logbias = log10(subset(data, Fuzzer == "Greybox" & factor=="Discovery probability" )$value) - log10(subset(data, Fuzzer == "Blackbox"& factor=="Discovery probability")$value)
logbias_avg = log10(subset(mean_data, Fuzzer == "Greybox" & factor=="Discovery probability" )$value) - log10(subset(mean_data, Fuzzer == "Blackbox"& factor=="Discovery probability")$value)
p1 = ggplot() +
geom_point(aes(x=rep(NRange,Trials), y=logbias), color="gray") +
geom_hline(yintercept=0, color="black", linetype="dashed") +
geom_line(aes(x=NRange, y=logbias_avg), color="black") +
#geom_smooth(aes(x=rep(NRange,1), y=bias), color="black") +
scale_x_log10("Number of generated test inputs (n)",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
scale_y_continuous("Adaptive bias (log10(GB)-log10(BB))") +
#facet_wrap(~c("Log-Difference (log(GB)-log(BB)"),strip.position="right") +
theme(axis.text.x = element_text(colour="black"), axis.text.y = element_text(colour="black"))
#ggsave("bias.pdf",scale=0.4)
bias = (subset(data, Fuzzer == "Greybox" & factor=="Discovery probability" )$value) - (subset(data, Fuzzer == "Blackbox"& factor=="Discovery probability")$value)
bias_avg = (subset(mean_data, Fuzzer == "Greybox" & factor=="Discovery probability" )$value) - (subset(mean_data, Fuzzer == "Blackbox"& factor=="Discovery probability")$value)
p2 = ggplot() +
geom_point(aes(x=rep(NRange,Trials), y=bias), color="gray") +
geom_hline(yintercept=0, color="black", linetype="dashed") +
geom_line(aes(x=NRange, y=bias_avg), color="black") +
#geom_smooth(aes(x=rep(NRange,1), y=bias), color="black") +
scale_x_log10("Number of generated test inputs (n)",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
scale_y_continuous("Adaptive bias (GB - BB)") +
#facet_wrap(~c("Difference (GB - BB)"),strip.position="right") +
theme(axis.text.x = element_blank(), axis.title.x = element_blank(), axis.ticks=element_blank(),
plot.margin = unit(c(1,1,0,1), "cm"))
grid.newpage()
grid.draw(rbind(ggplotGrob(p2), ggplotGrob(p1), size = "last"))
pdf("../outputs/bias.pdf", width=5, height=5)
grid.newpage()
grid.draw(rbind(ggplotGrob(p2), ggplotGrob(p1), size = "last"))
dev.off()
# -
ggplot(subset(data, Fuzzer == "Greybox" & factor %in% c("Bug Probability","Discovery probability")), aes(n, value)) +
geom_point(aes(shape=factor),color="gray") +
#geom_smooth(aes(linetype=factor), color ="black") +
geom_line(data=subset(mean_data, Fuzzer == "Greybox" & factor %in% c("Bug Probability","Discovery probability")), aes(linetype=factor),color="black") +
geom_vline(aes(xintercept=mean(time_to_error)),linetype="dashed") +
scale_x_log10("Number of generated test inputs (n)",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
scale_y_log10("Probability",
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
facet_wrap(~Fuzzer) +
theme(axis.text.x = element_text(colour="black"), axis.text.y = element_text(colour="black"), legend.position = "bottom")
ggsave("../outputs/risk.pdf",scale=0.4, height=8)
| workbooks/Simulation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:hetmech]
# language: python
# name: conda-env-hetmech-py
# ---
# # Proof of concept DWWC matrix computation
# +
import pandas
from neo4j.v1 import GraphDatabase
import hetio.readwrite
import hetio.neo4j
import hetio.pathtools
from hetmech.degree_weight import dwwc
from hetio.matrix import get_node_to_position
# -
url = 'https://github.com/dhimmel/hetionet/raw/76550e6c93fbe92124edc71725e8c7dd4ca8b1f5/hetnet/json/hetionet-v1.0.json.bz2'
graph = hetio.readwrite.read_graph(url)
metagraph = graph.metagraph
# +
compound = 'DB00050'
disease = 'DOID:0050425'
damping_exponent = 0.4
# CbGeAlD does not contain duplicate nodes, so DWWC is equivalent to DWPC
metapath = metagraph.metapath_from_abbrev('CbGeAlD')
# -
# %%time
rows, cols, CbGeAlD_pc = dwwc(graph, metapath, damping=0)
rows, cols, CbGeAlD_dwwc = dwwc(graph, metapath, damping=damping_exponent)
CbGeAlD_dwwc.shape
# Density
CbGeAlD_dwwc.astype(bool).mean()
# Path count matrix
CbGeAlD_pc = CbGeAlD_pc.astype(int)
CbGeAlD_pc
# DWWC matrix
CbGeAlD_dwwc
i = rows.index(compound)
j = cols.index(disease)
# Path count
CbGeAlD_pc[i, j]
# Degree-weighted walk count
CbGeAlD_dwwc[i, j]
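# Conceptually, the DWWC matrix is a product of degree-weighted adjacency matrices, one per metaedge, where each edge is damped by its endpoints' degrees raised to `-w`. The toy example below uses made-up adjacency matrices (not Hetionet data) to sketch the idea; `hetmech.degree_weight.dwwc` does the real bookkeeping:

```python
import numpy as np

def degree_weight(A, w):
    # W = Dr^-w . A . Dc^-w: damp every edge by its endpoints' degrees.
    dr = A.sum(axis=1)
    dc = A.sum(axis=0)
    dr = np.where(dr > 0, dr, 1.0) ** -w
    dc = np.where(dc > 0, dc, 1.0) ** -w
    return dr[:, None] * A * dc[None, :]

# Toy metapath with two metaedges: 2 compounds -> 3 genes -> 2 diseases.
A_cg = np.array([[1, 1, 0],
                 [0, 1, 1]], dtype=float)
A_gd = np.array([[1, 0],
                 [1, 1],
                 [0, 1]], dtype=float)

w = 0.4
dwwc_matrix = degree_weight(A_cg, w) @ degree_weight(A_gd, w)

# With damping 0 the degree weights vanish and we recover plain path counts.
assert np.array_equal(degree_weight(A_cg, 0) @ degree_weight(A_gd, 0),
                      A_cg @ A_gd)
```

# Each walk's weight is a product of factors at most 1, so every DWWC entry is bounded above by the corresponding path count.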
# ### Cypher DWPC implementation
query = hetio.neo4j.construct_dwpc_query(metapath, property='identifier')
print(query)
driver = GraphDatabase.driver("bolt://neo4j.het.io")
params = {
'source': compound,
'target': disease,
'w': damping_exponent,
}
with driver.session() as session:
result = session.run(query, params)
result = result.single()
result
# ### hetio DWWC implementation
compound_id = 'Compound', 'DB00050'
disease_id = 'Disease', 'DOID:0050425'
paths = hetio.pathtools.paths_between(
graph,
source=graph.node_dict[compound_id],
target=graph.node_dict[disease_id],
metapath=metapath,
duplicates=True,
)
paths
# Path count
len(paths)
# DWWC
hetio.pathtools.DWPC(paths, damping_exponent=damping_exponent)
| 3.dwwc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Pipes and Filters
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Overview
# **Teaching:** 25 min
#
# **Exercises:** 10 min
#
# **Questions**
# - How can I combine existing commands to do new things?
#
# **Objectives**
# - Redirect a command's output to a file.
# - Process a file instead of keyboard input using redirection.
# - Construct command pipelines with two or more stages.
# - Explain what usually happens if a program or pipeline isn't given any input to process.
# - Explain Unix's 'small pieces, loosely joined' philosophy.
# + [markdown] slideshow={"slide_type": "slide"}
# Now that we know a few basic commands, we can finally look at the shell's most powerful feature: the ease with which it lets us combine existing programs in new ways. We'll start with a directory called molecules that contains six files describing some simple organic molecules. The .pdb extension indicates that these files are in Protein Data Bank format, a simple text format that specifies the type and position of each atom in the molecule.
# + slideshow={"slide_type": "subslide"}
# %%bash2 --dir ~/data-shell
# ls molecules
# + [markdown] slideshow={"slide_type": "subslide"}
# Let's go into that directory with cd and run the command `wc *.pdb`. `wc` is the "word count" command: it counts the number of lines, words, and characters in files (from left to right, in that order).
#
# The `*` in `*.pdb` matches zero or more characters, so the shell turns `*.pdb` into a list of all `.pdb` files in the current directory:
# + slideshow={"slide_type": "fragment"}
# %%bash2
# cd molecules
wc *.pdb
# + [markdown] slideshow={"slide_type": "subslide"}
# If we run `wc -l` instead of just `wc`, the output shows only the number of lines per file:
# + slideshow={"slide_type": "fragment"}
# %%bash2
wc -l *.pdb
# + [markdown] slideshow={"slide_type": "subslide"}
# We can also use `-w` to get only the number of words, or `-c` to get only the number of characters.
#
# Which of these files is shortest? It's an easy question to answer when there are only six files, but what if there were 6000? Our first step toward a solution is to run the command:
# + slideshow={"slide_type": "fragment"}
# %%bash2
wc -l *.pdb > lengths.txt
# + [markdown] slideshow={"slide_type": "subslide"}
# The greater than symbol, `>`, tells the shell to redirect the command's output to a file instead of printing it to the screen. (This is why there is no screen output: everything that wc would have printed has gone into the file `lengths.txt` instead.) The shell will create the file if it doesn't exist. If the file exists, it will be silently overwritten, which may lead to data loss and thus requires some caution. `ls lengths.txt` confirms that the file exists:
# + slideshow={"slide_type": "fragment"}
# %%bash2
# ls lengths.txt
# + [markdown] slideshow={"slide_type": "subslide"}
# We can now send the content of `lengths.txt` to the screen using `cat lengths.txt`. cat stands for "concatenate": it prints the contents of files one after another. There's only one file in this case, so cat just shows us what it contains:
# + slideshow={"slide_type": "fragment"}
# %%bash2
# cat lengths.txt
# + [markdown] slideshow={"slide_type": "slide"}
# ## Information: Output Page by Page
# We'll continue to use `cat` in this lesson, for convenience and consistency, but it has the disadvantage that it always dumps the whole file onto your screen. More useful in practice is the command `less`, which you use with `$ less lengths.txt`. This displays a screenful of the file, and then stops. You can go forward one screenful by pressing the spacebar, or back one by pressing `b`. Press `q` to quit.
# + [markdown] slideshow={"slide_type": "slide"}
# Now let's use the `sort` command to sort its contents.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise: What Does `sort -n` Do?
# If we run sort on a file containing the following lines:
# ```bash
# 10
# 2
# 19
# 22
# 6
# ```
# the output is:
# ```bash
# 10
# 19
# 2
# 22
# 6
# ```
# If we run sort `-n` on the same input, we get this instead:
# ```bash
# 2
# 6
# 10
# 19
# 22
# ```
# Explain why `-n` has this effect.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Solution: What Does `sort -n` Do?
#
# The `-n` flag specifies a numeric sort, rather than alphabetical.
# + [markdown] slideshow={"slide_type": "slide"}
# We will also use the `-n` flag to specify that the sort is numerical instead of alphabetical. This does not change the file; instead, it sends the sorted result to the screen:
# + slideshow={"slide_type": "fragment"}
# %%bash2
sort -n lengths.txt
# + [markdown] slideshow={"slide_type": "subslide"}
# We can put the sorted list of lines in another temporary file called `sorted-lengths.txt` by putting `> sorted-lengths.txt` after the command, just as we used `> lengths.txt` to put the output of `wc` into `lengths.txt`. Once we've done that, we can run another command called `head` to get the first few lines in `sorted-lengths.txt`:
# + slideshow={"slide_type": "fragment"}
# %%bash2
sort -n lengths.txt > sorted-lengths.txt
head -n 1 sorted-lengths.txt
# + [markdown] slideshow={"slide_type": "subslide"}
# Using `-n 1` with `head` tells it that we only want the first line of the file; `-n 20` would get the first 20, and so on. Since `sorted-lengths.txt` contains the lengths of our files ordered from least to greatest, the output of head must be the file with the fewest lines.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Information: Redirecting to the same file
# It's a very bad idea to try redirecting the output of a command that operates on a file to the same file. For example:
# ```bash
# sort -n lengths.txt > lengths.txt
# ```
# Doing something like this may give you incorrect results and/or delete the contents of `lengths.txt`.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise: What Does `>>` Mean?
# We have seen the use of `>`, but there is a similar operator `>>` which works slightly differently. By using the `echo` command to print strings, test the commands below to reveal the difference between the two operators:
# ```bash
# echo hello > testfile01.txt
# ```
# and:
# ```bash
# echo hello >> testfile02.txt
# ```
# Hint: Try executing each command twice in a row and then examining the output files.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Solution: What Does `>>` Mean?
# In the first example with `>`, the string "hello" is written to `testfile01.txt`, but the file gets overwritten each time we run the command.
#
# We see from the second example that the `>>` operator also writes "hello" to a file (in this case `testfile02.txt`), but appends the string to the file if it already exists (i.e. when we run it for the second time).
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise: Appending Data
# We have already met the `head` command, which prints lines from the start of a file. `tail` is similar, but prints lines from the end of a file instead.
#
# Consider the file `data-shell/data/animals.txt`. After these commands, select the answer that corresponds to the file `animalsUpd.txt`:
# ```bash
# head -n 3 animals.txt > animalsUpd.txt
# tail -n 2 animals.txt >> animalsUpd.txt
# ```
# 1. The first three lines of `animals.txt`
# 2. The last two lines of `animals.txt`
# 3. The first three lines and the last two lines of `animals.txt`
# 4. The second and third lines of `animals.txt`
# + [markdown] slideshow={"slide_type": "slide"}
# ## Solution: Appending Data
# Option 3 is correct. For option 1 to be correct we would only run the `head` command. For option 2 to be correct we would only run the `tail` command. For option 4 to be correct we would have to pipe the output of `head` into `tail -n 2` by doing `head -n 3 animals.txt | tail -n 2 >> animalsUpd.txt`.
# + [markdown] slideshow={"slide_type": "slide"}
# If you think this is confusing, you're in good company: even once you understand what `wc`, `sort`, and `head` do, all those intermediate files make it hard to follow what's going on. We can make it easier to understand by running `sort` and `head` together:
# + slideshow={"slide_type": "fragment"}
# %%bash2
sort -n lengths.txt | head -n 1
# + [markdown] slideshow={"slide_type": "subslide"}
# The vertical bar, `|`, between the two commands is called a pipe. It tells the shell that we want to use the output of the command on the left as the input to the command on the right. The computer might create a temporary file if it needs to, or copy data from one program to the other in memory, or something else entirely; we don't have to know or care.
# + [markdown] slideshow={"slide_type": "subslide"}
# Nothing prevents us from chaining pipes consecutively. That is, we can for example send the output of `wc` directly to `sort`, and then the resulting output to `head`. Thus we first use a pipe to send the output of `wc` to `sort`:
# + slideshow={"slide_type": "fragment"}
# %%bash2
wc -l *.pdb | sort -n
# + [markdown] slideshow={"slide_type": "subslide"}
# And now we send the output of this pipe, through another pipe, to `head`, so that the full pipeline becomes:
# + slideshow={"slide_type": "fragment"}
# %%bash2
wc -l *.pdb | sort -n | head -n 1
# + [markdown] slideshow={"slide_type": "subslide"}
# This is exactly like a mathematician nesting functions like _log(3x)_ and saying "the log of three times x". In our case, the calculation is "head of sort of line count of `*.pdb`".
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise: Piping Commands Together
# In our current directory, we want to find the 3 files which have the least number of lines. Which command listed below would work?
# 1. `wc -l * > sort -n > head -n 3`
# 2. `wc -l * | sort -n | head -n 1-3`
# 3. `wc -l * | head -n 3 | sort -n`
# 4. `wc -l * | sort -n | head -n 3`
# + [markdown] slideshow={"slide_type": "slide"}
# ## Solution: Piping Commands Together
# Option 4 is the solution. The pipe character `|` is used to feed the standard output from one process to the standard input of another. `>` is used to redirect standard output to a file. Try it in the `data-shell/molecules` directory!
# + [markdown] slideshow={"slide_type": "subslide"}
# Here's what actually happens behind the scenes when we create a pipe. When a computer runs a program - any program - it creates a process in memory to hold the program's software and its current state. Every process has an input channel called standard input. (By this point, you may be surprised that the name is so memorable, but don't worry: most Unix programmers call it "stdin"). Every process also has a default output channel called standard output (or "stdout"). A second output channel called standard error (stderr) also exists. This channel is typically used for error or diagnostic messages, and it allows a user to pipe the output of one program into another while still receiving error messages in the terminal.
#
# The shell is actually just another program. Under normal circumstances, whatever we type on the keyboard is sent to the shell on its standard input, and whatever it produces on standard output is displayed on our screen. When we tell the shell to run a program, it creates a new process and temporarily sends whatever we type on our keyboard to that process's standard input, and whatever the process sends to standard output to the screen.
#
# Here's what happens when we run `wc -l *.pdb > lengths.txt`. The shell starts by telling the computer to create a new process to run the `wc` program. Since we've provided some filenames as arguments, `wc` reads from them instead of from standard input. And since we've used `>` to redirect output to a file, the shell connects the process's standard output to that file.
#
# If we run `wc -l *.pdb | sort -n` instead, the shell creates two processes (one for each process in the pipe) so that `wc` and `sort` run simultaneously. The standard output of `wc` is fed directly to the standard input of sort; since there's no redirection with `>`, `sort`'s output goes to the screen. And if we run `wc -l *.pdb | sort -n | head -n 1`, we get three processes with data flowing from the files, through `wc` to `sort`, and from `sort` through `head` to the screen.
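# The same plumbing can be sketched from Python with the `subprocess` module, which makes the stdin/stdout wiring of a pipe explicit (a sketch for illustration, not part of the lesson):

```python
import subprocess

# Reproduce `sort -n | head -n 1`: wire one process's standard output to the
# next process's standard input, just as the shell does for `|`.
data = b"10\n2\n19\n22\n6\n"

sort = subprocess.Popen(["sort", "-n"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
head = subprocess.Popen(["head", "-n", "1"],
                        stdin=sort.stdout, stdout=subprocess.PIPE)
sort.stdin.write(data)
sort.stdin.close()
sort.stdout.close()  # let `sort` receive SIGPIPE if `head` exits early
out, _ = head.communicate()
sort.wait()

assert out == b"2\n"
```

# Closing `sort.stdout` in the parent mirrors what the shell does: only `head` then holds the read end of the pipe.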
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# + [markdown] slideshow={"slide_type": "subslide"}
# This simple idea is why Unix has been so successful. Instead of creating enormous programs that try to do many different things, Unix programmers focus on creating lots of simple tools that each do one job well, and that work well with each other. This programming model is called "pipes and filters". We've already seen pipes; a filter is a program like `wc` or `sort` that transforms a stream of input into a stream of output. Almost all of the standard Unix tools can work this way: unless told to do otherwise, they read from standard input, do something with what they've read, and write to standard output.
#
# The key is that any program that reads lines of text from standard input and writes lines of text to standard output can be combined with every other program that behaves this way as well. You can and should write your programs this way so that you and other people can put those programs into pipes to multiply their power.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Information: Redirecting Input
# As well as using `>` to redirect a program's output, we can use `<` to redirect its input, i.e., to read from a file instead of from standard input. For example, instead of writing `wc ammonia.pdb`, we could write `wc < ammonia.pdb`. In the first case, `wc` gets a command line argument telling it what file to open. In the second, `wc` doesn't have any command line arguments, so it reads from standard input, but we have told the shell to send the contents of `ammonia.pdb` to `wc`'s standard input.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise: What Does `<` Mean?
# Change directory to `data-shell` (the top level of our downloaded example data).
#
# What is the difference between:
# ```bash
# wc -l notes.txt
# ```
# and:
# ```bash
# wc -l < notes.txt
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# ## Solution: What Does `<` Mean?
# `<` is used to redirect input to a command.
#
# In both examples, the shell returns the number of lines from the input to the `wc` command. In the first example, the input is the file `notes.txt` and the file name is given in the output from the `wc` command. In the second example, the contents of the file `notes.txt` are redirected to standard input. It is as if we have entered the contents of the file by typing at the prompt. Hence the file name is not given in the output - just the number of lines. Try this for yourself:
# ```bash
# wc -l
# this
# is
# a test
# Ctrl-D # This lets the shell know you have finished typing the input
#
# 3
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise: Why Does `uniq` Only Remove Adjacent Duplicates?
# The command `uniq` removes adjacent duplicated lines from its input. For example, the file `data-shell/data/salmon.txt` contains:
# ```bash
# coho
# coho
# steelhead
# coho
# steelhead
# steelhead
# ```
# Running the command `uniq salmon.txt` from the `data-shell/data` directory produces:
# ```bash
# coho
# steelhead
# coho
# steelhead
# ```
# Why do you think `uniq` only removes adjacent duplicated lines? (Hint: think about very large data sets.) What other command could you combine with it in a pipe to remove all duplicated lines?
# + [markdown] slideshow={"slide_type": "slide"}
# ## Solution: Why Does `uniq` Only Remove Adjacent Duplicates?
# ```bash
# sort salmon.txt | uniq
# ```
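The adjacency restriction is what lets `uniq` stream arbitrarily large files while remembering only the previous line. A small Python sketch (illustrative) of the same rule, using the salmon data above:

```python
from itertools import groupby

def uniq(lines):
    # keep one line per run of adjacent duplicates, like `uniq`
    return [key for key, _ in groupby(lines)]

salmon = ["coho", "coho", "steelhead", "coho", "steelhead", "steelhead"]
assert uniq(salmon) == ["coho", "steelhead", "coho", "steelhead"]
assert uniq(sorted(salmon)) == ["coho", "steelhead"]  # sort first to drop all duplicates
```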
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise: Pipe Reading Comprehension
# A file called `animals.txt` (in the `data-shell/data` folder) contains the following data:
# ```bash
# 2012-11-05,deer
# 2012-11-05,rabbit
# 2012-11-05,raccoon
# 2012-11-06,rabbit
# 2012-11-06,deer
# 2012-11-06,fox
# 2012-11-07,rabbit
# 2012-11-07,bear
# ```
# What text passes through each of the pipes and the final redirect in the pipeline below?
# ```bash
# cat animals.txt | head -n 5 | tail -n 3 | sort -r > final.txt
# ```
# Hint: build the pipeline up one command at a time to test your understanding.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Solution: Pipe Reading Comprehension
# The `head` command extracts the first 5 lines from `animals.txt`. Then, the last 3 lines are extracted from the previous 5 by using the `tail` command. With the `sort -r` command those 3 lines are sorted in reverse order and finally, the output is redirected to a file `final.txt`. The content of this file can be checked by executing `cat final.txt`. The file should contain the following lines:
# ```bash
# 2012-11-06,rabbit
# 2012-11-06,deer
# 2012-11-05,raccoon
# ```
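The same pipeline can be traced with list slicing in Python (an illustration of the data flow, not part of the lesson):

```python
lines = [
    "2012-11-05,deer", "2012-11-05,rabbit", "2012-11-05,raccoon",
    "2012-11-06,rabbit", "2012-11-06,deer", "2012-11-06,fox",
    "2012-11-07,rabbit", "2012-11-07,bear",
]
first_five = lines[:5]                     # head -n 5
last_three = first_five[-3:]               # tail -n 3
final = sorted(last_three, reverse=True)   # sort -r
```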
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise: Pipe Construction
# For the file `animals.txt` from the previous exercise, the command:
# ```bash
# cut -d , -f 2 animals.txt
# ```
# uses the `-d` flag to separate each line by comma, and the `-f` flag to print the second field in each line, to give the following output:
# ```bash
# deer
# rabbit
# raccoon
# rabbit
# deer
# fox
# rabbit
# bear
# ```
# What other command(s) could be added to this in a pipeline to find out what animals the file contains (without any duplicates in their names)?
# + [markdown] slideshow={"slide_type": "slide"}
# ## Solution: Pipe Construction
# ```bash
# cut -d , -f 2 animals.txt | sort | uniq
# ```
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise: Which Pipe?
# The file `animals.txt` contains 586 lines of data formatted as follows:
# ```bash
# 2012-11-05,deer
# 2012-11-05,rabbit
# 2012-11-05,raccoon
# 2012-11-06,rabbit
# ...
# ```
# Assuming your current directory is `data-shell/data/`, what command would you use to produce a table that shows the total count of each type of animal in the file?
# 1. `grep {deer, rabbit, raccoon, deer, fox, bear} animals.txt | wc -l`
# 2. `sort animals.txt | uniq -c`
# 3. `sort -t, -k2,2 animals.txt | uniq -c`
# 4. `cut -d, -f 2 animals.txt | uniq -c`
# 5. `cut -d, -f 2 animals.txt | sort | uniq -c`
# 6. `cut -d, -f 2 animals.txt | sort | uniq -c | wc -l`
# + [markdown] slideshow={"slide_type": "slide"}
# ## Solution: Which Pipe?
# Option 5. is the correct answer. If you have difficulty understanding why, try running the commands, or sub-sections of the pipelines (make sure you are in the `data-shell/data` directory).
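`cut -d, -f 2 | sort | uniq -c` builds a frequency table; `collections.Counter` does the same job in Python (illustrative sketch using the sample lines above):

```python
from collections import Counter

lines = [
    "2012-11-05,deer", "2012-11-05,rabbit", "2012-11-05,raccoon",
    "2012-11-06,rabbit", "2012-11-06,deer", "2012-11-06,fox",
    "2012-11-07,rabbit", "2012-11-07,bear",
]
counts = Counter(line.split(",")[1] for line in lines)  # cut -d, -f 2 ... | uniq -c
```

Note that `Counter` needs no `sort` step because it hashes its keys; `uniq -c` needs sorted input because it only ever compares adjacent lines.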
# + [markdown] slideshow={"slide_type": "slide"}
# ## Nelle's Pipeline: Checking Files
# Nelle has run her samples through the assay machines and created 17 files in the `north-pacific-gyre/2012-07-03` directory described earlier. As a quick sanity check, starting from her home directory, Nelle types:
# ```bash
# cd north-pacific-gyre/2012-07-03
# wc -l *.txt
# ```
# The output is 18 lines that look like this:
# ```bash
# 300 NENE01729A.txt
# 300 NENE01729B.txt
# 300 NENE01736A.txt
# 300 NENE01751A.txt
# 300 NENE01751B.txt
# 300 NENE01812A.txt
# ... ...
# ```
# + [markdown] slideshow={"slide_type": "subslide"}
# Now she types this:
# ```bash
# wc -l *.txt | sort -n | head -n 5
#
# 240 NENE02018B.txt
# 300 NENE01729A.txt
# 300 NENE01729B.txt
# 300 NENE01736A.txt
# 300 NENE01751A.txt
# ```
# + [markdown] slideshow={"slide_type": "subslide"}
# Whoops: one of the files is 60 lines shorter than the others. When she goes back and checks it, she sees that she did that assay at 8:00 on a Monday morning — someone was probably in using the machine on the weekend, and she forgot to reset it. Before re-running that sample, she checks to see if any files have too much data:
# ```bash
# wc -l *.txt | sort -n | tail -n 5
#
# 300 NENE02040B.txt
# 300 NENE02040Z.txt
# 300 NENE02043A.txt
# 300 NENE02043B.txt
# 5040 total
# ```
# + [markdown] slideshow={"slide_type": "subslide"}
# Those numbers look good, but what's that 'Z' doing there in the second line? All of her samples should be marked 'A' or 'B'; by convention, her lab uses 'Z' to indicate samples with missing information. To find others like it, she does this:
# ```bash
# ls *Z.txt
#
# NENE01971Z.txt NENE02040Z.txt
# ```
# + [markdown] slideshow={"slide_type": "subslide"}
# Sure enough, when she checks the log on her laptop, there's no depth recorded for either of those samples. Since it's too late to get the information any other way, she must exclude those two files from her analysis. She could just delete them using `rm`, but there are actually some analyses she might do later where depth doesn't matter, so instead, she'll just be careful later on to select files using the wildcard expression `*[AB].txt`. As always, the `*` matches any number of characters; the expression `[AB]` matches either an 'A' or a 'B', so this matches all the valid data files she has.
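Python's `fnmatch` module implements the same glob syntax, so the effect of `*[AB].txt` can be checked programmatically (file names taken from the story above):

```python
from fnmatch import fnmatch

pattern = "*[AB].txt"
files = ["NENE01729A.txt", "NENE01729B.txt", "NENE02040Z.txt"]
valid = [name for name in files if fnmatch(name, pattern)]  # drops the 'Z' sample
```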
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise: Wildcard Expressions
# Wildcard expressions can be very complex, but you can sometimes write them in ways that only use simple syntax, at the expense of being a bit more verbose.
#
# Consider the directory `data-shell/north-pacific-gyre/2012-07-03`: the wildcard expression `*[AB].txt` matches all files ending in `A.txt` or `B.txt`. Imagine you forgot about this.
#
# 1. Can you match the same set of files with basic wildcard expressions that do not use the `[]` syntax? Hint: You may need more than one expression.
# 2. The expression that you found and the expression from the lesson match the same set of files in this example. What is the small difference between the outputs?
# 3. Under what circumstances would your new expression produce an error message where the original one would not?
# + [markdown] slideshow={"slide_type": "slide"}
# ## Solution: Wildcard Expressions
# ```bash
# ls *A.txt
# ls *B.txt
# ```
# 2. The output from the new commands is separated because there are two commands.
# 3. When there are no files ending in `A.txt`, or no files ending in `B.txt`: the shell then passes the unmatched pattern through to `ls`, which reports an error.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise: Removing Unneeded Files
# Suppose you want to delete your processed data files, and only keep your raw files and processing script to save storage. The raw files end in `.dat` and the processed files end in `.txt`. Which of the following would remove all the processed data files, and only the processed data files?
# 1. `rm ?.txt`
# 2. `rm *.txt`
# 3. `rm * .txt`
# 4. `rm *.*`
# + [markdown] slideshow={"slide_type": "slide"}
# ## Solution: Removing Unneeded Files
# 1. This would remove only `.txt` files with one-character names
# 2. This is the correct answer
# 3. The shell would expand `*` to match everything in the current directory, so the command would try to remove all matched files and an additional file called `.txt`
# 4. The shell would expand `*.*` to match all files with a dot in their name, so this command would delete the processed `.txt` files but also the raw `.dat` files
# + [markdown] slideshow={"slide_type": "slide"}
# ## Key Points
# - `cat` displays the contents of its inputs.
# - `head` displays the first 10 lines of its input.
# - `tail` displays the last 10 lines of its input.
# - `sort` sorts its inputs.
# - `wc` counts lines, words, and characters in its inputs.
# - `command > file` redirects a command's output to a file.
# - `first | second` is a pipeline: the output of the first command is used as the input to the second.
# - The best way to use the shell is to use pipes to combine simple single-purpose programs (filters).
| notebooks_plain/05_pipesNfilters.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
# +
# %matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
# +
# install seaborn
# ! python -m pip install seaborn
# import library
import seaborn as sns
# -
# install openpyxl (will let us read xlsx files)
# ! python -m pip install openpyxl
# download the dataset and read it into a pandas dataframe:
df_can = pd.read_excel(
'https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DV0101EN-SkillsNetwork/Data%20Files/Canada.xlsx',
sheet_name='Canada by Citizenship',
skiprows=range(20),
skipfooter=2)
# +
# clean up the dataset to remove unnecessary columns (e.g. REG)
df_can.drop(['AREA','REG','DEV','Type','Coverage'], axis = 1, inplace = True)
# rename the columns so that they make sense
df_can.rename (columns = {'OdName':'Country', 'AreaName':'Continent','RegName':'Region'}, inplace = True)
# make all column labels of type string
df_can.columns = list(map(str, df_can.columns))
# set the country name as index - useful for quickly looking up countries using .loc method
df_can.set_index('Country', inplace = True)
# add total column
df_can['Total'] = df_can.sum(axis = 1, numeric_only=True)
# limit the years
years = list(map(str, range(1980, 2014)))
# get the total population per year
df_tot = pd.DataFrame(df_can[years].sum(axis=0))
# change the years to type float
df_tot.index = map(float, df_tot.index)
# reset the index to put in back in as a column in the df_tot dataframe
df_tot.reset_index(inplace=True)
# rename columns
df_tot.columns = ['year', 'total']
# view the final dataframe
df_tot.head()
# +
plt.figure(figsize=(15, 10))
sns.set(font_scale=1.5)
sns.set_style('whitegrid')
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration') # add x- and y-labels
ax.set_title('Total Immigration to Canada from 1980 to 2013') # add title
plt.show()
# -
| regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# IMPORTING PACKAGES
import struct
import pandas as pd
import os
import librosa
import librosa.display
import numpy as np
import pickle
from keras.models import Sequential
from keras.layers import Add
from keras.layers import Dense, Dropout, Activation, Flatten, BatchNormalization, Input
from keras.layers import Convolution2D, Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.optimizers import Adam
from keras.utils import np_utils
from keras.models import model_from_json
from keras.utils import to_categorical
from keras.callbacks import ModelCheckpoint
from keras.applications.mobilenet import MobileNet
from keras.applications.mobilenet_v2 import MobileNetV2
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.preprocessing import LabelEncoder
from datetime import datetime
import matplotlib.pyplot as plt
# -
# Source: https://medium.com/@mikesmales/sound-classification-using-deep-learning-8bc2aa1990b7
# HELPER FUNCTION FOR READING AUDIO FILE
class WavFileHelper():
    # read header properties of one WAV file; paths come from the CSV shipped with UrbanSound8K
def read_file_properties(self, filename):
wave_file = open(filename,"rb")
riff = wave_file.read(12)
fmt = wave_file.read(36)
num_channels_string = fmt[10:12]
num_channels = struct.unpack('<H', num_channels_string)[0]
sample_rate_string = fmt[12:16]
sample_rate = struct.unpack("<I",sample_rate_string)[0]
bit_depth_string = fmt[22:24]
bit_depth = struct.unpack("<H",bit_depth_string)[0]
        # return the channel, sample rate and bit depth information of the sound files
return (num_channels, sample_rate, bit_depth)
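The slice offsets above assume the canonical 44-byte WAV layout: the `fmt ` chunk follows the 12-byte RIFF header, with channel count, sample rate, and bit depth at fixed positions. A synthetic header (hypothetical values) confirms the slices line up:

```python
import io
import struct

# canonical PCM fmt chunk: size 16, format 1, stereo, 44.1 kHz, 16-bit
header = (b"RIFF" + struct.pack("<I", 36) + b"WAVE" +
          b"fmt " + struct.pack("<IHHIIHH", 16, 1, 2, 44100, 176400, 4, 16))
f = io.BytesIO(header)
riff = f.read(12)   # "RIFF" + chunk size + "WAVE"
fmt = f.read(36)    # shorter here, but the offsets below stay in range
num_channels = struct.unpack("<H", fmt[10:12])[0]
sample_rate = struct.unpack("<I", fmt[12:16])[0]
bit_depth = struct.unpack("<H", fmt[22:24])[0]
```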
# +
# EXTRACTING THE WAVE FORM FROM SOUND TRACK AND CONVERT TO 1-D ARRAY
# n is the mfcc value
def extract_feature(file_name, n):
    standard_size = 88200 # target length in samples (2 s at 44.1 kHz; note librosa.load resamples to 22050 Hz by default, so this is ~4 s of resampled audio)
try:
audio, sample_rate = librosa.core.load(file_name, mono=True, res_type='kaiser_fast')
fill = standard_size - audio.shape[0]
# if the sound file is less than 2s, fill the short part with zeros
if(fill>0):
audio = np.concatenate((audio, np.zeros(fill)), axis=0)
# if the file is more than 2s, clip the excess part
elif(fill<0):
audio = audio[:standard_size]
mfccs = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=n)
mfccs = librosa.util.normalize(mfccs)
except Exception as e:
print("Error encountered while parsing file: ", file_name)
return None
# return the audio spectrum of the sound file
return mfccs
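The pad-or-clip step in `extract_feature` can be isolated and checked on synthetic arrays (a sketch of the same logic, not the training pipeline itself):

```python
import numpy as np

def fit_length(audio, standard_size=88200):
    """Zero-pad short clips and truncate long ones to a fixed sample count."""
    fill = standard_size - audio.shape[0]
    if fill > 0:
        return np.concatenate((audio, np.zeros(fill)), axis=0)
    return audio[:standard_size]
```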
# Set the path to the full UrbanSound dataset
def audio_extration(n=40):
fulldatasetpath = 'UrbanSound8K/audio/'
metadata = pd.read_csv(fulldatasetpath + 'metadata/UrbanSound8K.csv')
features = []
label_amount = {}
# Iterate through each sound file and extract the features
for index, row in metadata.iterrows():
file_name = os.path.join(os.path.abspath(fulldatasetpath),'fold'+str(row["fold"])+'/',str(row["slice_file_name"]))
class_label = row["class"]
data = extract_feature(file_name, n)
label_amount[class_label] = label_amount.get(class_label, 0) + 1
features.append([data, class_label])
    # Convert into a pandas dataframe
featuresdf = pd.DataFrame(features, columns=['feature','class_label'])
#print('Finished feature extraction from ', len(featuresdf), ' files')
return featuresdf, metadata
# -
# EXTRACT THE CHANNELS, SAMPLE RATE AND BIT DEPTH DATA FROM SOUNDS FILE
# use the previous functions and actually do the info extraction
def info_extration(metadata):
wavfilehelper = WavFileHelper()
audiodata = []
for index, row in metadata.iterrows():
file_name = os.path.join(os.path.abspath('UrbanSound8K/audio/'),'fold'+str(row["fold"])+'/',str(row["slice_file_name"]))
data = wavfilehelper.read_file_properties(file_name)
audiodata.append(data)
    # Convert into a pandas dataframe
audiodf = pd.DataFrame(audiodata, columns=['num_channels','sample_rate','bit_depth'])
return audiodf
# This part is used to test the previous functions
'''
print(type(featuresdf.iloc[0]['feature']))
print(featuresdf.head())
print()
print(audiodf.head())
lst = [0,1,10,23,96,106,114,122,171,196]
for n in lst:
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(featuresdf.iloc[n]['feature'][5:])
plt.ylabel('Amplitude')
plt.show()
print(featuresdf.iloc[n]['class_label'])
'''
# SEPARATING DATA INTO TRAINING, VALIDATION AND TESTING SETS (roughly 78%/11%/11%)
# use the previous functions and actually read and store all the files in the dataset
def data_seperation(featuresdf):
# Convert features and corresponding classification labels into numpy arrays
X = np.array(featuresdf.feature.tolist())
y = np.array(featuresdf.class_label.tolist())
    # Encode the labels using the one-hot technique
    label_encoder = LabelEncoder()
    y_one_hot = to_categorical(label_encoder.fit_transform(y))
# split the dataset
train_features, validate_features, train_labels, validate_labels = train_test_split(X, y_one_hot, test_size=0.11, random_state = 42)
train_features, test_features, train_labels, test_labels = train_test_split(train_features, train_labels, test_size=0.12, random_state = 42)
#print("Training data set size: ",train_features.shape[0])
#print("Validate data set size: ",validate_features.shape[0])
#print("Test data set size: ",test_features.shape[0])
return (train_features, validate_features, test_features, train_labels, validate_labels, test_labels, y_one_hot)
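The two chained splits (11 %, then 12 % of the remainder) give roughly 78/11/11 train/validate/test. The arithmetic below is an illustration assuming scikit-learn's behaviour of rounding the test share up, using the known UrbanSound8K size of 8732 clips:

```python
import math

n = 8732                           # clips in UrbanSound8K
n_validate = math.ceil(n * 0.11)   # first split holds out the validation set
rest = n - n_validate
n_test = math.ceil(rest * 0.12)    # second split carves the test set from the rest
n_train = rest - n_test
shares = (n_train / n, n_validate / n, n_test / n)  # ~ (0.78, 0.11, 0.11)
```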
# Save the data for easy access
def save_data(n, train_features, validate_features, test_features, train_labels, validate_labels, test_labels, y_one_hot):
pickle_file = 'audio_features_mfcc'+ str(n) +'.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open(pickle_file, 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': validate_features,
'valid_labels': validate_labels,
'test_dataset': test_features,
'test_labels': test_labels,
'y_one_hot': y_one_hot,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
# MODEL BUILDING
# n is mfcc number
def model_building(n, validate_features, validate_labels, y_one_hot):
num_rows = n # audio frequency spectrum 173*n
num_columns = 173
num_channels = 1 # combine 2 channels to one channel
validate_features = validate_features.reshape(validate_features.shape[0], num_rows, num_columns, num_channels)
num_labels = y_one_hot.shape[1]
filter_size = 2
drop_rate = 0.4 # dropping 40% of data to prevent over-fitting
# Construct model
model = Sequential()
# Layer 1 - Conv 2D Layer 1
model.add(Conv2D(filters=16, kernel_size=2, input_shape=(num_rows, num_columns, num_channels)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(drop_rate))
# Layer - Conv Layer 2
model.add(Conv2D(filters=32, kernel_size=2))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(drop_rate))
# Layer 3 - Conv Layer 3
model.add(Conv2D(filters=64, kernel_size=2))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(drop_rate))
# Layer 4 - Conv Layer 4
model.add(Conv2D(filters=128, kernel_size=2))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(drop_rate))
# Layer 5 - Flatten Layer
model.add(GlobalAveragePooling2D())
model.add(Dense(num_labels, activation='softmax'))
# Compile the model
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
# Pre-training evaluation
# Display model architecture summary
# model.summary()
# Calculate pre-training accuracy
# score = model.evaluate(validate_features, validate_labels, verbose=1)
# accuracy = 100*score[1]
# print("Pre-training accuracy: %.4f%%" % accuracy)
return model
# Train the model
def model_training(n, model, train_features, train_labels, validate_features,
validate_labels, num_epochs=150, num_batch_size=256):
checkpointer = ModelCheckpoint(filepath='weights.best.basic_cnn.hdf5', verbose=1, save_best_only=True)
num_rows = n # audio frequency spectrum 173*40
num_columns = 173
num_channels = 1 # combine 2 channels to one channel
train_features = train_features.reshape(train_features.shape[0], num_rows, num_columns, num_channels)
validate_features = validate_features.reshape(validate_features.shape[0], num_rows, num_columns, num_channels)
#start = datetime.now()
history = model.fit(train_features, train_labels, batch_size=num_batch_size, epochs=num_epochs, validation_data=(validate_features, validate_labels), shuffle=True, callbacks=[checkpointer], verbose=1)
#duration = datetime.now() - start
#print("Training completed in time: ", duration)
return history
# MobileNet model
def mobileNet_building(n, validate_features, validate_labels, y_one_hot):
num_rows = n # audio frequency spectrum n*173
num_columns = 173
num_channels = 3
drop_rate = 0.4 # dropping 40% of data to prevent over-fitting
input_img = Input(shape=(num_rows, num_columns, num_channels))
mn = MobileNetV2(input_shape=(num_rows, num_columns, num_channels), include_top=False, weights='imagenet',input_tensor=input_img, pooling='max')
# add a layer at the end of the model
# the original MobileNet has 1000 classes, here only have 10 classes
# so add a layer with 10 output at the end
model = Sequential()
model.add(mn)
model.add(Dense(10, activation='softmax'))
# Compile the MobileNet model
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
model.summary()
return model
# Train the model
def mobileNet_training(n, model, train_features, train_labels, validate_features,
validate_labels, num_epochs=150, num_batch_size=256):
#start = datetime.now()
history = model.fit(train_features, train_labels, batch_size=num_batch_size, epochs=num_epochs, validation_data=(validate_features, validate_labels), shuffle=True, verbose=1)
#duration = datetime.now() - start
#print("Training completed in time: ", duration)
return history
# load the pickle file back into train/validation/test arrays
def load_data(pickle_file_path):
file_dict = pickle.load( open( pickle_file_path, "rb" ))
return file_dict['train_dataset'], file_dict['valid_dataset'],\
file_dict['test_dataset'],file_dict['train_labels'],\
file_dict['valid_labels'], file_dict['test_labels'], file_dict['y_one_hot']
# Pipeline
# experiment with the MFCC number
def pip(n):
# data preprocessing
'''
featuresdf, metadata = audio_extration(n)
audiodf = info_extration(metadata)
train_features, validate_features, test_features, \
train_labels, validate_labels, test_labels, y_one_hot = data_seperation(featuresdf)
save_data(n, train_features, validate_features, test_features, train_labels, validate_labels, test_labels, y_one_hot)
'''
# load pre-processed data
pickle_file_path = 'audio_features_mfcc40.pickle'
train_features, validate_features, test_features, train_labels,\
validate_labels, test_labels, y_one_hot = load_data(pickle_file_path)
# Simple model
#model = model_building(n, validate_features, validate_labels, y_one_hot)
#history = model_training(n, model, train_features, train_labels, validate_features,
# validate_labels, num_epochs=50, num_batch_size=256)
# MobileNet
train_features = np.stack((train_features, train_features, train_features), axis=3)
validate_features = np.stack((validate_features, validate_features, validate_features), axis=3)
print(train_features.shape)
print(validate_features.shape)
model = mobileNet_building(n, validate_features, validate_labels, y_one_hot)
history = mobileNet_training(n, model, train_features, train_labels, validate_features,
validate_labels, num_epochs=50, num_batch_size=256)
return history
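MobileNetV2's ImageNet weights expect three input channels, which is why `pip` replicates the single MFCC channel with `np.stack`. The shape bookkeeping, with a hypothetical batch of 8 clips:

```python
import numpy as np

mono = np.zeros((8, 40, 173))               # (samples, n_mfcc, frames)
rgb = np.stack((mono, mono, mono), axis=3)  # copy the one channel three times
```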
# +
# evaluate the effect of between MFCC number
mfcc_num = list(range(40,260,20))
accuracy = []
history_list = []
for n in mfcc_num:
history = pip(n)
history_list.append(history)
accuracy.append(history.history['val_acc'][-1])
plt.plot(mfcc_num, accuracy)
plt.title('Model Accuracy vs. MFCC Number')
plt.ylabel('Accuracy')
plt.xlabel('MFCC Number')
plt.savefig('Model Accuracy vs. MFCC Number.png')
plt.show()
# +
pickle_file = 'training_history.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open(pickle_file, 'wb') as pfile:
pickle.dump(
{
'history_list': history_list,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Training history saved to pickle file.')
# +
acc = []
mfcc = list(range(40,140,20))
for hist in history_list:
acc.append(hist.history['val_acc'][-1])
plt.plot(mfcc, acc)
plt.title('Model Accuracy vs. MFCC Number')
plt.ylabel('Accuracy')
plt.xlabel('MFCC Number')
plt.savefig('Model Accuracy vs. MFCC Number.png')
plt.show()
# -
# training the model based on pretrained weights
history = pip(40)
# summarize history for accuracy
#print(history.history.keys())
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validate'], loc='upper left')
plt.show()
# +
# Evaluating the model on the training and validation sets
score = model.evaluate(train_features, train_labels, verbose=0)
print("Training Accuracy: ", score[1])
score = model.evaluate(validate_features, validate_labels, verbose=0)
print("Validation Accuracy: ", score[1])
# +
# Source: https://machinelearningmastery.com/save-load-keras-deep-learning-models/
# Save model to JSON
model_json = model.to_json()
with open("sound_classification_model.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model_weights.h5")
print("Saved model to disk")
# -
# Load the saved model
# load json and create model
json_file = open('sound_classification_model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("model_weights.h5")
print("Loaded model from disk")
# +
# Final testing after the model is tuned
num_rows, num_columns, num_channels = 40, 173, 1  # assumed to match the trained model's input
test_features = test_features.reshape(test_features.shape[0], num_rows, num_columns, num_channels)
score = model.evaluate(test_features, test_labels, verbose=0)
print("Testing Accuracy: ", score[1])
# -
# Compile the loaded model
loaded_model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
# Test performance on a user-supplied audio file
# x needs to be a numpy array of shape (40, 173) processed through MFCC
# make sure the sound file length is <= 2s, or doesn't contain info after 2s
file_path = ''
x = extract_feature(file_path, 40)  # n_mfcc must match the trained model
print(x.shape)
x = np.stack((x, x, x), axis=2)[np.newaxis, ...]  # (1, 40, 173, 3) to match the MobileNet input
predictions = loaded_model.predict(x)
print(predictions)
# +
# next step:
# 1. Run the model
# 2. Save the data set (train/validation/test data)
# 3. Save the model
# 4. Improve model performance
# 5. Modify the model - possible: VGG16, LeNet, GoogLeNet
# 6. Predict the user input audio clip
| Audio Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Extract amplicons from MAF alignment
# - test run for alignments of >=5 genomes - bimodal distribution: apart from almost-all-species alignments, there are ~7-species (gambiae complex) alignments
# - relaxed amplicon search parameters: allow missing species in alignment (max 11), shorter flanks (40 bp) and longer inserts (200 bp)
# - filter amplicons variable target (940 markers left)
# - extend amplicons
# - discriminative power analysis (aka clusters of identical sequences), remove if fewer than 10 identified lineages
# - annotate amplicons with genes and repeats, remove amplicons with repeats (663 markers left)
# - write combined maf and metadata
#
# Hints from downstream analysis:
# - length filter should be applied for agam reference length, not only to aligned length
# - within-intron markers are not treated as introns, but can be estimated from Gene=AGAP* and Exon=None
#
# __Next__, rerun tree shape analysis, Aedes/Culex/Dmel blast, and assign priority to markers.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pybedtools
from Bio import Phylo
from Bio import AlignIO
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from collections import Counter
from scipy.stats import gaussian_kde
# ## Filenames and parameters
DATA_DIR = "../../../data/"
# multiple genome alignment file from Neafsey et al 2015 Science paper
CHR_FILE = DATA_DIR + "AgamP3_21genome_maf/chr{}.maf"
# annotations from VectorBase
GENES = pybedtools.BedTool(DATA_DIR + "genome/agamP3/Anopheles-gambiae-PEST_BASEFEATURES_AgamP3.8.gff3.gz")
REPEATS = pybedtools.BedTool(DATA_DIR + "genome/agamP3/Anopheles-gambiae-PEST_REPEATFEATURES_AgamP3.gff3.gz")
AMPL_FILE = "data/20180619_comb_ampl.maf"
TREE_FILE = "data/20180619_comb_ampl_tree.nwk"
META_FILE = "data/20180619_comb_ampl_data.csv"
# +
# Alignment filtering parameters
min_species = 10 # minimum number of species in alignment
min_aligned = 150 # minimum alignment length, also used as minimum amplicon length
# relaxed compared to first iteration (190)
min_conserved = 40 # minimum length of flanks with given conservation level (used for primer design),
# here set lower than in the first iteration (50)
max_xs = 0.1 # maximum proportion of indels (represented as X) in flanks
max_ns = 0.1 # maximum proportion of substitutions (represented as N) in flanks
max_insert = 200 # maximum length of non-conserved sequence between two conserved flanks
# increased compared to first iteration (100)
min_id_lineages = 10 # minimum number of distinct sequences in alignment
# this cutoff removes ubiquitous gambiae complex-only alignments
# -
#
#
# ## Find amplicons in alignment - min 5 species (test)
# +
def seq_repr(alignment):
    '''
    Given a multiple sequence alignment, return one consensus string:
    the shared base where a column agrees, 'N' for substitution columns, 'X' for indel columns.'''
seq = ''
for i in range(alignment.get_alignment_length()):
col = alignment[:, i]
if '-' in col: # indel has higher priority than substitution
seq += 'X'
elif len(set(col)) == 1:
seq += col[0]
else:
seq += 'N'
return seq
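A pure-string version of `seq_repr` makes the per-column rules easy to check on a toy alignment (illustration only):

```python
def consensus_repr(rows):
    # 'X' for any column containing an indel, 'N' for substitution columns,
    # otherwise the base shared by all rows
    out = []
    for col in zip(*rows):
        if '-' in col:
            out.append('X')
        elif len(set(col)) == 1:
            out.append(col[0])
        else:
            out.append('N')
    return ''.join(out)
```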
def get_conserved_subsequences(seq, max_ns, max_xs, min_len):
'''
Given sequence, substitution (max_ns) and indel (max_xs) levels, and minimum subsequence length
return list of tuples for the subsequences with given conservation level (overlapping regions merged).
If no conserved subsequences found, return 'None'.'''
slen = len(seq)
if slen < min_len:
return None
def is_conserved(s, max_ns, max_xs):
if s.count('N')/len(s) <= max_ns and s.count('X')/len(s) <= max_xs:
return True
else:
return False
cons_windows = [is_conserved(seq[i:i + min_len], max_ns, max_xs) for i in range(slen - min_len + 1)]
if sum(cons_windows) == 0:
return None
cons_kernels = []
in_kernel = False
for i, cw in enumerate(cons_windows):
if in_kernel:
if cw == False:
in_kernel = False
                cons_kernels.append(i - 1 + min_len)  # right edge of the last conserved window (window i failed)
elif cw == True:
cons_kernels.append(i)
in_kernel = True
if in_kernel:
cons_kernels.append(i + min_len)
# merge overlapping kernels
merged_kernels = []
for i in range(len(cons_kernels)//2):
start = cons_kernels[i * 2]
end = cons_kernels[i * 2 + 1]
if not merged_kernels:
merged_kernels.append((start, end))
else:
prev_start = merged_kernels[-1][0]
prev_end = merged_kernels[-1][1]
if prev_end >= start:
upper_bound = max(prev_end, end)
merged_kernels[-1] = (prev_start, upper_bound) # replace by merged interval
else:
merged_kernels.append((start, end))
return np.asarray(merged_kernels)
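The per-window conservation test is a simple thresholding rule; at the defaults (`max_ns = max_xs = 0.1`) one mismatch in a ten-column window passes but two do not:

```python
def is_conserved(s, max_ns=0.1, max_xs=0.1):
    # same rule as inside get_conserved_subsequences: bounded fractions of
    # substitutions ('N') and indels ('X') per window
    return s.count('N') / len(s) <= max_ns and s.count('X') / len(s) <= max_xs
```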
# functions test
# for alignment in AlignIO.parse("../../data/AgamP3_maf/chr2L.maf", "maf"):
# if len(alignment) >= min_species and alignment.get_alignment_length() >= min_aligned:
# seq = seq_repr(alignment)
# cons = get_conserved_subsequences(seq, max_ns=max_ns, max_xs=max_xs, min_len=min_conserved)
# if cons is not None: # conserved subsequences found
# print(seq)
# print(cons, cons[:,1] - cons[:,0])
# break
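# The kernel-merging step above is a standard interval merge. A standalone sketch
# (function name hypothetical; assumes half-open (start, end) tuples):

```python
def merge_intervals(intervals):
    '''Merge overlapping (start, end) intervals after sorting by start.'''
    merged = []
    for start, end in sorted(intervals):
        if merged and merged[-1][1] >= start:
            # overlap: extend the previous interval in place
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(merge_intervals([(0, 50), (40, 90), (120, 160)]))  # → [(0, 90), (120, 160)]
```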
# +
# Candidate amplicon search - within conserved sequences and between consecutive conserved sequences
def get_candidate_amplicons(cons, min_len, max_insert):
    '''
    Given conserved subsequence intervals, minimum amplicon length and maximum insert length,
    return a list of plausible amplicons with insert positions'''
    ampls = []
    for reg in cons:  # internal amplicons
        if reg[1] - reg[0] >= min_len:
            ampls.append((reg[0], reg[1], 0, 0))
    for i in range(len(cons) - 1):  # amplicons spanning a variable insert
        for j in range(i + 1, len(cons)):
            if cons[j, 0] - cons[i, 1] <= max_insert:
                if cons[j, 1] - cons[i, 0] >= min_len:
                    ampls.append((cons[i, 0], cons[j, 1],
                                  cons[i, 1], cons[j, 0]))
    return ampls
# function test - long run
# for alignment in AlignIO.parse("../../data/AgamP3_maf/chr2L.maf", "maf"):
# if len(alignment) >= min_species and alignment.get_alignment_length() >= min_aligned:
# seq = seq_repr(alignment)
# cons = get_conserved_subsequences(seq, max_ns=max_ns, max_xs=max_xs, min_len=min_conserved)
# if cons is not None:
# ampls = get_candidate_amplicons(cons, min_aligned, max_insert)
# if len(ampls) > 0:
# for reg in cons:
# print(reg, seq[reg[0]:reg[1]])
# for ampl in ampls:
# print(alignment[:, ampl[0]:ampl[1]])
# print(ampls)
# break
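# The same search, restated with plain tuples instead of a NumPy array so it runs
# standalone on toy intervals (`candidate_amplicons` is an illustrative copy, not the
# notebook's function):

```python
def candidate_amplicons(cons, min_len, max_insert):
    '''cons: list of (start, end) conserved intervals, already sorted and merged.'''
    ampls = []
    for start, end in cons:  # internal amplicons within one conserved block
        if end - start >= min_len:
            ampls.append((start, end, 0, 0))
    # pairs of conserved blocks flanking a short variable insert
    for i in range(len(cons) - 1):
        for j in range(i + 1, len(cons)):
            if cons[j][0] - cons[i][1] <= max_insert and cons[j][1] - cons[i][0] >= min_len:
                ampls.append((cons[i][0], cons[j][1], cons[i][1], cons[j][0]))
    return ampls

# one pair of conserved blocks with a 40 bp variable insert qualifies:
print(candidate_amplicons([(0, 60), (100, 150)], min_len=120, max_insert=80))
# → [(0, 150, 60, 100)]
```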
# +
def gapped_coord(aln, coord, ref=0):
    '''
    Transform a coordinate in a MAF alignment according to the number of gaps
    in ref (the ref-th sequence in the alignment)
    '''
    ngaps = str(aln[ref, :coord].seq).count('-')
    return aln[ref].annotations['start'] + coord - ngaps
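# The gap arithmetic in `gapped_coord` can be illustrated on a plain string
# (helper name hypothetical): subtract the gaps seen before the alignment
# coordinate to recover the genome coordinate.

```python
def gapped_coord_simple(aligned_seq, coord, genome_start):
    '''Map an alignment-column coordinate back to a genome coordinate.'''
    ngaps = aligned_seq[:coord].count('-')
    return genome_start + coord - ngaps

# two gaps precede column 5, so 1000 + 5 - 2:
print(gapped_coord_simple('AC--GTT', 5, 1000))  # → 1003
```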
def alignment_to_amplicons(alignment, min_species, min_aligned, max_xs, max_ns,
                           min_conserved, max_insert, annotated=True):
    '''
    Given an alignment and filtering parameters,
    return a list of (alignment, (target start, target end))
    '''
    ampl_data = []
    if len(alignment) >= min_species and alignment.get_alignment_length() >= min_aligned:
        seq = seq_repr(alignment)
        cons = get_conserved_subsequences(seq, max_ns=max_ns, max_xs=max_xs, min_len=min_conserved)
        if cons is not None:
            ampls = get_candidate_amplicons(cons, min_aligned, max_insert)
            if len(ampls) > 0:
                for ampl in ampls:
                    ampl_aln = alignment[:, ampl[0]:ampl[1]]
                    if annotated:
                        ampl_aln[0].annotations = alignment[0].annotations.copy()
                        ampl_aln[0].annotations['start'] = gapped_coord(alignment, ampl[0])
                        ampl_aln[0].annotations['size'] = gapped_coord(alignment, ampl[1]) - ampl_aln[0].annotations['start']
                    ampl_data.append((ampl_aln, (ampl[2] - ampl[0], ampl[3] - ampl[0])))
                return ampl_data
    return None
# function test
test_amplicons = []
for alignment in AlignIO.parse(CHR_FILE.format('3L'), "maf"):
    if alignment[0].annotations['start'] > 9800000:
        a = alignment_to_amplicons(alignment, min_species=21, min_aligned=190, max_xs=0.1, max_ns=0.1,
                                   min_conserved=40, max_insert=200)
        if a is not None:
            test_amplicons.extend(a)
            print(test_amplicons)
            break
    if alignment[0].annotations['start'] > 9811000:
        break
print(test_amplicons[0][0][0].annotations['start'])
# -
# proportion of variable nucleotides depending on window size
x = seq_repr(test_amplicons[0][0])
for n in ('N', 'X'):
    for window in (40, 50):
        plt.plot([x[i:i + window].count(n) / window
                  for i in range(len(x) - window)], label='{} {}'.format(n, window))
plt.axhline(y=0.1, color='r', linestyle='-', label='cutoff')
plt.legend();
# insert size becomes higher than conservative cutoff for this intron marker - increase maximum variable insert
# %%time
amplicons = []
# extract amplicons
for chrom in ('2L', '2R', '3L', '3R', 'X', 'Unkn'):
    print(CHR_FILE.format(chrom))
    for alignment in AlignIO.parse(CHR_FILE.format(chrom), "maf"):
        a = alignment_to_amplicons(alignment, min_species, min_aligned, max_xs, max_ns, min_conserved, max_insert, annotated=True)
        if a is not None:
            amplicons.extend(a)
print(len(amplicons))
# number of aligned species
ax = plt.figure().gca()
ax.hist([len(a[0]) for a in amplicons], bins=range(5, 23))
plt.yscale('log')
ax.xaxis.set_major_locator(MaxNLocator(integer=True));
# clearly bimodal distribution
genomes = Counter()
for a in amplicons:
    if len(a[0]) < 12:
        for i in range(len(a[0])):
            genomes[a[0][i].id.split('.')[0]] += 1
print(genomes)
# most markers recovered in ~7 genomes are found only within gambiae group
# ## Find amplicons in alignment - min 10 species (experiment)
# %%time
amplicons = []
# extract amplicons
for chrom in ('2L', '2R', '3L', '3R', 'X', 'UNKN', 'Y_unplaced'):
    print(CHR_FILE.format(chrom))
    for alignment in AlignIO.parse(CHR_FILE.format(chrom), "maf"):
        a = alignment_to_amplicons(alignment, min_species, min_aligned, max_xs, max_ns, min_conserved, max_insert, annotated=True)
        if a is not None:
            amplicons.extend(a)
print(len(amplicons))
# number of aligned species
ax = plt.figure().gca()
ax.hist([len(a[0]) for a in amplicons], bins=range(5, 23))
plt.yscale('log')
ax.xaxis.set_major_locator(MaxNLocator(integer=True));
# ## Remove amplicons without targets
flt_amplicons = [a for a in amplicons if a[1][0] > 0]
len(flt_amplicons)
# check for the most informative marker from first iteration
for a in amplicons:
    if a[0][0].annotations['start'] > 9810300 and a[0][0].annotations['start'] < 9810500:
        print(a)
# ## Extend variable insert
# +
def prop_var(seq):
    '''
    Return the proportion of variable nucleotides in the seq_repr of an alignment'''
    return (seq.count('N') + seq.count('X')) / len(seq)
def extend_variable(seq, start, end, min_ambig=0.5):
    '''
    Extends the flanks of a variable insert. Works only if seq[0:start] and
    seq[end:len(seq)] are conserved; this should be true for pre-selected
    amplicons (see 20180223).
    Parameters - sequence, start and end of the variable target to be extended,
    minimum proportion of variable sites for the extended region.'''
    var_start = False
    for i in range(0, start - 1):
        if prop_var(seq[i:start]) >= min_ambig:
            # print(seq[i:start])
            var_start = True
        if var_start:
            if seq[i] in 'NX':
                ext_start = i
                # print(ext_start)
                break
    else:
        ext_start = start
    var_end = False
    for i in reversed(range(end + 1, len(seq))):
        if prop_var(seq[end:i]) >= min_ambig:
            # print(seq[end:i])
            var_end = True
        if var_end:
            if seq[i - 1] in 'NX':
                ext_end = i
                # print(ext_end)
                break
    else:
        ext_end = end
    return (ext_start, ext_end)
# -
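# The left-flank extension can be sketched in a simplified greedy variant — not the
# exact scan above, but the same idea: walk outward from the target and keep the
# farthest variable position whose window back to the target stays mostly variable
# (all names hypothetical):

```python
def prop_var(seq):
    '''Proportion of variable positions (N or X) in a seq_repr-style string.'''
    return (seq.count('N') + seq.count('X')) / len(seq)

def extend_left(seq, start, min_ambig=0.5):
    '''Return the leftmost extension point for a variable insert beginning at start.'''
    best = start
    for i in range(start - 1, -1, -1):
        # only stop on a variable position, and only while the extension
        # seq[i:start] still has at least min_ambig variable sites
        if seq[i] in 'NX' and prop_var(seq[i:start]) >= min_ambig:
            best = i
    return best

print(extend_left('AAANXNAAA', 6))  # → 3
```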
long_amplicons = []
for a in flt_amplicons:
    seq = seq_repr(a[0])
    (start, end) = extend_variable(seq, a[1][0], a[1][1])
    insert = seq[start:end]
    long_amplicons.append([a[0], seq, start, end, insert])
display(flt_amplicons[0], long_amplicons[0])
# ## Discriminative power analysis (identical clusters)
# +
def identical_clusters(aln):
    '''
    Given an alignment, return a list of sets of species IDs with identical sequences'''
    ids = [set()]
    dm = DistanceCalculator('identity').get_distance(aln)
    dm.names = [n.split('.')[0] for n in dm.names]
    for i in range(len(dm)):
        for j in range(i + 1, len(dm)):
            if dm[i, j] == 0:
                n1 = dm.names[i]
                n2 = dm.names[j]
                for cl in ids:
                    if n1 in cl:
                        if n2 in cl:
                            break
                        if n2 not in cl:
                            cl.add(n2)
                        break
                else:
                    ids.append(set((n1, n2)))
    id_clusters = ids[1:]
    discrim = len(dm) - sum([len(cl) - 1 for cl in id_clusters])
    return [id_clusters, discrim]

cl_amplicons = []
for a in long_amplicons:
    target = a[0][:, a[2]:a[3]]
    cl_amplicons.append(a + identical_clusters(target))
cl_amplicons[0]
# -
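# For exact identity, the zero-distance clustering above is equivalent to simply
# grouping sequences by value; a stdlib sketch with the same discriminated-lineage
# count (names hypothetical):

```python
from collections import defaultdict

def identical_groups(seqs):
    '''seqs: dict of species ID → sequence string. Returns (clusters, discrim).'''
    groups = defaultdict(list)
    for name, s in seqs.items():
        groups[s].append(name)
    # only groups with more than one member are indistinguishable clusters
    clusters = [set(v) for v in groups.values() if len(v) > 1]
    # number of lineages the marker can still tell apart
    discrim = len(seqs) - sum(len(c) - 1 for c in clusters)
    return clusters, discrim

clusters, discrim = identical_groups({'sp1': 'ACGT', 'sp2': 'ACGT', 'sp3': 'ACGA'})
print(clusters, discrim)
```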
# how many lineages can each amplicon discriminate?
from matplotlib.ticker import MaxNLocator
ax = plt.figure().gca()
ax.hist([a[-1] for a in cl_amplicons], bins=range(20))
ax.xaxis.set_major_locator(MaxNLocator(integer=True));
# divide by the number of species aligned to obtain estimates of relative informativeness
data = [a[-1]/len(a[0]) for a in cl_amplicons]
density = gaussian_kde(data)
xs = np.linspace(0,1,200)
density.covariance_factor = lambda : .1
density._compute_covariance()
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(5, 10))
ax1.hist(data, bins=20)
ax2.plot(xs,density(xs))
plt.show()
# set identification power cutoff of 0.5
# unique sequences divided by total sequences in alignment
# unused in current variation
info_amplicons = [a for a in cl_amplicons if a[-1]/len(a[0]) >= 0.5]
plt.hist([a[-1]/len(a[0]) for a in info_amplicons], bins=20);
display(len(info_amplicons))
len(cl_amplicons)
# ## Create metadata
amplicon_stats = []
for aln in cl_amplicons:
    amplicon_stats.append({
        'seqid': aln[0][0].id,
        'start': aln[0][0].annotations['start'],
        'end': gapped_coord(aln[0], aln[0].get_alignment_length()),
        'aligned_len': aln[0].get_alignment_length(),
        'snvs': aln[1].count('N'),
        'indels': aln[1].count('X'),
        'target_start': gapped_coord(aln[0], aln[2]),
        'target_end': gapped_coord(aln[0], aln[3]),
        'target_aligned_len': aln[3] - aln[2],
        'target_snvs': aln[4].count('N'),
        'target_indels': aln[4].count('X'),
        'aligned_species': len(aln[0]),
        'unid_species': aln[5],
        'id_lineages': aln[6],
        'informativeness': aln[6] / len(aln[0]),
        'chr': aln[0][0].id[10:],
        'alignment': aln[0]
    })
meta = pd.DataFrame(amplicon_stats)
display(meta.shape)
# explore markers per chromosome
meta['chr'].value_counts()
# ## Annotate amplicons - genes and repeats
# create list of BED intervals for amplicons
amplicon_beds = meta[['chr', 'start', 'end']].to_string(header=False, index=False).split('\n')
amplicon_beds[0]
# +
def bt_to_df(bt):
    '''
    Convert a bedtool to a pandas dataframe, replacing empty files with None'''
    if len(bt) > 0:
        return bt.to_dataframe()
    else:
        return None

def annotate_interval(bed_str, genes, repeats):
    '''
    Annotate an interval in BED string format using the genes and repeats annotation tracks
    '''
    def bt_to_df(bt):
        '''
        Convert a bedtool to a pandas dataframe
        (shadows the module-level version; returns an empty frame so pd.concat works)'''
        if len(bt) > 0:
            return bt.to_dataframe()
        else:
            return pd.DataFrame()

    def get_attrs(d, feature, attr_id):
        '''
        From a gff dataframe, extract a list of features by attribute ID.
        Attribute string example for a gene feature:
        ID=AGAP001235;biotype=protein_coding
        '''
        out = []
        try:
            for attr in d[d.feature == feature]['attributes']:
                for a in attr.split(';'):
                    aa = a.split('=')
                    if aa[0] == attr_id:
                        out.append(aa[1])
            if len(out) > 0:
                return ';'.join(out)
        except:  # no annotations
            pass
        return 'None'

    # intersect
    a_bed = pybedtools.BedTool(bed_str, from_string=True)
    ag_gff = genes.intersect(a_bed)
    ar_gff = repeats.intersect(a_bed)
    # convert annotations to a dataframe
    ampl_annot = pd.concat([bt_to_df(ag_gff), bt_to_df(ar_gff)])
    # extract annotations
    attr_dict = {
        'gene': get_attrs(ampl_annot, 'gene', 'ID'),
        'mRNA': get_attrs(ampl_annot, 'mRNA', 'ID'),
        'exon': get_attrs(ampl_annot, 'exon', 'ID'),
        'repeat': get_attrs(ampl_annot, 'repeat', 'Name'),
    }
    attr_dict['utr'] = ('Yes' if ('utr' in str(ampl_annot['feature'])) else 'None')
    attr_dict['intron'] = ('Yes' if (attr_dict['mRNA'].count(';') < attr_dict['exon'].count(';')) else 'None')
    return attr_dict
annotate_interval(amplicon_beds[0], GENES, REPEATS)
# -
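# The attribute handling inside `get_attrs` boils down to splitting GFF `key=value`
# pairs on `;`; a minimal standalone sketch (function name hypothetical):

```python
def parse_gff_attributes(attr_string):
    '''Parse a GFF attribute string like "ID=AGAP001235;biotype=protein_coding".'''
    out = {}
    for field in attr_string.split(';'):
        key, _, value = field.partition('=')
        out[key] = value
    return out

print(parse_gff_attributes('ID=AGAP001235;biotype=protein_coding'))
# → {'ID': 'AGAP001235', 'biotype': 'protein_coding'}
```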
# %%time
# annotation dictionary
ann_dict = dict()
for (i, bed) in enumerate(amplicon_beds):
    ann_dict[i] = annotate_interval(bed, GENES, REPEATS)
pd.DataFrame(ann_dict)
# combine metadata, explore repeats
meta_ann = pd.concat([meta, pd.DataFrame(ann_dict).T], axis=1)
display(meta_ann.shape)
meta_ann['repeat'].value_counts()
# remove repeats and <10 lineages
meta_nonrep = meta_ann[(meta_ann['repeat'] == 'None') & (meta_ann['id_lineages'] >= min_id_lineages)]
meta_nonrep.shape
# markers per gene
meta_nonrep['gene'].value_counts().head(10)
# Overall, multiple markers per gene are common. Top-2:
#
# - AGAP010147 - myosin heavy chain (3R:49Mbp)
# - AGAP002859 - Na/Ca-exchange protein (dmel) (2R:28Mbp)
# intronic markers
print('{} introns'.format(meta_nonrep[meta_nonrep['intron'] == 'Yes'].shape[0]))
print('{} genes'.format(meta_nonrep[meta_nonrep['intron'] == 'Yes']['gene'].nunique()))
# ## Write alignments
# write combined autosomal and X-chromosomal amplicons
count = 0
with open(AMPL_FILE, "w") as handle:
    for a in meta_nonrep['alignment']:
        count += AlignIO.write(a, handle, "maf")
count
# Number of amplicons total
# !grep -c AgamP3 {AMPL_FILE}
# ## Write trees
def phylo_tree(aln):
    '''
    Given an alignment, return an NJ tree in nwk format'''
    calculator = DistanceCalculator('identity')
    dm = calculator.get_distance(aln)
    dm.names = [n.split('.')[0] for n in dm.names]
    constructor = DistanceTreeConstructor()
    tree = constructor.nj(dm)
    return tree
ampl_alns = AlignIO.parse(AMPL_FILE, "maf")
ampl_trees = [phylo_tree(a) for a in ampl_alns]
with open(TREE_FILE, 'w') as o:
Phylo.write(trees=ampl_trees, file=o, format='newick')
# ! wc -l {TREE_FILE}
# ## Write metadata
# final formatting changes (only column-wise)
meta_nonrep['id lineage proportion'] = meta_nonrep.id_lineages / meta_nonrep.aligned_species
meta_nonrep.reset_index(drop=True, inplace=True)
meta_nonrep.drop(columns=['alignment'], inplace=True)
meta_nonrep.tail()
meta_nonrep.to_csv(META_FILE)
| work/1_panel_design/20180619_more_amplicons.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: pyml
# language: python
# name: pyml
# ---
import pandas as pd
df_wine = pd.read_csv('wine.data', header=None)
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
import numpy as np
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
    mean_vecs.append(np.mean(X_train_std[y_train==label], axis=0))
    print('MV %s: %s\n' % (label, mean_vecs[label-1]))
d = 13
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
    class_scatter = np.cov(X_train_std[y_train==label].T)
    S_W += class_scatter
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
print('Class label distribution: %s' % np.bincount(y_train)[1:])
mean_overall = np.mean(X_train_std, axis=0)
d = 13
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
    n = X_train[y_train==i + 1, :].shape[0]
    mean_vec = mean_vec.reshape(d, 1)
    mean_overall = mean_overall.reshape(d, 1)
    S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
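# In one dimension the between-class construction above reduces to a weighted sum of
# squared class-mean deviations from the overall mean; a quick stdlib check with two
# toy classes (data values are illustrative only):

```python
X = [0.0, 1.0, 4.0, 5.0]
y = [1, 1, 2, 2]
mean_overall = sum(X) / len(X)  # 2.5
S_B = 0.0
for label in (1, 2):
    Xc = [x for x, lab in zip(X, y) if lab == label]
    mv = sum(Xc) / len(Xc)  # class mean: 0.5 and 4.5
    # n_c * (class mean - overall mean)^2, the 1-D analogue of the outer product
    S_B += len(Xc) * (mv - mean_overall) ** 2

print(S_B)  # 2*(0.5-2.5)**2 + 2*(4.5-2.5)**2 → 16.0
```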
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i]) for i in range(len(eigen_vals))]
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
print('Eigenvalues in descending order:\n')
for eigen_pair in eigen_pairs:
    print(eigen_pair[0])
import matplotlib.pyplot as plt
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center', label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid', label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real, eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
    plt.scatter(X_train_lda[y_train==l, 0], X_train_lda[y_train==l, 1] * (-1), c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.show()
| ch05/py_lda.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.8 64-bit (''base'': conda)'
# language: python
# name: python36864bitbasecondaa1a218bda4144e0b95099118ea02d83a
# ---
from __future__ import print_function
from imutils import perspective
from imutils import contours
import numpy as np
import argparse
import imutils
import cv2
def order_points_old(pts):
    # initialize a list of coordinates that will be ordered
    # such that the first entry in the list is the top-left,
    # the second entry is the top-right, the third is the
    # bottom-right, and the fourth is the bottom-left
    rect = np.zeros((4, 2), dtype="float32")
    # the top-left point will have the smallest sum, whereas
    # the bottom-right point will have the largest sum
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]
    rect[2] = pts[np.argmax(s)]
    # now, compute the difference between the points; the
    # top-right point will have the smallest difference,
    # whereas the bottom-left will have the largest difference
    diff = np.diff(pts, axis=1)
    rect[1] = pts[np.argmin(diff)]
    rect[3] = pts[np.argmax(diff)]
    # return the ordered coordinates
    return rect
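# The sum/difference trick above can be checked on a toy quadrilateral; this
# pure-Python restatement avoids NumPy (note `np.diff(pts, axis=1)` computes y - x):

```python
def order_points(pts):
    '''Order four (x, y) points as top-left, top-right, bottom-right, bottom-left.'''
    s = [x + y for x, y in pts]   # smallest sum → top-left, largest → bottom-right
    d = [y - x for x, y in pts]   # smallest diff → top-right, largest → bottom-left
    tl = pts[s.index(min(s))]
    br = pts[s.index(max(s))]
    tr = pts[d.index(min(d))]
    bl = pts[d.index(max(d))]
    return [tl, tr, br, bl]

print(order_points([(10, 90), (90, 90), (90, 10), (10, 10)]))
# → [(10, 10), (90, 10), (90, 90), (10, 90)]
```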
# +
import cv2
import numpy as np
kernal = np.ones((3, 3), np.uint8)
# background = cv2.imread('background.jpeg', 1)
image = cv2.imread('background1.jpeg', 1)
graytin = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
graytin = cv2.GaussianBlur(graytin, (9, 9), 3)
# blur = cv2.GaussianBlur(graytin, (9, 9), 3)
# dila = cv2.dilate(blur, kernal, iterations=5)
edged = cv2.Canny(graytin, 84, 0)
# cv2.imshow('background', background)
# cv2.imshow('tin', tin)
# cv2.imshow('gray', graytin)
cv2.imshow('canny', edged)
if cv2.waitKey(0) & 0xFF == 27:
    cv2.destroyAllWindows()
# -
# find contours in the edge map
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
# sort the contours from left-to-right and initialize the bounding box
# point colors
(cnts, _) = contours.sort_contours(cnts)
colors = ((0, 0, 255), (240, 0, 159), (255, 0, 0), (255, 255, 0))
boxlist = []
# loop over the contours individually
for (i, c) in enumerate(cnts):
    # if the contour is not sufficiently large, ignore it
    if cv2.contourArea(c) < 100:
        continue
    # compute the rotated bounding box of the contour, then
    # draw the contours
    box = cv2.minAreaRect(c)
    box = cv2.cv.BoxPoints(box) if imutils.is_cv2() else cv2.boxPoints(box)
    box = np.array(box, dtype="int")
    cv2.drawContours(image, [box], -1, (0, 255, 0), 2)
    # show the original coordinates
    print("Object #{}:".format(i + 1))
    boxlist.append(box)
    print(box)
    # order the points in the contour in top-left, top-right, bottom-right,
    # bottom-left order; order_points_old is kept for comparison, then the
    # newer imutils.perspective implementation is always used
    rect = order_points_old(box)
    rect = perspective.order_points(box)
    # show the re-ordered coordinates
    print(rect.astype("int"))
    print("")
    # draw the ordered corner points and the object number
    for ((x, y), color) in zip(rect, colors):
        cv2.circle(image, (int(x), int(y)), 5, color, -1)
    cv2.putText(image, "Object #{}".format(i + 1),
                (int(rect[0][0] - 15), int(rect[0][1] - 15)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.55, (255, 255, 255), 2)
    # show the image
    cv2.imshow("Image", image)
    cv2.waitKey(0)
# + run_control={"marked": false}
import cv2
img=cv2.imread('background1.jpeg',1)
def nothing(x):
    pass

traker = cv2.namedWindow('tracker')
cv2.createTrackbar('low', 'tracker', 0, 255, nothing)
cv2.createTrackbar('upper', 'tracker', 0, 255, nothing)
kernal = np.ones((3, 3), np.uint8)
while True:
    lower = cv2.getTrackbarPos('low', 'tracker')
    upper = cv2.getTrackbarPos('upper', 'tracker')
    blur = cv2.GaussianBlur(img, (9, 9), 3)
    # dila = cv2.dilate(img, kernal, iterations=5)
    canny = cv2.Canny(blur, lower, upper)
    cv2.imshow('canny', canny)
    if cv2.waitKey(1) & 0xFF == 27:
        break
cv2.destroyAllWindows()
# -
| bottle detction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.6.2
# language: julia
# name: julia-0.6
# ---
# # Meet Julia
# ## Variables
# **Documentation**: https://docs.julialang.org/en/stable/manual/variables/
# ## Builtin Types
# ### Integers and Floats
# **Documentation**: https://docs.julialang.org/en/stable/manual/integers-and-floating-point-numbers/
# **API Docs**: https://docs.julialang.org/en/stable/stdlib/numbers/
# ### Complex and Rational Numbers
# **Documentation**: https://docs.julialang.org/en/stable/manual/complex-and-rational-numbers/
# **API Docs**: https://docs.julialang.org/en/stable/stdlib/numbers/
# ### Strings
# **Documentation**: https://docs.julialang.org/en/stable/manual/strings/
# **API Docs**: https://docs.julialang.org/en/stable/stdlib/strings/
# ### Arrays
# **Documentation**: https://docs.julialang.org/en/stable/manual/arrays/
# **API Docs**: https://docs.julialang.org/en/stable/stdlib/arrays/
# ### Dicts
# **API Docs**: https://docs.julialang.org/en/stable/stdlib/collections/#Associative-Collections-1
# ### Sets
# **API Docs**: https://docs.julialang.org/en/stable/stdlib/collections/#Set-Like-Collections-1
# ## Control Flow
# **Documentation**: https://docs.julialang.org/en/stable/manual/control-flow/
# ## Functions
# **Documentation**: https://docs.julialang.org/en/stable/manual/functions/
# ## User-Defined Types
# **Documentation**: https://docs.julialang.org/en/stable/manual/types/
# ## Methods
# **Documentation**: https://docs.julialang.org/en/stable/manual/methods/
# ## Macros
# **Documentation**: https://docs.julialang.org/en/stable/manual/metaprogramming/
# ## Mathematical Operations
# ### Basic Operations
# **Documentation**: https://docs.julialang.org/en/stable/manual/mathematical-operations/
# **API Docs**: https://docs.julialang.org/en/stable/stdlib/math/
# ### Linear Algebra
# **Documentation**: https://docs.julialang.org/en/stable/manual/linear-algebra/
# **API Docs**: https://docs.julialang.org/en/stable/stdlib/linalg/
# ### Signal Processing
# **API Docs**: https://docs.julialang.org/en/stable/stdlib/math/#Signal-Processing-1
# ## Parallel Computing
# **Documentation**: https://docs.julialang.org/en/stable/manual/parallel-computing/
# **API Docs**: https://docs.julialang.org/en/stable/stdlib/parallel/
# ## Development
# ### Packages
# **Documentation**: https://docs.julialang.org/en/stable/manual/packages/
# **API Docs**: https://docs.julialang.org/en/stable/manual/packages/#Package-Development-1
# ### Documentation
# **Documentation**: https://docs.julialang.org/en/stable/manual/documentation/
# ### Profiling
# **Documentation**: https://docs.julialang.org/en/stable/manual/profile/
# **API Docs**: https://docs.julialang.org/en/stable/stdlib/profile/
# ### Testing
# **API Docs**: https://docs.julialang.org/en/stable/stdlib/test/
# ## Plotting
# ### Gadfly.jl
# **Website**: http://gadflyjl.org/stable/
# ### Plots.jl
# **Website**: http://docs.juliaplots.org/latest/
# ### StatsPlots.jl
# **Repository**: https://github.com/JuliaPlots/StatPlots.jl
# ### PyPlot.jl
# **Repository**: https://github.com/JuliaPy/PyPlot.jl
# ## LightGraphs
# **Website**: http://juliagraphs.github.io/LightGraphs.jl/latest/
# ## PyCall
# **Repository**: https://github.com/JuliaPy/PyCall.jl
| Meet Julia.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Modeling and Simulation in Python
#
# Chapter 2
#
# Copyright 2017 <NAME>
#
# License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
# +
# Configure Jupyter so figures appear in the notebook
# %matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
# %config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim library
from modsim import *
# set the random number generator
np.random.seed(7)
# If this cell runs successfully, it produces no output.
# -
# ## Modeling a bikeshare system
# We'll start with a `State` object that represents the number of bikes at each station.
#
# When you display a `State` object, it lists the state variables and their values:
bikeshare = State(olin=10, wellesley=2, babson=0)
# We can access the state variables using dot notation.
bikeshare.olin
bikeshare.wellesley
# **Exercise:** What happens if you spell the name of a state variable wrong? Edit the previous cell, change the spelling of `wellesley`, and run the cell again.
#
# The error message uses the word "attribute", which is another name for what we are calling a state variable.
# **Exercise:** Add a third attribute called `babson` with initial value 0, and display the state of `bikeshare` again.
# ## Updating
#
# We can use the update operators `+=` and `-=` to change state variables.
bikeshare.olin -= 1
# If we display `bikeshare`, we should see the change.
bikeshare
# Of course, if we subtract a bike from `olin`, we should add it to `wellesley`.
bikeshare.wellesley += 1
bikeshare
# ## Functions
#
# We can take the code we've written so far and encapsulate it in a function.
def bike_to_wellesley():
    bikeshare.olin -= 1
    bikeshare.wellesley += 1
# When you define a function, it doesn't run the statements inside the function, yet. When you call the function, it runs the statements inside.
bike_to_wellesley()
bikeshare
#
# One common error is to omit the parentheses, which has the effect of looking up the function, but not calling it.
bike_to_wellesley
# The output indicates that `bike_to_wellesley` is a function defined in a "namespace" called `__main__`, but you don't have to understand what that means.
# **Exercise:** Define a function called `bike_to_olin` that moves a bike from Wellesley to Olin. Call the new function and display `bikeshare` to confirm that it works.
def bike_to_olin():
    bikeshare.olin += 1
    bikeshare.wellesley -= 1
bike_to_olin()
bikeshare
# ## Conditionals
# `modsim.py` provides `flip`, which takes a probability and returns either `True` or `False`, which are special values defined by Python.
#
# The Python function `help` looks up a function and displays its documentation.
help(flip)
# In the following example, the probability is 0.7 or 70%. If you run this cell several times, you should get `True` about 70% of the time and `False` about 30%.
flip(0.7)
# In the following example, we use `flip` as part of an if statement. If the result from `flip` is `True`, we print `heads`; otherwise we do nothing.
if flip(0.7):
    print('heads')
# With an else clause, we can print heads or tails depending on whether `flip` returns `True` or `False`.
if flip(0.7):
    print('heads')
else:
    print('tails')
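# `flip` from the modsim library can be sketched with the standard library; over many
# trials the proportion of `True` results approaches the given probability:

```python
import random

def flip(p=0.5):
    '''Return True with probability p, like modsim's flip.'''
    return random.random() < p

random.seed(7)  # make the estimate reproducible
p_hat = sum(flip(0.7) for _ in range(10000)) / 10000
print(p_hat)  # close to 0.7
```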
# ## Step
#
# Now let's get back to the bikeshare state. Again let's start with a new `State` object.
bikeshare = State(olin=10, wellesley=2)
# Suppose that in any given minute, there is a 50% chance that a student picks up a bike at Olin and rides to Wellesley. We can simulate that like this.
# +
if flip(0.5):
    bike_to_wellesley()
    print('Moving a bike to Wellesley')
bikeshare
# -
# And maybe at the same time, there is also a 40% chance that a student at Wellesley rides to Olin.
# +
if flip(0.4):
    bike_to_olin()
    print('Moving a bike to Olin')
bikeshare
# -
# We can wrap that code in a function called `step` that simulates one time step. In any given minute, a student might ride from Olin to Wellesley, from Wellesley to Olin, or both, or neither, depending on the results of `flip`.
def step():
    if flip(0.5):
        bike_to_wellesley()
        print('Moving a bike to Wellesley')
    if flip(0.4):
        bike_to_olin()
        print('Moving a bike to Olin')
# Since this function takes no parameters, we call it like this:
step()
bikeshare
# ## Parameters
#
# As defined in the previous section, `step` is not as useful as it could be, because the probabilities `0.5` and `0.4` are "hard coded".
#
# It would be better to generalize this function so it takes the probabilities `p1` and `p2` as parameters:
def step(p1, p2):
    if flip(p1):
        bike_to_wellesley()
        print('Moving a bike to Wellesley')
        print(p1)
    if flip(p2):
        bike_to_olin()
        print('Moving a bike to Olin')
        print(p2)
# Now we can call it like this:
step(0.5, 0.4)
bikeshare
# **Exercise:** At the beginning of `step`, add a print statement that displays the values of `p1` and `p2`. Call it again with values `0.3`, and `0.2`, and confirm that the values of the parameters are what you expect.
step(0.3, 0.2)
bikeshare
# ## For loop
# Before we go on, I'll redefine `step` without the print statements.
def step(p1, p2):
    if flip(p1):
        bike_to_wellesley()
    if flip(p2):
        bike_to_olin()
# And let's start again with a new `State` object:
bikeshare = State(olin=10, wellesley=2)
# We can use a `for` loop to move 4 bikes from Olin to Wellesley.
# +
for i in range(4):
    bike_to_wellesley()
bikeshare
# -
# Or we can simulate 4 random time steps.
# +
for i in range(4):
    step(0.3, 0.2)
bikeshare
# -
# If each step corresponds to a minute, we can simulate an entire hour like this.
# +
for i in range(60):
    step(0.3, 0.2)
bikeshare
# -
# After 60 minutes, you might see that the number of bikes at Olin is negative. We'll fix that problem in the next notebook.
#
# But first, we want to plot the results.
# ## TimeSeries
#
# `modsim.py` provides an object called a `TimeSeries` that can contain a sequence of values changing over time.
#
# We can create a new, empty `TimeSeries` like this:
results = TimeSeries()
# And we can add a value to the `TimeSeries` like this:
results[0] = bikeshare.olin
results
# The `0` in brackets is an `index` that indicates that this value is associated with time step 0.
#
# Now we'll use a for loop to save the results of the simulation. I'll start one more time with a new `State` object.
bikeshare = State(olin=10, wellesley=2)
# Here's a for loop that runs 10 steps and stores the results.
for i in range(10):
    step(0.3, 0.2)
    results[i] = bikeshare.olin
# Now we can display the results.
results
# A `TimeSeries` is a specialized version of a Pandas `Series`, so we can use any of the functions provided by `Series`, including several that compute summary statistics:
results.mean()
results.describe()
# You can read the documentation of `Series` [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
# ## Plotting
#
# We can also plot the results like this.
# +
plot(results, label='Olin')
decorate(title='Olin-Wellesley Bikeshare',
         xlabel='Time step (min)',
         ylabel='Number of bikes')
savefig('figs/chap01-fig01.pdf')
# -
# `decorate`, which is defined in the `modsim` library, adds a title and labels the axes.
help(decorate)
# `savefig()` saves a figure in a file.
help(savefig)
# The suffix of the filename indicates the format you want. This example saves the current figure in a PDF file named `chap01-fig01.pdf`.
# **Exercise:** Wrap the code from this section in a function named `run_simulation` that takes three parameters, named `p1`, `p2`, and `num_steps`.
#
# It should:
#
# 1. Create a `TimeSeries` object to hold the results.
# 2. Use a for loop to run `step` the number of times specified by `num_steps`, passing along the specified values of `p1` and `p2`.
# 3. After each step, it should save the number of bikes at Olin in the `TimeSeries`.
# 4. After the for loop, it should plot the results and
# 5. Decorate the axes.
#
# To test your function:
#
# 1. Create a `State` object with the initial state of the system.
# 2. Call `run_simulation` with appropriate parameters.
# 3. Save the resulting figure.
#
# Optional:
#
# 1. Extend your solution so it creates two `TimeSeries` objects, keeps track of the number of bikes at Olin *and* at Wellesley, and plots both series at the end.
def run_simulation(p1, p2, num_steps):
    def step():
        if flip(p1):
            bike_to_wellesley()
        if flip(p2):
            bike_to_olin()

    results_olin = TimeSeries()
    results_wellesley = TimeSeries()
    # run each step once and record both stations after every step;
    # two separate loops would simulate 2 * num_steps minutes
    for i in range(num_steps):
        step()
        results_olin[i] = bikeshare.olin
        results_wellesley[i] = bikeshare.wellesley

    plot(results_olin, label='Olin')
    plot(results_wellesley, label='Wellesley')
    decorate(title='Number of Bikes at Wellesley and Olin Over Time',
             xlabel='Time (min)',
             ylabel='Number of Bikes')
    savefig('bikeshare_fig1.pdf')

bikeshare = State(olin=10, wellesley=2)
run_simulation(0.5, 0.5, 60)
# ## Opening the hood
#
# The functions in `modsim.py` are built on top of several widely-used Python libraries, especially NumPy, SciPy, and Pandas. These libraries are powerful but can be hard to use. The intent of `modsim.py` is to give you the power of these libraries while making it easy to get started.
#
# In the future, you might want to use these libraries directly, rather than using `modsim.py`. So we will pause occasionally to open the hood and let you see how `modsim.py` works.
#
# You don't need to know anything in these sections, so if you are already feeling overwhelmed, you might want to skip them. But if you are curious, read on.
# ### Pandas
#
# This chapter introduces two objects, `State` and `TimeSeries`. Both are based on the `Series` object defined by Pandas, which is a library primarily used for data science.
#
# You can read the documentation of the `Series` object [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html)
#
# The primary differences between `TimeSeries` and `Series` are:
#
# 1. I made it easier to create a new, empty `Series` while avoiding a [confusing inconsistency](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
#
# 2. I provide a function so the `Series` looks good when displayed in Jupyter.
#
# 3. I provide a function called `set` that we'll use later.
#
# `State` has all of those capabilities; in addition, it provides an easier way to initialize state variables, and it provides functions called `T` and `dt`, which will help us avoid a confusing error later.
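# The inconsistency around creating an empty `Series` can be seen with plain pandas (a sketch of the behavior `TimeSeries` smooths over; the exact warning depends on your pandas version):

```python
import pandas as pd

# An empty Series needs an explicit dtype in recent pandas versions;
# otherwise pandas warns about the default dtype.
ts = pd.Series(dtype="float64")

# Values can then be added by label, one time step at a time.
ts[0] = 10.0
ts[1] = 9.0
print(len(ts))
```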
# ### Pyplot
#
# The `plot` function in `modsim.py` is based on the `plot` function in Pyplot, which is part of Matplotlib. You can read the documentation of `plot` [here](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html).
#
# `decorate` provides a convenient way to call the `pyplot` functions `title`, `xlabel`, `ylabel`, and `legend`. It also avoids an annoying warning message if you try to make a legend when you don't have any labelled lines.
help(decorate)
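# A hypothetical, simplified version of what `decorate` does (the function name and details here are a sketch, not modsim's actual implementation):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

def decorate_sketch(title=None, xlabel=None, ylabel=None):
    # Forward the keyword arguments to the corresponding pyplot calls.
    if title:
        plt.title(title)
    if xlabel:
        plt.xlabel(xlabel)
    if ylabel:
        plt.ylabel(ylabel)
    # Only draw a legend if at least one line has a label, which
    # avoids Matplotlib's "no labeled objects" warning.
    handles, labels = plt.gca().get_legend_handles_labels()
    if labels:
        plt.legend()

plt.plot([0, 1, 2], [10, 9, 9], label='Olin')
decorate_sketch(title='Bikeshare', xlabel='Time step (min)', ylabel='Bikes')
```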
# ### NumPy
#
# The `flip` function in `modsim.py` uses NumPy's `random` function to generate a random number between 0 and 1.
#
# You can get the source code for `flip` by running the following cell.
# %psource flip
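# If you just want the idea without running the magic, here is a minimal sketch of `flip`, assuming it follows the description above:

```python
import numpy as np

def flip(p=0.5):
    """Return True with probability p (a sketch of modsim's flip)."""
    return np.random.random() < p

# Edge cases: probability 0 never succeeds, probability 1 always does,
# because np.random.random() draws from the half-open interval [0, 1).
print(flip(0.0))  # False
print(flip(1.0))  # True
```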
| code/chap02-Mine.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## A notebook containing training logic for a robot to lift a cube using the MPO algorithm
# +
# Include all the imports here
from typing import Dict, Sequence
from absl import app
from absl import flags
import acme
from acme import specs
from acme import types
from acme import wrappers
from acme.agents.tf import dmpo
from acme.tf import networks
from acme.tf import utils as tf2_utils
import numpy as np
import sonnet as snt
# -
import robosuite as suite
from robosuite.wrappers import GymWrapper
from robosuite.controllers import load_controller_config
# +
# Prepare the environment
env_config = {
"control_freq": 20,
"env_name": "Lift",
"hard_reset": False,
"horizon": 500,
"ignore_done": False,
"reward_scale": 1.0,
"camera_names": "frontview",
"robots": [
"Panda"
]
}
controller_config = load_controller_config(default_controller="OSC_POSITION")
keys = ["object-state"]
for idx in range(1):
keys.append(f"robot{idx}_proprio-state")
# -
def make_environment(env_config, controller_config, keys):
env_suite = suite.make(**env_config,
has_renderer=False,
has_offscreen_renderer=False,
use_camera_obs=False,
reward_shaping=True,
controller_configs=controller_config,
)
env = GymWrapper(env_suite, keys=keys)
env = wrappers.gym_wrapper.GymWrapper(env)
env = wrappers.SinglePrecisionWrapper(env)
spec = specs.make_environment_spec(env)
return env, spec
env, spec = make_environment(env_config, controller_config, keys)
# +
# Prepare the agent
def make_networks(
action_spec: specs.BoundedArray,
policy_layer_sizes: Sequence[int] = (256, 256, 256),
critic_layer_sizes: Sequence[int] = (512, 512, 256),
vmin: float = -150.,
vmax: float = 150.,
num_atoms: int = 51,
) -> Dict[str, types.TensorTransformation]:
"""Creates networks used by the agent."""
# Get total number of action dimensions from action spec.
num_dimensions = np.prod(action_spec.shape, dtype=int)
# Create the shared observation network; here simply a state-less operation.
observation_network = tf2_utils.batch_concat
# Create the policy network.
policy_network = snt.Sequential([
networks.LayerNormMLP(policy_layer_sizes),
networks.MultivariateNormalDiagHead(num_dimensions)
])
    # The multiplexer concatenates the observations and actions.
multiplexer = networks.CriticMultiplexer(
critic_network=networks.LayerNormMLP(critic_layer_sizes),
action_network=networks.ClipToSpec(action_spec))
# Create the critic network.
critic_network = snt.Sequential([
multiplexer,
networks.DiscreteValuedHead(vmin, vmax, num_atoms),
])
return {
'policy': policy_network,
'critic': critic_network,
'observation': observation_network,
}
# -
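# As a side note, the `np.prod` call in `make_networks` collapses the action spec's shape into a single flat action dimensionality. A quick illustration (the shapes below are made up, not taken from the Lift environment):

```python
import numpy as np

# np.prod over a shape tuple gives the flattened number of dimensions.
# A 1-D action space of 7 joints:
print(np.prod((7,), dtype=int))    # 7
# A hypothetical (2, 3)-shaped action space flattens to 6 dimensions:
print(np.prod((2, 3), dtype=int))  # 6
```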
agent_networks = make_networks(spec.actions)
# construct the agent
agent = dmpo.DistributionalMPO(
environment_spec=spec,
policy_network=agent_networks['policy'],
critic_network=agent_networks['critic'],
observation_network=agent_networks['observation'], # pytype: disable=wrong-arg-types
)
# Start the training process
loop = acme.EnvironmentLoop(env, agent)
num_episodes = 100
loop.run(num_episodes=num_episodes)
| examples/lift_mpo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ___
#
# <a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a>
# ___
# <center><em>Copyright <NAME></em></center>
# <center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>
# # Series
# The first main data type we will learn about for pandas is the Series data type. Let's import Pandas and explore the Series object.
#
# A Series is very similar to a NumPy array (in fact it is built on top of the NumPy array object). What differentiates the NumPy array from a Series, is that a Series can have axis labels, meaning it can be indexed by a label, instead of just a number location. It also doesn't need to hold numeric data, it can hold any arbitrary Python Object.
#
# Let's explore this concept through some examples:
import numpy as np
import pandas as pd
# ## Creating a Series
#
# You can convert a list, NumPy array, or dictionary to a Series:
labels = ['a','b','c']
my_list = [10,20,30]
arr = np.array([10,20,30])
d = {'a':10,'b':20,'c':30}
# ### Using Lists
pd.Series(data=my_list)
pd.Series(data=my_list,index=labels)
pd.Series(my_list,labels)
# ### Using NumPy Arrays
pd.Series(arr)
pd.Series(arr,labels)
# ### Using Dictionaries
pd.Series(d)
# ### Data in a Series
#
# A pandas Series can hold a variety of object types:
pd.Series(data=labels)
# Even functions (although unlikely that you will use this)
pd.Series([sum,print,len])
# ## Using an Index
#
# The key to using a Series is understanding its index. Pandas makes use of these index names or numbers to allow for fast lookups of information (it works like a hash table or dictionary).
#
# Let's see some examples of how to grab information from a Series. Let us create two Series, `sales_Q1` and `sales_Q2`:
sales_Q1 = pd.Series(data=[250,450,200,150],index = ['USA', 'China','India', 'Brazil'])
sales_Q1
sales_Q2 = pd.Series([260,500,210,100],index = ['USA', 'China','India', 'Japan'])
sales_Q2
sales_Q1['USA']
# +
# KEY ERROR!
# sales_Q1['Russia'] # wrong name!
# sales_Q1['USA '] # wrong string spacing!
# -
# Operations are then also done based off of index:
# We'll explore how to deal with this later on!
sales_Q1 + sales_Q2
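# Labels present in only one of the two Series come back as `NaN` under plain `+`. A self-contained sketch of the alignment, plus the `add` method's `fill_value` workaround:

```python
import pandas as pd

s1 = pd.Series([250, 450], index=['USA', 'China'])
s2 = pd.Series([260, 100], index=['USA', 'Japan'])

# Labels present in only one Series become NaN under plain `+`.
print(s1 + s2)

# add(..., fill_value=0) treats missing labels as zero instead.
print(s1.add(s2, fill_value=0))
```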
# Let's stop here for now and move on to DataFrames, which will expand on the concept of Series!
# # Great Job!
| FINAL-TF2-FILES/TF_2_Notebooks_and_Data/01-Pandas-Crash-Course/.ipynb_checkpoints/01-Series-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.1 64-bit (''dacy'': venv)'
# name: python3
# ---
# # Wrapping a fine-tuned Transformer
#
# ---
# This tutorial will briefly take you through how to wrap an already trained transformer for text classification in a SpaCy pipeline. We will use DaNLP's BertTone as the example model. Do note, however, that this approach also works with models taken directly from Huggingface's model hub.
#
# Before we start let us make sure everything is installed:
#
# !pip install dacy[all]
# ## Wrapping a Huggingface model
# ---
# Wrapping a Huggingface model can be done in a single line of code:
#
# +
import spacy
import dacy
from dacy.subclasses import add_huggingface_model
nlp = spacy.blank("da") # replace with your desired pipeline
nlp = add_huggingface_model(nlp,
download_name="pin/senda", # the model name on the huggingface hub
                            doc_extension="senda_trf_data",  # the doc extension for transformer data, e.g. including wordpieces
model_name="senda", # the name of the model in the pipeline
category="polarity", # the category type it predicts
labels=["negative", "neutral", "positive"], # possible outcome labels
)
# -
# ## Step by step
# ---
# However, one might want to look a bit more into it, so let us go through it step by step. First of all, let us start with the setup. We will use a downloaded Huggingface transformer from DaNLP.
#
# ### Setup
# Let's start off by downloading the model and be sure that it can be loaded in using Huggingface transformers:
# +
import os
from danlp.download import download_model as danlp_download
from danlp.download import _unzip_process_func
from danlp.download import DEFAULT_CACHE_DIR as DANLP_DIR
from transformers import AutoModelForSequenceClassification, BertTokenizer
# downloading model and setting a path to its location
path_sub = danlp_download(
"bert.polarity", DANLP_DIR, process_func=_unzip_process_func, verbose=True
)
path_sub = os.path.join(path_sub, "bert.pol.v0.0.1")
# loading it in with transformers
berttone = AutoModelForSequenceClassification.from_pretrained(path_sub, num_labels=3)
# -
# Assuming this works, you are ready to move on. However, if you were doing this with your own model, I would also test that the forward pass works as intended.
#
# ## Wrapping the Model
#
# ---
#
# Now we will start wrapping the model. DaCy provides good utility functions for doing precisely this without making more changes than necessary to the transformer class from SpaCy. This should allow you to use the extensive documentation by SpaCy while working with this code.
#
# This utilizes SpaCy's config system, which might take a bit of getting used to initially, but it is quite worth the effort. For now I will walk you through it:
#
# +
from dacy.subclasses import ClassificationTransformer, install_classification_extensions
labels=["positive", "neutral", "negative"]
doc_extension = "berttone_pol_trf_data"
category = "polarity"
config = {
"doc_extension_attribute": doc_extension,
"model": {
"@architectures": "dacy.ClassificationTransformerModel.v1",
"name": path_sub,
"num_labels": len(labels),
},
}
# add the relevant extensions to the doc
install_classification_extensions(
category=category, labels=labels, doc_extension=doc_extension, force=True
)
# -
# The config file is an extension of the `Transformers` config in SpaCy, but you will note a few changes:
#
# 1) `doc_extension_attribute`: This is to make sure that the doc extension can be customized. The doc extension is how you fetch data relevant to your model. The SpaCy transformer uses the `trf_data`, but we don't want to overwrite this in case we are using multiple transformers.
# 2) `num_labels`: The number of labels. This is an argument passed forward when loading the model. Without this, the Huggingface transformers package will raise an error (at least for cases where `num_labels` isn't 2)
#
# `name` simply specifies the name of the model. You could potentially swap in any sequence-classification model from Huggingface's model hub. Lastly, `install_classification_extensions` adds the getter functions for the model. Here, for instance, it adds `doc._.polarity` for extracting the model's predicted label, as well as `doc._.polarity_prop` for extracting the polarity probabilities for each class.
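# To illustrate the getter pattern, here is a hypothetical stdlib-only sketch (not DaCy's actual implementation): the label getter essentially softmaxes the model's logits and picks the argmax, while the probability getter returns the probabilities themselves.

```python
import math

labels = ["positive", "neutral", "negative"]

def softmax(logits):
    # Convert raw model outputs into probabilities that sum to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def polarity_getter(logits):
    # Hypothetical analogue of doc._.polarity: the most probable label.
    probs = softmax(logits)
    return labels[probs.index(max(probs))]

print(polarity_getter([2.0, 0.5, -1.0]))  # → positive
```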
#
# ## Adding it to the NLP pipeline
# ---
#
# Now it can simply be added to the pipeline using `add_pipe`:
# +
import spacy
nlp = spacy.blank("da") # dummy nlp
clf_transformer = nlp.add_pipe(
"classification_transformer", name="berttone", config=config
)
clf_transformer.model.initialize()
# -
# ## Final Test
#
# ---
#
# We can then finish off with a final test to see if everything works as intended:
# +
texts = ["Analysen viser, at økonomien bliver forfærdelig dårlig",
"Jeg tror alligvel, det bliver godt"]
docs = nlp.pipe(texts)
for doc in docs:
print(doc._.polarity)
print(doc._.polarity_prop)
# +
# we can also examine the wordpieces used (and see the entire TransformersData)
doc._.berttone_pol_trf_data
# -
| tutorials/dacy-wrapping-a-classification-transformer.ipynb |