# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="H7gQFbUxOQtb"
# # Fundamentals of NLP (Chapter 1): Tokenization, Lemmatization, Stemming, and Sentence Segmentation
#
#
#
#
# Natural language processing (NLP) has made substantial advances in the past few years due to the success of [modern techniques](https://nlpoverview.com/) that are based on [deep learning](https://en.wikipedia.org/wiki/Deep_learning). With the rise of the popularity of NLP and the availability of different forms of large-scale data, it is now even more imperative to understand the inner workings of NLP techniques and concepts, from first principles, as they find their way into real-world usage and applications that affect society at large. Building intuitions and having a solid grasp of concepts are both important for coming up with innovative techniques, improving research, and building safe, human-centered AI and NLP technologies.
#
# In this first chapter, which is part of a series called **Fundamentals of NLP**, we will learn about some of the most important **basic concepts** that power NLP techniques used for research and for building real-world applications. Some of these techniques include *lemmatization*, *stemming*, *tokenization*, and *sentence segmentation*. These are all important techniques for training efficient and effective NLP models. We also provide some exercises for you to keep practicing and exploring ideas.
#
#
# In every chapter, we will introduce the theoretical aspect and motivation of each concept covered. Then we will obtain hands-on experience by using bootstrap methods, industry-standard tools, and other open-source libraries to implement the different techniques. Along the way, we will also cover best practices, share important references, point out common mistakes to avoid when training and building NLP models, and discuss what lies ahead.
#
# ---
# + [markdown] id="Xy7qsKcOFaH2"
# ## Tokenization
#
# 
#
# With any typical NLP task, one of the first steps is to tokenize your pieces of text into individual words/tokens (the process demonstrated in the figure above), the result of which is used to create so-called vocabularies that will be used in the language model you plan to build. This is actually one of the techniques that we will use the most throughout this series, but here we stick to the basics.
#
# Below I am showing you an example of a simple tokenizer that doesn't follow any standards. All it does is extract tokens based on a whitespace separator.
#
# Try running the following code blocks.
# + id="Fn7xM8HKqAtf"
## required libraries that need to be installed
# %%capture
# !pip install -U spacy
# !pip install -U spacy-lookups-data
# !python -m spacy download en_core_web_sm
# + id="vUhMRrhFGfqJ" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="b390df54-00c9-482b-efd3-d69d1b325318"
## tokenizing a piece of text
doc = "I love coding and writing"
for i, w in enumerate(doc.split(" ")):
    print("Token " + str(i) + ": " + w)
# + [markdown] id="Med-k0CeG8Ke"
# All the code does is separate the sentence into individual tokens. The above simple block of code works well on the text I have provided, but typically text is a lot noisier and more complex than the example I used. For instance, if I used the word "so-called", is that one word or two words? For such scenarios, you may need more advanced approaches for tokenization. You can consider stripping away the "-" and splitting into two tokens, or just combining into one token, but this all depends on the problem and domain you are working on.
#
# Another problem with our simple algorithm is that it cannot deal with extra whitespaces in the text. In addition, how do we deal with cities like "New York" and "San Francisco"?
#
# + [markdown] id="z0qxNrl191NS"
# ---
# **Exercise 1**: Copy the code from above and add extra whitespaces to the string value assigned to the `doc` variable and identify the issue with the code. Then try to fix the issue. Hint: Use `text.strip()` to fix the problem.
# + id="bx22yqPJQCQc"
### ENTER CODE HERE
###
# + [markdown] id="QpYLDmLu9379"
# ---
# + [markdown] id="kSQwXKwrQAp0"
# Tokenization can also come in different forms. For instance, more recently a lot of state-of-the-art NLP models such as [BERT](https://arxiv.org/pdf/1810.04805.pdf) make use of `subword` tokens in which frequent combinations of characters also form part of the vocabulary. This helps to deal with the so-called out of vocabulary (OOV) problem. We will discuss this in upcoming chapters, but if you are interested in reading more about this now, check this [paper](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37842.pdf).
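# To make the idea concrete, here is a tiny greedy longest-match subword tokenizer in the spirit of WordPiece (the toy vocabulary and the `subword_tokenize` helper are made up for this sketch; real models learn their vocabularies from data):

```python
# A minimal sketch of greedy longest-match subword tokenization.
# The vocabulary here is hand-picked for the demo; "##" marks a
# piece that continues a word, as in BERT's WordPiece convention.
vocab = {"token", "tok", "##en", "##ization", "##ize", "un", "##known", "[UNK]"}

def subword_tokenize(word, vocab, max_len=20):
    pieces, start = [], 0
    while start < len(word):
        end = min(len(word), start + max_len)
        piece = None
        while end > start:
            cand = word[start:end]
            if start > 0:
                cand = "##" + cand  # continuation pieces carry the "##" prefix
            if cand in vocab:
                piece = cand
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]  # no piece matches: fall back to the unknown token
        pieces.append(piece)
        start = end
    return pieces

print(subword_tokenize("tokenization", vocab))  # ['token', '##ization']
print(subword_tokenize("unknown", vocab))       # ['un', '##known']
```

# Note how a word the vocabulary has never seen whole can still be covered by frequent pieces, instead of becoming an out-of-vocabulary token.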
#
# To demonstrate how you can achieve more reliable tokenization, we are going to use [spaCy](https://spacy.io/), which is an impressive and robust Python library for natural language processing. In particular, we are going to use the built-in tokenizer found [here](https://spacy.io/usage/linguistic-features#sbd-custom).
#
# Run the code block below.
# + id="Cpinv_FjoyVx" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="ed70d690-7a25-4076-fa1c-d483162b641c"
## import the libraries
import spacy
## load the language model
nlp = spacy.load("en_core_web_sm")
## tokenization
doc = nlp("This is the so-called lemmatization")
for token in doc:
    print(token.text)
# + [markdown] id="Zl6JG5yirhn0"
# All the code does is tokenize the text based on a pre-built language model.
#
# Try putting different running text into the `nlp()` part of the code above. The tokenizer is quite robust, and it includes a series of built-in rules that deal with exceptions and special cases such as tokens that contain punctuation like "`", ".", "-", etc. You can even add your own rules; find out how [here](https://spacy.io/usage/linguistic-features#special-cases).
#
# In a later chapter of the series, we will do a deep dive on tokenization and the different tools that exist out there that can simplify and speed up the process of tokenization to build vocabularies. Some of the tools we will explore are the [Keras Tokenizer API](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer) and [Hugging Face Tokenizer](https://github.com/huggingface/tokenizers).
#
# ---
# + [markdown] id="Z-F16mbBkVXF"
# ## Lemmatization
#
# 
#
# [Lemmatization](https://en.wikipedia.org/wiki/Lemmatisation) is the process where we take individual tokens from a sentence and we try to reduce them to their *base* form. The process that makes this possible is having a vocabulary and performing morphological analysis to remove inflectional endings. The output of the lemmatization process (as shown in the figure above) is the *lemma* or the base form of the word. For instance, a lemmatization process reduces the inflections, "am", "are", and "is", to the base form, "be". Take a look at the figure above for a full example and try to understand what it's doing.
#
# Lemmatization is helpful for normalizing text for text classification tasks or search engines, and a variety of other NLP tasks such as [sentiment classification](https://en.wikipedia.org/wiki/Sentiment_analysis). It is particularly important when dealing with complex languages like Arabic and Spanish.
#
# To show how you can achieve lemmatization and how it works, we are going to use [spaCy](https://spacy.io/) again. Using the spaCy [Lemmatizer](https://spacy.io/api/lemmatizer#_title) class, we are going to convert a few words into their lemmas.
#
# Below I show an example of how to lemmatize a sentence using spaCy. Try to run the block of code below and inspect the results.
# + id="i5QgWANL3JbD" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="601bd079-db04-4897-8431-fad4630d358f"
## import the libraries
from spacy.lemmatizer import Lemmatizer
from spacy.lookups import Lookups
## lemmatization
doc = nlp(u'I love coding and writing')
for word in doc:
    print(word.text, "=>", word.lemma_)
# + [markdown] id="lgDcWgMvOFqA"
# The results above look as expected. The only lemma that looks off is the `-PRON-` returned for the "I" token. According to the spaCy documentation, "*This is in fact expected behavior and not a bug. Unlike verbs and common nouns, there’s no clear base form of a personal pronoun. Should the lemma of “me” be “I”, or should we normalize person as well, giving “it” — or maybe “he”? spaCy’s solution is to introduce a novel symbol, -PRON-, which is used as the lemma for all personal pronouns.*"
#
# Check out more about this in the [spaCy documentation](https://spacy.io/api/annotation#lemmatization).
# + [markdown] id="Zc6wkiW-ANT6"
# ---
# + [markdown] id="mUB3wRFhkczV"
# **Exercise 2:** Try the code above with different sentences and see if you get any unexpected results. Also, try adding punctuations and extra whitespaces which are more common in natural language. What happens?
# + id="cnfPOGgYkr3h"
### ENTER CODE HERE
###
# + [markdown] id="ALKZxh54APho"
# ---
# + [markdown] id="cOdA8GxMta7N"
# We can also create our own custom lemmatizer as shown below (*code adapted directly from the spaCy website*):
#
#
# + id="xwtdub8er-sU" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="f36994b8-aba0-4411-b010-f192bb86b2fd"
## lookup tables
lookups = Lookups()
lookups.add_table("lemma_rules", {"noun": [["s", ""]]})
lemmatizer = Lemmatizer(lookups)
words_to_lemmatize = ["cats", "brings", "sings"]
for w in words_to_lemmatize:
    lemma = lemmatizer(w, "NOUN")
    print(lemma)
# + [markdown] id="NNYYuCC3GqPG"
# In the example code above, we added one *lemma rule*, which aims to identify plural nouns and remove the plurality, i.e. remove the "s". There are different types of rules you can add here. I encourage you to head over to the [spaCy documentation](https://spacy.io/api/lemmatizer) to learn a bit more.
# + [markdown] id="ZzL2K-sU-e3M"
# ---
# + [markdown] id="dcaLqxPX5CJa"
# ## Stemming
#
# 
#
# Stemming is just a simpler version of lemmatization where we are interested in stripping the *suffix* at the end of the word. When stemming, we are interested in reducing the *inflected* or *derived* word to its base form. Take a look at the figure above to get some intuition about the process.
#
# Both the stemming and the lemmatization processes involve [*morphological analysis*](https://en.wikipedia.org/wiki/Morphology_(linguistics)) where the stems and affixes (called the *morphemes*) are extracted and used to reduce inflections to their base form. For instance, the word *cats* has two morphemes, *cat* and *s*, the *cat* being the stem and the *s* being the affix representing plurality.
#
# spaCy doesn't support stemming so for this part we are going to use [NLTK](https://www.nltk.org/), which is another fantastic Python NLP library.
#
# The simple example below demonstrates how you can stem words in a piece of text. Go ahead and run the code to see what happens.
# + id="0lVd74BE5BXK" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="59cff187-333c-4de0-d4e9-f40a5f8b89da"
from nltk.stem.snowball import SnowballStemmer
stemmer = SnowballStemmer(language='english')
doc = 'I prefer not to argue'
for token in doc.split(" "):
    print(token, '=>', stemmer.stem(token))
# + [markdown] id="pdbxwdMw8AAD"
# Notice how the stemmed version of the word "argue" is "argu". That's because we can have derived words like "argument", "arguing", and "argued".
# + [markdown] id="RBG0l7CsBhAz"
# ---
# + [markdown] id="Fa5xGWDVBild"
# **Exercise 3:** Try to use different sentences in the code above and observe the effect of the stemmer. By the way, there are other stemmers such as the Porter stemmer in the NLTK library. Each stemmer behaves differently so the output may vary. Feel free to try the [Porter stemmer](https://www.nltk.org/howto/stem.html) from the NLTK library and inspect the output of the different stemmers.
# + id="Vow0MVZxmQQq"
### ENTER CODE HERE
###
# + [markdown] id="zIqegtJUjJeL"
# ---
# + [markdown] id="KjOkmpOn9QGL"
# ## Sentence Segmentation
#
# 
#
# When dealing with text, it is common that we need to break up the text into its individual sentences. That is what is known as sentence segmentation: the process of obtaining the individual sentences from a text corpus. The resulting segments can then be analyzed individually with the techniques that we previously learned.
#
# In the spaCy library, we have the choice of using a built-in sentence segmenter (trained on statistical models) or building our own rule-based method. In fact, we will cover a few examples to demonstrate the difficulty of this problem.
# + [markdown] id="za0nOPqPAlph"
# Below I created a naive implementation of a sentence segmentation algorithm without using any kind of special library. You can see that my code grows in complexity (bugs included) as I start to consider more rules. This sort of bootstrapped or rule-based approach is sometimes your only option, depending on the language you are working with or the availability of linguistic resources.
#
# Run the code below to apply a simple algorithm for sentence segmentation.
# + id="sJc-bB8E9PVg" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b23fa1fc-30ce-45d0-acb1-781fb761a438"
## using a simple rule-based segmenter with native python code
text = "I love coding and programming. I also love sleeping!"
current_position = 0
cursor = 0
sentences = []
for c in text:
    if c == "." or c == "!":
        sentences.append(text[current_position:cursor+1])
        current_position = cursor + 2
    cursor += 1
print(sentences)
# + [markdown] id="dNyddCAMmnv6"
# Our sentence segmenter only segments sentences when it meets a sentence boundary, which in this case is either a "." or a "!". It's not the cleanest of code, but it shows how difficult the task can get as we are presented with richer text that includes more diverse special characters. One problem with my code is that I am not able to differentiate between abbreviations like "Dr." and numbers like "0.4". You may be able to create your own complex regular expression (we will get into this in the second chapter) to deal with these special cases, but it still requires a lot of work and debugging. Luckily for us, there are libraries like spaCy and NLTK which help with this sort of preprocessing task.
#
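# Before reaching for a library, we can sketch what such a regular-expression approach might look like (the `naive_sentences` helper and its abbreviation list are hypothetical, just for illustration):

```python
import re

# A rough regex-based splitter (a sketch, not production-ready): split on
# ., ! or ? followed by whitespace, unless the dot sits inside a number
# (negative lookbehind for a digit) or follows a known abbreviation.
ABBREVIATIONS = {"Dr", "Mr", "Mrs", "Ms", "etc"}

def naive_sentences(text):
    # the capture group keeps the punctuation; (?<!\d) skips dots in "0.4"
    parts = re.split(r'(?<!\d)([.!?])\s+', text)
    sentences, buf = [], ""
    for chunk in parts:
        if chunk in {".", "!", "?"}:
            buf += chunk
            last_word = buf.rstrip(".!?").split()[-1]
            if last_word in ABBREVIATIONS:
                buf += " "   # abbreviation: the sentence continues
            else:
                sentences.append(buf)
                buf = ""
        else:
            buf += chunk
    if buf.strip():
        sentences.append(buf.strip())
    return sentences

print(naive_sentences("Dr. Smith scored 0.4 points. Impressive! Right?"))
```

# Even this version only patches two special cases; every new edge case (quotes, ellipses, decimal lists) needs another rule, which is exactly why trained segmenters are attractive.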
# Let's try the sentence segmentation provided by spaCy. Run the code below and inspect the results.
# + id="3_M1vypFBj8Y" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="934c5fda-776a-4634-e1cc-3430cbf8990e"
doc = nlp("I love coding and programming. I also love sleeping!")
for sent in doc.sents:
    print(sent.text)
# + [markdown] id="WWcJ6EsVEGQU"
# Here is a [link](https://spacy.io/usage/linguistic-features#sbd-custom) showing how you can create your own rule-based strategy for sentence segmentation using spaCy. This is particularly useful if you are working with domain-specific text which is full of noisy information and is not as standardized as text found on a factual Wiki page or news website.
# + [markdown] id="gImYrbxqHtGR"
# ---
# + [markdown] id="k-lomu0YHvBv"
# **Exercise 4:** For practice, try to create your own sentence segmentation algorithm using spaCy (try this [link](https://spacy.io/usage/linguistic-features#sbd-custom) for help and ideas). At this point, I encourage you to look at the documentation, which is a huge part of learning in depth about all the concepts we will cover in this series. Research is a huge part of the learning process.
# + id="_Wys2htLnZYC"
### ENTER CODE HERE
###
# + [markdown] id="fTVD0ls4HzVu"
# ---
# + [markdown] id="lnboskbe96z4"
# ## How to use with Machine Learning?
#
# When you are working with textual information, it is imperative to clean your data so as to be able to train more accurate machine learning (ML) models.
#
# One of the reasons why transformations like lemmatization and stemming are useful is to normalize the text before you feed the output to an ML algorithm. For instance, if you are building a sentiment analysis model, how can you tell the model that "smiling" and "smile" refer to the same concept? You may require stemming if you are using [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) features combined with a machine learning algorithm such as a [Naive Bayes classifier](https://en.wikipedia.org/wiki/Naive_Bayes_classifier). As you may suspect already, this also requires a really good tokenizer to come up with the features, especially when working on noisy pieces of text such as those generated by users on a social media site.
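# A toy, pure-Python sketch of that idea (the `crude_stem` helper and the two-document mini-corpus are made up for illustration): after stemming, "smiling" and "smile" map to the same stem, so TF-IDF treats them as one feature rather than two.

```python
import math
from collections import Counter

# Hypothetical toy stemmer: strips a few common suffixes, demo only.
def crude_stem(token):
    for suffix in ("ing", "ed", "es", "s", "e"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[:-len(suffix)]
    return token

docs = ["she is smiling today", "a big smile helps"]
tokenized = [[crude_stem(t) for t in d.split()] for d in docs]

def tfidf(doc_tokens, corpus):
    n_docs = len(corpus)
    vec = {}
    for term, tf in Counter(doc_tokens).items():
        df = sum(term in d for d in corpus)       # document frequency
        vec[term] = tf * math.log(n_docs / df)    # plain, unsmoothed idf
    return vec

print(tfidf(tokenized[0], tokenized))
```

# Note how the shared stem gets an idf of zero here: it appears in both documents, so it carries no discriminative weight, while words unique to one document score higher.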
#
# With a wide variety of NLP tasks, one of the first big steps in the NLP pipeline is to create a vocabulary that will eventually be used to determine the inputs of the model, representing the features. In modern NLP techniques such as pretrained language models, you need to process a text corpus, which requires proper and more sophisticated sentence segmentation and tokenization as we discussed before. We will talk more about these methods in due time. For now, the basics presented here are a good start into the world of practical NLP. Spend some time reading up on all the concepts mentioned here and take notes. I will guide you through the series on what the important parts are and provide you with relevant links, but you can also conduct your own additional research on the side and even improve this notebook.
#
#
# + [markdown] id="34h5bTlNdVH1"
# ## Final Words and What's Next?
# In this chapter we learned some fundamental concepts of NLP such as lemmatization, stemming, sentence segmentations, and tokenization. In the next chapter we will cover topics such as **word normalization**, **regular expressions**, **part of speech** and **edit distance**, all very important topics when working with information retrieval and NLP systems.
# source: Tokenization/NLPEN2021-105-Tokenization Segmentation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#
# -
import json
import time
import numpy
import pandas
import datetime
from sklearn.tree import DecisionTreeRegressor as DTR
from no_need import insane as golags
# +
# models
# all of them are stored in .json file with possible parameters
import json
with open('./models_params.json') as f:
    models_params = json.load(f)
# -
def multiply_params(params):
    keys = numpy.array(list(params.keys()))
    dims = numpy.array([len(params[keys[j]]) for j in numpy.arange(keys.shape[0])])
    result = []
    for j in numpy.arange(dims.prod()):
        curr = j
        res = {}
        for k in numpy.arange(keys.shape[0]):
            ix = curr % dims[k]
            res[keys[k]] = params[keys[k]][ix]
            curr = curr // dims[k]
        result.append(res)
    return result
dim_models = 0
multiple_model_args = multiply_params(models_params['DecisionTreeRegressor'])
multiple_model_args
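# The function reads the flat index `j` as a mixed-radix number: digit k (base `dims[k]`) selects the k-th parameter's value. A pure-Python restatement (a sketch, with hypothetical parameter names) that checks the result against `itertools.product`:

```python
import itertools

# Mixed-radix grid expansion: index j is decoded digit by digit,
# each digit picking one value for its parameter.
def expand_grid(params):
    keys = list(params)
    dims = [len(params[k]) for k in keys]
    total = 1
    for d in dims:
        total *= d
    grid = []
    for j in range(total):
        curr, combo = j, {}
        for key, dim in zip(keys, dims):
            combo[key] = params[key][curr % dim]
            curr //= dim
        grid.append(combo)
    return grid

params = {"max_depth": [2, 4], "min_samples_leaf": [1, 3, 5]}
grid = expand_grid(params)
print(len(grid))  # 6 combinations, the same set as itertools.product
assert {tuple(sorted(g.items())) for g in grid} == \
       {tuple(sorted(dict(zip(params, c)).items()))
        for c in itertools.product(*params.values())}
```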
d = 'E:/dataset.csv'
data = pandas.read_csv(d)
data = data.rename(columns={'lag': 'news_horizon'})
data = data.set_index(['ticker', 'time', 'news_horizon'], drop=False)
data = data.sort_index()
data
# +
tsi_names = ['news_time']
y_names = ['open_LAG0']
removes = ['ticker', 'time', 'id', 'title', 'news_time']
report = golags(das_model=DTR,
data=data,
mask_thresh=-1,
multiple_model_args=multiple_model_args,
tsi_names=tsi_names,
y_names=y_names,
removes=removes,
n_folds=5)
report
# -
# source: trash/garbage/flat_arch__DTR_model-Copy1___LOOK_HERE.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [python3]
# language: python
# name: Python [python3]
# ---
# ## Import necessary packages and define functions
import pandas as pd
from matplotlib import pyplot as plt
import math
import numpy as np
# %matplotlib inline
# +
def cart2pol(x, y):
    rho = np.sqrt(x**2 + y**2)
    phi = np.arctan2(y, x)
    return (rho, phi)

def pol2cart(rho, phi):
    x = rho * np.cos(phi)
    y = rho * np.sin(phi)
    return (x, y)
# -
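# A quick round-trip sanity check of the conversions above, restated with the `math` module so the snippet is self-contained (a sketch):

```python
import math

# Cartesian -> polar -> Cartesian should recover the original point.
def cart2pol(x, y):
    return math.hypot(x, y), math.atan2(y, x)

def pol2cart(rho, phi):
    return rho * math.cos(phi), rho * math.sin(phi)

rho, phi = cart2pol(3.0, 4.0)
x, y = pol2cart(rho, phi)
print(rho)  # 5.0, the 3-4-5 triangle
```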
# ## Import data and calculate overall angle of trajectory
traces_S2 = pd.read_excel('WT traces.xlsx', sheet_name='S2')
traces_S3 = pd.read_excel('WT traces.xlsx', sheet_name='S3')
angles = []
for i in range(1, 22):
    traces = traces_S2
    dy = traces[traces['Track']==i].iloc[57]['Y'] - traces[traces['Track']==i].iloc[0]['Y']
    dx = traces[traces['Track']==i].iloc[57]['X'] - traces[traces['Track']==i].iloc[0]['X']
    ang = math.atan2(dy, dx)
    angles.append(ang)
for i in range(1, 28):
    traces = traces_S3
    dy = traces[traces['Track']==i].iloc[57]['Y'] - traces[traces['Track']==i].iloc[0]['Y']
    dx = traces[traces['Track']==i].iloc[57]['X'] - traces[traces['Track']==i].iloc[0]['X']
    ang = math.atan2(dy, dx)
    angles.append(ang)
for n, num in enumerate(angles):
    if num < 0:
        angles[n] = num + 2*math.pi
# +
n_numbers = 48
bins_number = 12 # the [0, 360) interval will be subdivided into this
# number of equal bins
bins = np.linspace(0.0, 2 * np.pi, bins_number + 1)
# angles = 2 * np.pi * np.random.rand(n_numbers)
n, _, _ = plt.hist(angles, bins)
plt.clf()
width = 2 * np.pi / bins_number
ax = plt.subplot(1, 1, 1, projection='polar')
bars = ax.bar(bins[:bins_number], n, width=width, bottom=0.0)
ax.set_yticklabels([])
ax.set_yticks(np.arange(0, max(n)+1, 2.1))
plt.rcParams.update({'font.size': 12})
plt.savefig('WT rose.pdf')
# -
nt = []
for i in range(1, 22):
    traces = traces_S2
    x0 = traces[traces['Track']==i].iloc[0]['X']
    y0 = traces[traces['Track']==i].iloc[0]['Y']
    for j in range(57):
        normx = traces[traces['Track']==i].iloc[j]['X'] - x0
        normy = traces[traces['Track']==i].iloc[j]['Y'] - y0
        rho, phi = cart2pol(normx, normy)
        nt.append({'rho': rho, 'phi': phi, 'trace': i})
for i in range(1, 28):
    traces = traces_S3
    x0 = traces[traces['Track']==i].iloc[0]['X']
    y0 = traces[traces['Track']==i].iloc[0]['Y']
    for j in range(57):
        normx = traces[traces['Track']==i].iloc[j]['X'] - x0
        normy = traces[traces['Track']==i].iloc[j]['Y'] - y0
        rho, phi = cart2pol(normx, normy)
        nt.append({'rho': rho, 'phi': phi, 'trace': i + 21})
norm_traces = pd.DataFrame(nt)
# +
r = np.arange(0, 2, 0.01)
theta = 2 * np.pi * r
ax = plt.subplot(111, projection='polar')
for i in [2, 3, 4, 6, 7, 9, 11, 12, 16, 21, 23, 27, 28, 31, 33, 41, 42, 43]:
    r = norm_traces[norm_traces['trace']==i+1]['rho']
    theta = norm_traces[norm_traces['trace']==i+1]['phi']
    ax.plot(theta, r)
ax.set_yticklabels([]) # no radial ticks
ax.set_ylim([0,150])
ax.set_rlabel_position(-500) # get radial labels away from plotted line
ax.grid(True)
plt.rcParams.update({'font.size': 12})
plt.savefig('WT traces.pdf')
# source: Gradient Tracking/Polar cap tracking/WT polar tracking.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Writeup
#
# ---
#
# **Vehicle Detection Project**
#
# The goals / steps of this project are the following:
#
# * Perform a Histogram of Oriented Gradients (HOG) feature extraction on a labeled training set of images and train a classifier Linear SVM classifier
# * Optionally, you can also apply a color transform and append binned color features, as well as histograms of color, to your HOG feature vector.
# * Note: for those first two steps don't forget to normalize your features and randomize a selection for training and testing.
# * Implement a sliding-window technique and use your trained classifier to search for vehicles in images.
# * Run your pipeline on a video stream (start with the test_video.mp4 and later implement on full project_video.mp4) and create a heat map of recurring detections frame by frame to reject outliers and follow detected vehicles.
# * Estimate a bounding box for vehicles detected.
#
# [//]: # (Image References)
# [image1]: ./output_images/car_not_car.png
# [image2]: ./output_images/HOG_example.png
# [image3]: ./output_images/sliding_windows.png
# [image4]: ./output_images/sliding_window_result.png
# [image5]: ./output_images/bboxes_and_heat.png
#
# ## [Rubric](https://review.udacity.com/#!/rubrics/513/view) Points
# ### Here I will consider the rubric points individually and describe how I addressed each point in my implementation.
#
# ---
# ### Writeup / README
#
# #### 1. Provide a Writeup / README that includes all the rubric points and how you addressed each one. You can submit your writeup as markdown or pdf. [Here](https://github.com/udacity/CarND-Vehicle-Detection/blob/master/writeup_template.md) is a template writeup for this project you can use as a guide and a starting point.
#
# You're reading it!
#
# ### Histogram of Oriented Gradients (HOG)
#
# #### 1. Explain how (and identify where in your code) you extracted HOG features from the training images.
#
# The code for this step is contained in the second code cell of the IPython notebook.
#
# I started by reading in all the `vehicle` and `non-vehicle` images. Here is an example of one of each of the `vehicle` and `non-vehicle` classes:
#
# ![alt text][image1]
#
# I then explored different color spaces. I grabbed random images from each of the two classes and displayed them to get a feel for what the different color space outputs look like.
#
# Here is an example using the `YCrCb` color space and HOG parameters of the Y channel:
#
#
# ![alt text][image2]
#
# #### 2. Explain how you settled on your final choice of HOG parameters.
#
# I tried various combinations of parameters, and after training and scoring some classifiers I chose the standard 8x8 pixels per cell and 2x2 cells per block, with 9 orientations and L2-Hys block normalization.
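# To make those parameters concrete, here is a from-scratch sketch of a single HOG cell (toy 8x8 "image" with a vertical edge; this illustrates the binning idea only, not the library implementation used in the project):

```python
import math

# One HOG cell: central-difference gradients, magnitudes accumulated
# into 9 unsigned-orientation bins covering [0, 180) degrees.
def cell_histogram(img, bins=9):
    hist = [0.0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[min(int(ang // (180 / bins)), bins - 1)] += mag
    return hist

# vertical edge: left half dark, right half bright
img = [[0] * 4 + [10] * 4 for _ in range(8)]
hist = cell_histogram(img)
print(hist.index(max(hist)))  # bin 0: the gradient points horizontally
```

# In the full descriptor, such cell histograms are grouped into overlapping 2x2 blocks and normalized (L2-Hys), which is what makes the features robust to lighting changes.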
#
# #### 3. Describe how (and identify where in your code) you trained a classifier using your selected HOG features (and color features if you used them).
#
# I trained an SVM using a feature combination containing spatial RGB values, color histograms from the H, Cr, and Cb channels, and HOG features from the Y and S channels (lines 29 to 51 in the second code cell in [Vehicle_Detection.ipynb](./Vehicle_Detection.ipynb)).
# The classifier training is in the third code cell of the IPython notebook.
#
# ### Sliding Window Search
#
# #### 1. Describe how (and identify where in your code) you implemented a sliding window search. How did you decide what scales to search and how much to overlap windows?
#
# I decided to search window positions at 3 scales in the bottom half of the image and came up with this:
#
# ![alt text][image3]
#
# #### 2. Show some examples of test images to demonstrate how your pipeline is working. What did you do to optimize the performance of your classifier?
#
# Ultimately I searched on three scales using a feature combination containing spatial RGB values, color histograms from the H, Cr, and Cb channels, and HOG features from the Y and S channels, which provided a nice result. Here are some example images:
#
# ![alt text][image4]
# ---
#
# ### Video Implementation
#
# #### 1. Provide a link to your final video output. Your pipeline should perform reasonably well on the entire project video (somewhat wobbly or unstable bounding boxes are ok as long as you are identifying the vehicles most of the time with minimal false positives.)
# Here's a [link to my video result](./output_images/project_video.mp4) and here's a [combination with lane detection](./output_images/project_video_lanes.mp4) from the last project.
#
#
# #### 2. Describe how (and identify where in your code) you implemented some kind of filter for false positives and some method for combining overlapping bounding boxes.
#
# I recorded the positions of positive detections in each frame of the video. From the positive detections I created a heatmap and then thresholded that map to identify vehicle positions. I then used `scipy.ndimage.measurements.label()` to identify individual blobs in the heatmap. I then assumed each blob corresponded to a vehicle. I constructed bounding boxes to cover the area of each blob detected.
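# The accumulate-and-threshold idea can be sketched in a few lines (hypothetical `add_heat`/`apply_threshold` helpers on a toy 10x10 grid; the real pipeline operates on image-sized arrays and uses `scipy.ndimage.measurements.label()` afterwards):

```python
# Each positive detection box adds 1 to the pixels it covers; thresholding
# then keeps only regions hit by multiple (overlapping) detections.
def add_heat(heatmap, boxes):
    for (x1, y1), (x2, y2) in boxes:
        for y in range(y1, y2):
            for x in range(x1, x2):
                heatmap[y][x] += 1
    return heatmap

def apply_threshold(heatmap, thresh):
    return [[v if v > thresh else 0 for v in row] for row in heatmap]

heat = [[0] * 10 for _ in range(10)]
boxes = [((1, 1), (5, 5)), ((3, 3), (7, 7)), ((8, 0), (10, 2))]  # third box is a lone false positive
heat = add_heat(heat, boxes)
hot = apply_threshold(heat, 1)
print(hot[4][4], hot[1][9])  # 2 0: overlap survives, the lone detection is zeroed
```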
#
# Here's an example result showing the heatmap from a series of frames of video, the result of `scipy.ndimage.measurements.label()` and the bounding boxes then overlaid on the last frame of video:
#
# ### Here are three examples and their corresponding heatmaps, the output of `scipy.ndimage.measurements.label()` and the resulting bounding boxes:
#
# ![alt text][image5]
#
#
#
#
# ---
#
# ### Discussion
#
# #### 1. Briefly discuss any problems / issues you faced in your implementation of this project. Where will your pipeline likely fail? What could you do to make it more robust?
#
# I took the suggested approach from the lesson using hog and color features, training a support vector machine and use sliding windows for object detection.
#
# One main difficulty was to select the best color channels for hog and histogram features, so that the detection is reliable but not too slow because of the many features that have to be calculated.
#
# I tried to use PCA to reduce the number of features, but that made the detection step even slower.
#
# Another difficulty was choosing how many sliding windows to use, because more windows make the pipeline more robust, but also a lot slower.
#
# Using this approach it is not possible to separate two cars that are very close to each other. They will result in one big bounding box.
#
#
# source: writeup.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Seminar 4. Goodness-of-Fit Tests and Sample Homogeneity
#
# ```
# In class: 300(285), 303(288), 417(395), 420(398)
# Homework: 301(286), 304(289), 309(294), 419(397)
# ```
#
# ## The $\chi^2$ Test
#
# $$ \chi^2_{\text{obs}} = \sum\limits_{k=1}^r \frac{(n_k - np_k)^2}{np_k} $$
#
# $$ H_0: \chi^2_{\text{obs}} < \chi^2_{1 - \alpha}(r - l - 1), $$
#
# where $l$ is the number of unknown distribution parameters estimated from the sample.
#
# **Condition**: $np_k \ge 5$
#
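# The statistic is straightforward to compute directly; a minimal sketch using the observed counts from problem 285 below (five equally likely cells, $p_k = 0.2$):

```python
# Chi-square goodness-of-fit statistic: sum of (observed - expected)^2 / expected.
def chi2_stat(counts, probs):
    n = sum(counts)
    return sum((nk - n * pk) ** 2 / (n * pk) for nk, pk in zip(counts, probs))

print(chi2_stat([110, 130, 70, 90, 100], [0.2] * 5))  # 20.0
```

# The decision step then compares this value against the $\chi^2_{1-\alpha}$ quantile with $r - l - 1$ degrees of freedom, as in the scipy code further down.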
# ## The Sign Test
#
# $$ H_0: p = 1/2, H_1^{(1)}: p > 1/2, H_1^{(2)}: p < 1/2, H_1^{(3)}: p \ne 1/2 $$
#
# `r` = the number of `+` signs, `l` = the number of nonzero differences
#
# Fisher statistic:
#
# $ F_{\text{obs}} = \frac{r}{l - r + 1} \ge F_{1 - \alpha}(k_1, k_2), k_1 = 2(l - r + 1), k_2 = 2r$ (for $H_1^{(1)}$);
#
# $ F_{\text{obs}} = \frac{l - r}{r + 1} \ge F_{1 - \alpha}(k_1, k_2), k_1 = 2(r + 1), k_2 = 2(l - r)$ (for $H_1^{(2)}$);
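# Plugging in the counts from problem 395 below ($l = 9$ nonzero differences, $r = 6$ plus signs) shows how the first statistic is formed (a sketch, standard library only):

```python
# Sign test, first alternative H1: p > 1/2.
l, r = 9, 6
F_obs = r / (l - r + 1)            # the Fisher statistic from the formula above
k1, k2 = 2 * (l - r + 1), 2 * r    # degrees of freedom of its F distribution
print(F_obs, k1, k2)  # 1.5 8 12
```

# The remaining step, comparing against the $F_{1-\alpha}(k_1, k_2)$ quantile, is what the scipy code in problem 395 does.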
# ---
#
# ## Problems
#
# ### 285
#
# +
import numpy as np
from scipy import stats
x = [110, 130, 70, 90, 100]
alpha = 0.01
p = 0.2
n = np.sum(x)
degrees = 5 - 0 - 1
q = stats.chi2(degrees).ppf(1 - alpha)
# c = np.sum
# -
# ### 288
# +
import numpy as np
from scipy import stats
x = [41, 62, 45, 22, 16, 8, 4, 2, 0, 0, 0]
lamda = np.average(np.arange(len(x)), weights=x)  # sample mean as the estimate of the Poisson parameter
alpha = 0.05
p = 0.2
n = np.sum(x)
degrees = 5 - 1 - 1
q = stats.chi2(degrees).ppf(1 - alpha)
n, q
# -
# ### 395
# +
from scipy import stats
l = 9
r = 6
alpha = 0.1
k1 = 2 * (l - r + 1)
k2 = 2 * r
f = k2 / k1
print(f >= stats.f(k1, k2).ppf(1 - alpha / 2))
k1 = 2 * (r + 1)
k2 = 2 * (l - r)
f = k2 / k1
print(f >= stats.f(k1, k2).ppf(1 - alpha / 2))
# source: lessons/4.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# The MNIST dataset can be found here: http://yann.lecun.com/exdb/mnist/.
#
# First, let's import the necessary libraries. Notice there are also some imports from a file called `helper_functions`.
# +
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score
from helper_functions import show_images, show_images_by_digit, fit_random_forest_classifier2
from helper_functions import fit_random_forest_classifier, do_pca, plot_components
import test_code as t
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# -
# `1.` Use pandas to read in the dataset, which can be found in this workspace using the filepath **'./data/train.csv'**. If you have missing values, fill them with 0. Take a look at info about the data using `head`, `tail`, `describe`, `info`, etc. You can learn more about the data values from the article here: https://homepages.inf.ed.ac.uk/rbf/HIPR2/value.htm.
df = pd.read_csv('./data/train.csv')
df.fillna(0, inplace=True)
# `2.` Create a vector called y that holds the **label** column of the dataset. Store all other columns holding the pixel data of your images in X.
# +
y = df['label']
X = df.drop('label', axis=1)
df.head(5)
# -
# Check your solution
t.question_two_check(y, X)
# `3.` Now use the `show_images_by_digit` function from the `helper_functions` module to take a look at some of the `1`'s, `2`'s, `3`'s, or any other value you are interested in looking at. Do they all look like what you would expect?
show_images_by_digit(7) # Try looking at a few other digits
# `4.` Now that you have had a chance to look through some of the data, you can try some different algorithms to see what works well for using the X matrix to predict the response. If you would like to use the function I used in the video regarding random forests, you can run the code below, but you might also try any of the supervised techniques you learned in the previous course to see what works best.
#
# If you decide to put together your own classifier, remember the 4 steps to this process:
#
# **I.** Instantiate your model. (with all the hyperparameter values you care about)
#
# **II.** Fit your model. (to the training data)
#
# **III.** Predict using your fitted model. (on the test data)
#
# **IV.** Score your model. (comparing the predictions to the actual values on the test data)
#
# You can also try a grid search to see if you can improve on your initial predictions.
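A minimal sketch of the four steps; `load_digits` is used here only as a small, self-contained stand-in for the workspace data, so the same steps apply unchanged with the `X` and `y` from question 2:

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# load_digits stands in for the workspace MNIST data
X_demo, y_demo = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X_demo, y_demo, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)  # I. instantiate
clf.fit(X_train, y_train)                                        # II. fit
y_pred = clf.predict(X_test)                                     # III. predict
acc = accuracy_score(y_test, y_pred)                             # IV. score
print(acc)
```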
# +
# Remove the tag to fit the RF model from the video, you can also try fitting your own!
fit_random_forest_classifier(X, y)
# -
# `5.` Now, for the purpose of this lesson, let's look at PCA. In the video, I created a model just using two features. Replicate the process below. You can use the same `do_pca` function that was created in the previous video. Store your variables in **pca** and **X_pca**.
pca, X_pca = do_pca(2, X)
# `6.` The **X_pca** has reduced the original number of more than 700 features down to only 2 features that capture the majority of the variability in the pixel values. Use the space below to fit a model using these two features to predict the written value. You can use the random forest model by running `fit_random_forest_classifier` the same way as in the video. How well does it perform?
fit_random_forest_classifier(X_pca, y)
# `7.` Now you can look at the separation of the values using the `plot_components` function. If you plot all of the points (more than 40,000), you will likely not be able to see much of what is happening. I recommend plotting just a subset of the data. Which value(s) have some separation that are being predicted better than others based on these two components?
# +
# Try plotting some of the numbers below - you can change the number
# of digits that are plotted, but it is probably best not to plot the
# entire dataset. Your visual will not be readable.
plot_components(X_pca[:100], y[:100])
# -
# `8.` See if you can find a reduced number of features that provides better separation to make predictions. Say you want to get separation that allows for accuracy of more than 90%, how many principal components are needed to obtain this level of accuracy? Were you able to substantially reduce the number of features needed in your final model?
for reduced_number in range(20, 50):
pca, X_pca = do_pca(reduced_number, X)
fit_random_forest_classifier(X_pca, y)
# `9.` It is possible that extra features in the dataset even lead to overfitting or the [curse of dimensionality](https://stats.stackexchange.com/questions/65379/machine-learning-curse-of-dimensionality-explained). Do you have evidence of this happening for this dataset? Can you support your evidence with a visual or table? To avoid printing out all of the metric results, I created another function called `fit_random_forest_classifier2`. I ran through a significant number of components to create the visual for the solution, but I strongly recommend you look in the range below 100 principal components!
accs = []
comps = []
for comp in range(2, 15):
comps.append(comp)
pca, X_pca = do_pca(comp, X)
acc = fit_random_forest_classifier2(X_pca, y)
accs.append(acc)
plt.plot(comps, accs, 'bo');
plt.xlabel('Number of Components');
plt.ylabel('Accuracy');
plt.title('Number of Components by Accuracy');
# The max accuracy and corresponding number of components
np.max(accs), comps[int(np.argmax(accs))]
#
| Unsupervised_Learning/PCA_MNIST Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/lenyabloko/SemEval2020/blob/master/SemEval2020_Delex.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="U1VkwsR6Tp2K" colab_type="text"
# UPLOAD FILES - Place [train.csv](https://github.com/arielsho/Subtask-1/archive/master.zip) and [test.csv](https://github.com/arielsho/Subtask-1-test/archive/master.zip) files directly under your `gdrive/My Drive/Subtask-1/`, before starting (follow the prompt URL and get authentication token)
# + id="p3GfRjjaUfMv" colab_type="code" outputId="af258aba-d6a3-4650-d54b-0d73cefd2a92" colab={"base_uri": "https://localhost:8080/", "height": 51}
from google.colab import drive
drive.mount('/content/gdrive')
#from google.colab import files
#uploaded = files.upload()
# !cp /content/gdrive/My\ Drive/Colab\ Notebooks/Subtask-1/train.csv /content
# !cp /content/gdrive/My\ Drive/Colab\ Notebooks/Subtask-1/test.csv /content
# !cp /content/gdrive/My\ Drive/Colab\ Notebooks/Subtask-1/train_delex.jsonl /content
# !cp /content/gdrive/My\ Drive/Colab\ Notebooks/Subtask-1/test_delex.jsonl /content
# + [markdown] id="C1aTA5OXdvqn" colab_type="text"
# FORMAT DATA
# + id="pcCOsgyHbnrP" colab_type="code" outputId="29725045-d594-4702-d591-3d5597b79b4e" colab={"base_uri": "https://localhost:8080/", "height": 204}
import pandas as pd
prefix = '/content/'
train_df = pd.read_csv(prefix + 'train.csv', header=None)
train_df = train_df.drop(index=0)
train_df = pd.DataFrame({
    'claim': train_df[2].replace(r'\n', ' ', regex=True),
    'evidence': ['sentenceID-'] + train_df[0],
    'label': train_df[1]
})
train_df.head()
# + id="54KBMEcMWWqC" colab_type="code" outputId="d2355df8-dafd-4f2b-d454-6ed820fb6eb8" colab={"base_uri": "https://localhost:8080/", "height": 204}
import pandas as pd
prefix = '/content/'
test_df = pd.read_csv(prefix + 'test.csv', header=None)
test_df = test_df.drop(index=0)
test_df = pd.DataFrame({
    'claim': test_df[1].replace(r'\n', ' ', regex=True),
    'evidence': ['sentenceID-'] + test_df[0],
    'label': '0'
})
test_df.head()
# + id="qC5wBtbEfYGA" colab_type="code" colab={}
train_df.to_json(prefix+'train.jsonl', orient='records',lines=True)
test_df.to_json(prefix+'test.jsonl', orient='records', lines=True)
# + id="6WDJt1yF2A0G" colab_type="code" colab={}
# !cp /content/train.jsonl /content/gdrive/My\ Drive/Colab\ Notebooks/Subtask-1/
# !cp /content/test.jsonl /content/gdrive/My\ Drive/Colab\ Notebooks/Subtask-1/
# + id="szVaybesqvfU" colab_type="code" colab={}
# https://docs.docker.com/install/linux/docker-ce/debian/
# !sudo apt update
# #sudo apt-get install -y software-properties-common
# !sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
# !curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# !sudo apt-key fingerprint <KEY>
# !curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
# #sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
# !sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
# !sudo apt-get update
# List versions available in your repository
# !apt-cache madison docker-ce
# !curl -O https://download.docker.com/linux/ubuntu/dists/bionic/pool/edge/amd64/containerd.io_1.2.2-3_amd64.deb
# !sudo apt install ./containerd.io_1.2.2-3_amd64.deb
# !sudo apt install docker-ce
# !sudo docker pull myedibleenso/processors-server
# !sudo docker run -d --name procserv myedibleenso/processors-server
# !sudo docker start procserv
# #sudo docker run -d -e _JAVA_OPTIONS="-Xmx3G" -p 1192.168.127.12:8886:8888 --name procserv myedibleenso/processors-server
# + id="n06nIM0MnP7A" colab_type="code" colab={}
# !sudo apt install bzip2
# !wget -c https://repo.continuum.io/archive/Anaconda3-5.2.0-Linux-x86_64.sh
# !sudo bash ./Anaconda3-5.2.0-Linux-x86_64.sh -b -f -p /usr/local
# + id="z296mNiQpW-1" colab_type="code" colab={}
# !conda create --name delex python=3
# !source activate delex
# !sudo pip install tqdm
# !sudo pip install clean-text
# !sudo pip install git+https://github.com/myedibleenso/py-processors.git
# + id="AvertbnCoRm4" colab_type="code" colab={}
# mkdir -p data
# mkdir -p outputs
# mkdir -p sstagged_files
# + id="GDnpeIKHtK49" colab_type="code" colab={}
# mv train.jsonl data
# + id="VP9AxnOilWQy" colab_type="code" colab={}
# !python main.py --pyproc_port 8886 --use_docker True --convert_prepositions False --create_smart_NERs True --inputFile data/train.jsonl
# + id="xoSp8xRVuOJs" colab_type="code" outputId="c3c875da-c022-4369-8048-1ca5f9963286" colab={"base_uri": "https://localhost:8080/", "height": 204}
import pandas as pd
prefix = '/content/'
train_df = pd.read_json(prefix + 'train_delex.jsonl', orient='records',lines=True)
train_df.head()
| SemEval2020_Delex.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # More advanced concepts: Parallel computation and caching
#
# ```
# Authors: <NAME>
# <NAME>
# ```
#
# The aim of this notebook is:
#
# - to explain how parallel computation works within scikit-learn
# - how to cache certain computations to save computation time.
#
# For this tutorial we will rely essentially on the [joblib package](https://joblib.readthedocs.io/).
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
warnings.simplefilter(action="ignore", category=UserWarning)
# +
import os
from urllib.request import urlretrieve
url = ("https://archive.ics.uci.edu/ml/machine-learning-databases"
"/adult/adult.data")
local_filename = os.path.basename(url)
if not os.path.exists(local_filename):
print("Downloading Adult Census datasets from UCI")
urlretrieve(url, local_filename)
# +
names = ("age, workclass, fnlwgt, education, education-num, "
"marital-status, occupation, relationship, race, sex, "
"capital-gain, capital-loss, hours-per-week, "
"native-country, income").split(', ')
data = pd.read_csv(local_filename, names=names)
y = data['income']
X_df = data.drop('income', axis=1)
# -
X_df.head()
y.value_counts()
# ## Let's construct a full model with a ColumnTransformer
# +
from sklearn.compose import make_column_transformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler, QuantileTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
numeric_features = [c for c in X_df
if X_df[c].dtype.kind in ('i', 'f')]
categorical_features = [c for c in X_df
if X_df[c].dtype.kind not in ('i', 'f')]
pipeline = make_pipeline(
make_column_transformer(
(OneHotEncoder(handle_unknown='ignore'), categorical_features),
(StandardScaler(), numeric_features),
),
RandomForestClassifier(max_depth=7, n_estimators=300)
)
cv_scores = cross_val_score(pipeline, X_df, y, scoring='roc_auc', cv=5)
print("CV score:", np.mean(cv_scores))
# -
# ### How to run things in parallel in scikit-learn: The `n_jobs` parameter
# %timeit -n1 -r2 cross_val_score(pipeline, X_df, y, scoring='roc_auc', cv=5)
# %timeit -n1 -r2 cross_val_score(pipeline, X_df, y, scoring='roc_auc', cv=5, n_jobs=-1)
# +
# %%timeit -n1 -r2
pipeline[-1].set_params(n_jobs=-1)
cv_scores = cross_val_score(pipeline, X_df, y, scoring='roc_auc', cv=5, n_jobs=1)
# -
# ### How to write your own parallel code with joblib
#
# Let's first look at a simple example:
from joblib import Parallel, delayed
from math import sqrt
Parallel(n_jobs=1)(delayed(sqrt)(i**2) for i in range(10))
# Expected output: [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
# Let's now do a full cross-validation in parallel:
# +
from sklearn.base import clone
def _fit_score(model, X, y, train_idx, test_idx):
X_train = X.iloc[train_idx]
X_test = X.iloc[test_idx]
y_train = y.iloc[train_idx]
y_test = y.iloc[test_idx]
model = clone(model)
model.fit(X_train, y_train)
return model.score(X_test, y_test)
# +
from sklearn.model_selection import StratifiedKFold
n_jobs = 1
cv = StratifiedKFold(n_splits=5)
scores = Parallel(n_jobs=n_jobs)(delayed(_fit_score)(
pipeline, X_df, y, train_idx, test_idx
) for train_idx, test_idx in cv.split(X_df, y))
print(scores)
# -
# ### How about caching?
#
# Often you want to avoid redoing the same computations again and again.
# One classical solution to address this is called function [memoization](https://en.wikipedia.org/wiki/Memoization).
#
# joblib offers a very simple way to do this using a Python decorator.
# +
from joblib import Memory
cachedir = '.'
mem = Memory(cachedir, verbose=0)
mem.clear() # make sure there is not left over cache from previous run
_fit_score_cached = mem.cache(_fit_score)
def evaluate_model():
scores = Parallel(n_jobs=n_jobs)(delayed(_fit_score_cached)(
pipeline, X_df, y, train_idx, test_idx
) for train_idx, test_idx in cv.split(X_df, y))
print(scores)
# %timeit -n1 -r1 evaluate_model()
# -
# %timeit -n1 -r1 evaluate_model()
# Certain transformer objects in scikit-learn have a `memory` parameter. This allows their computations to be cached, for example to avoid rerunning the same preprocessing in a grid search when tuning the classifier or regressor at the end of the pipeline.
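A sketch of such a cached pipeline (the cache directory name `./cache` is an arbitrary choice, and the estimators here are just examples):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Fitted transformer outputs are cached on disk in './cache' and reused
# when only the final estimator's hyperparameters change, e.g. in a grid search.
cached_pipe = make_pipeline(
    StandardScaler(), LogisticRegression(max_iter=1000), memory='./cache')
```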
#
# To go further, you can also look at how joblib can be used in combination with [dask-distributed](http://distributed.dask.org/en/stable/) to run computations across different machines, or with [dask-jobqueue](http://jobqueue.dask.org/en/latest/) to use a cluster with a queuing system like `slurm`.
| 02_pipelines_and_column_transformers/02-parallel_and_caching_with_joblib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chapter 2: Conditional probability
#
# This Jupyter notebook is the Python equivalent of the R code in section 2.10 R, pp. 80 - 83, [Introduction to Probability, Second Edition](https://www.crcpress.com/Introduction-to-Probability-Second-Edition/Blitzstein-Hwang/p/book/9781138369917), Blitzstein & Hwang.
#
# ----
import numpy as np
# ## Simulating the frequentist interpretation
#
# Recall that the frequentist interpretation of conditional probability based on a large number `n` of repetitions of an experiment is $P(A|B) ≈ n_{AB}/n_{B}$, where $n_{AB}$ is the number of times that $A \cap B$ occurs and $n_{B}$ is the number of times that $B$ occurs. Let's try this out by simulation, and verify the results of Example 2.2.5. So let's use [`numpy.random.choice`](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.choice.html) to simulate `n` families, each with two children.
# +
np.random.seed(34)
n = 10**5
child1 = np.random.choice([1,2], n, replace=True)
child2 = np.random.choice([1,2], n, replace=True)
print('child1:\n{}\n'.format(child1))
print('child2:\n{}\n'.format(child2))
# -
# Here `child1` is a NumPy `array` of length `n`, where each element is a 1 or a 2. Letting 1 stand for "girl" and 2 stand for "boy", this `array` represents the gender of the elder child in each of the `n` families. Similarly, `child2` represents the gender of the younger child in each family.
#
# Alternatively, we could have used
np.random.choice(["girl", "boy"], n, replace=True)
# but it is more convenient working with numerical values.
#
# Let $A$ be the event that both children are girls and $B$ the event that the elder is a girl. Following the frequentist interpretation, we count the number of repetitions where $B$ occurred and name it `n_b`, and we also count the number of repetitions where $A \cap B$ occurred and name it `n_ab`. Finally, we divide `n_ab` by ` n_b` to approximate $P(A|B)$.
# +
n_b = np.sum(child1==1)
n_ab = np.sum((child1==1) & (child2==1))
print('P(both girls | elder is girl) = {:0.2F}'.format(n_ab / n_b))
# -
# The ampersand `&` is an elementwise $AND$, so `n_ab` is the number of families where both the first child and the second child are girls. When we ran this code, we got 0.50, confirming our answer $P(\text{both girls | elder is a girl}) = 1/2$.
#
# Now let $A$ be the event that both children are girls and $B$ the event that at least one of the children is a girl. Then $A \cap B$ is the same, but `n_b` needs to count the number of families where at least one child is a girl. This is accomplished with the elementwise $OR$ operator `|` (this is not a conditioning bar; it is an inclusive $OR$, returning `True` if at least one element is `True`).
# +
n_b = np.sum((child1==1) | (child2==1))
n_ab = np.sum((child1==1) & (child2==1))
print('P(both girls | at least one girl) = {:0.2F}'.format(n_ab / n_b))
# -
# For us, the result was 0.33, confirming that $P(\text{both girls | at least one girl}) = 1/3$.
# ## Monty Hall simulation
#
# Many long, bitter debates about the Monty Hall problem could have been averted by trying it out with a simulation. To study how well the never-switch strategy performs, let's generate 10<sup>5</sup> runs of the Monty Hall game. To simplify notation, assume the contestant always chooses door 1. Then we can generate a vector specifying which door has the car for each repetition:
#
# +
np.random.seed(55)
n = 10**5
cardoor = np.random.choice([1,2,3] , n, replace=True)
print('The never-switch strategy has success rate {:.3F}'.format(np.sum(cardoor==1) / n))
# -
# At this point we could generate the vector specifying which doors Monty opens, but that's unnecessary since the never-switch strategy succeeds if and only if door 1 has the car! So the fraction of times when the never-switch strategy succeeds is `np.sum(cardoor==1)/n`, which was 0.331 in our simulation. This is very close to 1/3.
#
# What if we want to play the Monty Hall game interactively? We can do this by programming a Python class that would let us play interactively or let us run a simulation across many trials.
class Monty():
def __init__(self):
""" Object creation function. """
self.state = 0
self.doors = np.array([1, 2, 3])
self.prepare_game()
def get_success_rate(self):
""" Return the rate of success in this series of plays: num. wins / num. plays. """
if self.num_plays > 0:
return 1.0*self.num_wins / self.num_plays
else:
return 0.0
def prepare_game(self):
""" Prepare initial values for game play, and randonly choose the door with the car. """
self.num_plays = 0
self.num_wins = 0
self.cardoor = np.random.choice(self.doors)
self.players_choice = None
self.montys_choice = None
def choose_door(self, door):
""" Player chooses a door at state 0. Monty will choose a remaining door to reveal a goat. """
self.state = 1
self.players_choice = door
self.montys_choice = np.random.choice(self.doors[(self.doors!=self.players_choice) & (self.doors!=self.cardoor)])
def switch_door(self, do_switch):
""" Player has the option to switch from the door she has chosen to the remaining unopened door.
If the door the player has selected is the same as the cardoor, then num. of wins is incremented.
Finally, number of plays will be incremented.
"""
self.state = 2
if do_switch:
self.players_choice = self.doors[(self.doors!=self.players_choice) & (self.doors!=self.montys_choice)][0]
if self.players_choice == self.cardoor:
self.num_wins += 1
self.num_plays += 1
def continue_play(self):
""" Player opts to continue playing in this series.
The game is returned to state 0, but the counters for num. wins and num. plays
will be kept intact and running.
A new cardoor is randomly chosen.
"""
self.state = 0
self.cardoor = np.random.choice(self.doors)
self.players_choice = None
self.montys_choice = None
def reset(self):
""" The entire game state is returned to its initial state.
All counters and state-holding variables are re-initialized.
"""
self.state = 0
self.prepare_game()
# In brief:
# * The `Monty` class represents a simple state model for the game.
# * When an instance of the `Monty` game is created, game state-holding variables are initialized and a `cardoor` randomly chosen.
# * After the player initially picks a door, `Monty` will choose a remaining door that does not have the car behind it.
# * The player can then choose to switch to the other, remaining unopened door, or stick with her initial choice.
# * `Monty` will then see if the player wins or not, and updates the state-holding variables for num. wins and num. plays.
# * The player can continue playing, or stop and reset the game to its original state.
#
# ### As a short simulation program
#
# Here is an example showing how to use the `Monty` class above to run a simulation to see how often the switching strategy succeeds.
# +
np.random.seed(89)
trials = 10**5
game = Monty()
for _ in range(trials):
game.choose_door(np.random.choice([1,2,3]))
game.switch_door(True)
game.continue_play()
print('In {} trials, the switching strategy won {} times.'.format(game.num_plays, game.num_wins))
print('Success rate is {:.3f}'.format(game.get_success_rate()))
# -
# ### As an interactive widget in this Jupyter notebook
#
# Optionally, the `Monty` Python class above can also be used as an engine to power an interactive widget that lets you play the three-door game _in the browser_ using [`ipywidgets` ](https://ipywidgets.readthedocs.io/en/stable/user_guide.html).
#
# To run the interactive widget, make sure you have the `ipywidgets` package installed (v7.4.2 or greater).
#
# To install with the `conda` package manager, execute the following command:
#
# conda install ipywidgets
#
# To install with the `pip` package manager, execute the following command:
#
# pip install ipywidgets
from ipywidgets import Box, Button, ButtonStyle, FloatText, GridBox, IntText, Label, Layout, HBox
from IPython.display import display
# The doors in the game are represented by [`ipywidgets.Button`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html#Button).
door1 = Button(description='Door 1', layout=Layout(flex='1 1 auto', width='auto'))
door2 = Button(description='Door 2', layout=door1.layout)
door3 = Button(description='Door 3', layout=door1.layout)
doors_arr = [door1, door2, door3]
doors = Box(doors_arr, layout=Layout(width='auto', grid_area='doors'))
# State-holding variables in the `Monty` object are displayed using [`ipywidgets.IntText`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html#IntText) (for the `num_wins` and `num_plays`); and [`ipywidgets.FloatText`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html#FloatText) (for the success rate).
# +
label1 = Label(value='number of plays', layout=Layout(width='auto', grid_area='label1'))
text1 = IntText(disabled=True, layout=Layout(width='auto', grid_area='text1'))
label2 = Label(value='number of wins', layout=Layout(width='auto', grid_area='label2'))
text2 = IntText(disabled=True, layout=Layout(width='auto', grid_area='text2'))
label3 = Label(value='success rate', layout=Layout(width='auto', grid_area='label3'))
text3 = FloatText(disabled=True, layout=Layout(width='auto', grid_area='text3'))
# -
# [`ipywidgets.Label`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html#Label) is used to display the title and descriptive text in the game widget.
# +
banner = Box([Label(value='Interactive widget: Monty Hall problem',
layout=Layout(width='50%'))],
layout=Layout(width='auto', justify_content='center', grid_area='banner'))
status = Label(value='Pick a door...', layout=Layout(width='auto', grid_area='status'))
# -
# Buttons allowing for further user actions are located at the bottom of the widget.
#
# * The `reveal` button is used to show what's behind all of the doors after the player makes her final choice.
# * After the player completes a round of play, she can click the `continue` button to keep counting game state (num. wins and num. plays)
# * The `reset` button lets the player return the game to its original state after completing a round of play.
button_layout = Layout(flex='1 1 auto', width='auto')
reveal = Button(description='reveal', tooltip='open selected door', layout=button_layout, disabled=True)
contin = Button(description='continue', tooltip='continue play', layout=button_layout, disabled=True)
reset = Button(description='reset', tooltip='reset game', layout=button_layout, disabled=True)
actions = Box([reveal, contin, reset], layout=Layout(width='auto', grid_area='actions'))
# [`ipywidgets.GridBox`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Styling.html#The-Grid-layout) helps us lay out the user interface elements for the `Monty` game widget.
ui = GridBox(children=[banner, doors, label1, text1, label2, text2, label3, text3, status, actions],
layout=Layout(
width='50%',
grid_template_rows='auto auto auto auto auto auto auto',
grid_template_columns='25% 25% 25% 25%',
grid_template_areas='''
"banner banner banner banner"
"doors doors doors doors"
"label1 label1 text1 text1"
"label2 label2 text2 text2"
"label3 label3 text3 text3"
"status status status status"
". . actions actions"
'''
)
)
# We lastly create some functions to connect the widget to the `Monty` game object. These functions adapt player action [events](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Events.html#Example) to state changes in the `Monty` object, and then update the widget user interface accordingly.
# +
uigame = Monty()
def reset_ui(disable_reset=True):
""" Return widget elements to their initial state.
Do not disable the reset button in the case of continue.
"""
for i,d in enumerate(doors_arr):
d.description = 'Door {}'.format(i+1)
d.disabled = False
d.icon = ''
d.button_style = ''
reveal.disabled = True
contin.disabled = True
reset.disabled = disable_reset
def update_status(new_status):
""" Update the widget text fields for displaying present game status. """
text1.value = uigame.num_plays
text2.value = uigame.num_wins
text3.value = uigame.get_success_rate()
status.value = new_status
def update_ui_reveal():
""" Helper function to update the widget after the player clicks the reveal button. """
if uigame.players_choice == uigame.cardoor:
new_status = 'You win! Continue playing?'
else:
new_status = 'Sorry, you lose. Continue playing?'
for i,d in enumerate(doors_arr):
d.disabled = True
if uigame.cardoor == i+1:
d.description = 'car'
else:
d.description = 'goat'
if uigame.players_choice == i+1:
if uigame.players_choice == uigame.cardoor:
d.button_style = 'success'
d.icon = 'check'
else:
d.button_style = 'danger'
d.icon = 'times'
update_status(new_status)
reveal.disabled = True
contin.disabled = False
reset.disabled = False
def on_button_clicked(b):
""" Event-handling function that maps button click events in the widget
to corresponding functions in Monty, and updates the user interface
according to the present game state.
"""
if uigame.state == 0:
if b.description in ['Door 1', 'Door 2', 'Door 3']:
c = int(b.description.split()[1])
uigame.choose_door(c)
b.disabled = True
b.button_style = 'info'
m = doors_arr[uigame.montys_choice-1]
m.disabled = True
m.description = 'goat'
unopened = uigame.doors[(uigame.doors != uigame.players_choice) &
(uigame.doors != uigame.montys_choice)][0]
status.value = 'Monty reveals a goat behind Door {}. Click Door {} to switch, or \'reveal\' Door {}.' \
.format(uigame.montys_choice, unopened, uigame.players_choice)
reveal.disabled = False
reset.disabled = False
elif b.description == 'reset':
uigame.reset()
reset_ui()
update_status('Pick a door...')
elif uigame.state == 1:
if b.description in ['Door 1', 'Door 2', 'Door 3']:
prev_choice = uigame.players_choice
uigame.switch_door(True)
pb = doors_arr[prev_choice-1]
pb.icon = ''
pb.button_style = ''
b.disabled = True
b.button_style = 'info'
status.value = 'Now click \'reveal\' to see what\'s behind Door {}.'.format(uigame.players_choice)
elif b.description == 'reset':
uigame.reset()
reset_ui()
update_status('Pick a door...')
elif b.description == 'reveal':
uigame.switch_door(False)
update_ui_reveal()
elif uigame.state == 2:
if b.description == 'reveal':
update_ui_reveal()
else:
if b.description == 'continue':
uigame.continue_play()
reset_ui(False)
update_status('Pick a door once more...')
elif b.description == 'reset':
uigame.reset()
reset_ui()
update_status('Pick a door...')
# hook up all buttons to our event-handling function
door1.on_click(on_button_clicked)
door2.on_click(on_button_clicked)
door3.on_click(on_button_clicked)
reveal.on_click(on_button_clicked)
contin.on_click(on_button_clicked)
reset.on_click(on_button_clicked)
display(ui)
# -
# How to play:
# * Click a door to select.
# * Monty will select a remaining door and open to reveal a goat.
# * Click the `reveal` button to open your selected door.
# * Or click the remaining unopened Door button to switch your door choice, and then click `reveal`.
# * Click the `continue` button to keep playing.
# * You may click the `reset` button at any time to return the game back to its initial state.
# ----
#
# <NAME> and <NAME>, Harvard University and Stanford University, © 2019 by Taylor and Francis Group, LLC
| Ch2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# v0.10
# Dependencies: OpenSSL installed (to handle https/SSL), requests module
# Notes: The requests module handles most of the heavy lifting
# This will loop through all results in a query, 50 at a time,
# saving individual JSON files to the same directory as the script
import requests
from requests.auth import HTTPBasicAuth
from datetime import datetime # used to name json files
from time import sleep
import json
query = 'hlead(recession) or subject(recession) and date > 2006 and date<2010' # Place entire query string inside these quotes. (e.g. "'<NAME>'")
filter = "SearchType eq LexisNexis.ServicesApi.SearchType'Boolean' and (PublicationType eq 'SW5kdXN0cnkgVHJhZGUgUHJlc3M' or PublicationType eq 'TmV3c3dpcmVzICYgUHJlc3MgUmVsZWFzZXM'or PublicationType eq 'TmV3c3BhcGVycw' or PublicationType eq 'TWFnYXppbmVzICYgSm91cm5hbHM') and Language eq LexisNexis.ServicesApi.Language'English' and Location eq 'VVM' and Geography eq 'Z3VpZD1HUjEyMDtwYXJlbnRndWlkPQ'" # Place entire query string inside these quotes. (e.g. "SearchType eq LexisNexis.ServicesApi.SearchType'Boolean' and PublicationType eq 'TmV3c3BhcGVycw' and GroupDuplicates eq LexisNexis.ServicesApi.GroupDuplicates'ModerateSimilarity' and Language eq LexisNexis.ServicesApi.Language'English'")
client_id = 'F8PT8FS<KEY>' # real Client ID
secret = '<KEY>' # real Secret
############# Begin Function Definitions #############
def get_token(client_id, secret):
"""Gets Authorizaton token to use in other requests."""
auth_url = 'https://auth-api.lexisnexis.com/oauth/v2/token'
payload = ('grant_type=client_credentials&scope=http%3a%2f%2f'
'oauth.lexisnexis.com%2fall')
headers = {'Content-Type': 'application/x-www-form-urlencoded'}
r = requests.post(
auth_url,
auth=HTTPBasicAuth(client_id, secret),
headers=headers,
data=payload)
json_data = r.json()
return json_data['access_token']
def build_url(content='News', query='', skip=0, expand='Document', top=50, filter=filter):
"""Builds the URL part of the request to Web Services API."""
if filter is not None: # Filter is an optional parameter
api_url = ('https://services-api.lexisnexis.com/v1/' + content +
'?$expand=' + expand + '&$search=' + query +
'&$skip=' + str(skip) + '&$top=' + str(top) +
'&$filter=' + filter)
else:
api_url = ('https://services-api.lexisnexis.com/v1/' + content +
'?$expand=' + expand + '&$search=' + query +
'&$skip=' + str(skip) + '&$top=' + str(top))
return api_url
def build_header(token):
"""Builds the headers part of the request to Web Services API."""
headers = {'Accept': 'application/json;odata.metadata=minimal',
'Connection': 'Keep-Alive',
'Host': 'services-api.lexisnexis.com'}
headers['Authorization'] = 'Bearer ' + token
return headers
def get_result_count(json_data):
"""Gets the number of results from @odata.count in the response"""
return json_data['@odata.count']
def time_now():
"""Gets current time to the second."""
now = datetime.now()
return now.strftime('%Y-%m-%d-%H%M%S')
############# End Function Definitions #############
############# Begin business logic #############
token = get_token(client_id, secret) # 1 token will work for multiple requests
request_headers = build_header(token)
skip_value = 0 # Sets starting skip
top = 50 # Adjusts the number of results to return
while True:
request_url = build_url(content='News', query=query, skip=skip_value, expand='Document', top=top, filter=filter) # Uses the filter specified above; pass filter=None here to disable it
r = requests.get(request_url, headers=request_headers)
with open(str(time_now()) + '.json', 'w') as f_out: # Creates a file with the current time as the file name.
f_out.write(r.text)
skip_value = (skip_value + top)
json_data = r.json()
if skip_value > get_result_count(json_data): # Check to see if all the results have been looped through
break
sleep(12) # Limit 5 requests per minute (every 12 seconds)
| codes/lexis_api_access.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.9 64-bit
# language: python
# name: python38964bitc399adc8f8ec440ba2e333c1f6098275
# ---
# This notebook contains a few exercises on NumPy, Pandas and Scipy.
#
# Assigned readings:
# * [A Visual Intro to NumPy and Data Representation](http://jalammar.github.io/visual-numpy) by <NAME>, **up to "Transposing and Reshaping"**.
# * [Pandas DataFrame introduction](https://pandas.pydata.org/docs/getting_started/intro_tutorials/01_table_oriented.html)
# * [Pandas read-write tutorial](https://pandas.pydata.org/docs/getting_started/intro_tutorials/02_read_write.html)
# * [Scipy introduction](https://docs.scipy.org/doc/scipy/tutorial/general.html)
# * [Scipy IO tutorial](https://docs.scipy.org/doc/scipy/tutorial/io.html)
#
# Exercises marked with **!** require information not found in the assigned readings. To solve them you will have to explore the online documentations:
# * [NumPy](https://numpy.org/doc/stable/user/index.html)
# * [Pandas](https://pandas.pydata.org/docs/user_guide/index.html)
# * [Scipy](https://docs.scipy.org/doc/scipy/tutorial/index.html)
# # Numpy
#
# ## Operations on 1D arrays
#
# To practice with operations on 1D NumPy arrays, we will illustrate the law of large numbers. Before we start, we will import the NumPy module and fix the random seed used in the random number generator:
import numpy as np
np.random.seed(0)
# **Exercise 1.1.1**
#
# Create a 1D array of 50 random numbers drawn from the uniform distribution in [0,1]. Determine the minimum, maximum and mean value in the array.
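# One possible solution sketch (the variable name `arr` is my own choice):

```python
import numpy as np

np.random.seed(0)
arr = np.random.uniform(0, 1, 50)   # 50 draws from the uniform distribution on [0, 1]
print(arr.min(), arr.max(), arr.mean())
```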
# **Exercise 1.1.2**
#
# Create a Python list with 100 elements where element $i$ is the mean of an array of $i$ elements drawn from the uniform distribution in [0,1].
# Which one of the 5th, 50th and 100th elements is closest to 0.5?
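# One possible way to build such a list (a sketch; I assume element $i$ holds the mean of $i+1$ draws, so that every array is non-empty):

```python
import numpy as np

np.random.seed(0)
# element i of the list is the mean of an array of i + 1 uniform draws
means = [np.random.uniform(0, 1, n).mean() for n in range(1, 101)]
```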
# Assuming that the previous Python list is stored in a variable called `means`, its content can be plotted as follows:
#
#
from matplotlib import pyplot as plt
plt.plot(means)
# If all went well, the list should converge to 0.5!
# ## Operations on 2D arrays
#
# We will practice operations on 2D NumPy arrays by manipulating 2D images. The Python Imaging Library (PIL) provides an easy way to load 2D images of various types in NumPy arrays. Here, we will practice with a PNG image representing the NumPy logo:
from PIL import Image
import os
image = np.array(Image.open(os.path.join('data','numpy.png')))
# NumPy arrays representing images can easily be shown with Matplotlib:
from matplotlib import pyplot as plt
plt.imshow(image)
# **Exercise 1.2.1**
#
# Determine the size of the image (number of pixels in the x and y dimensions).
# **Exercise 1.2.2**
#
# Plot the bottom half of the image, i.e., the rows from index 250 onward.
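# A slicing sketch — I use a stand-in array here so the cell runs even without `data/numpy.png`; in the notebook you would slice `image` itself:

```python
import numpy as np
from matplotlib import pyplot as plt

# stand-in for a 500-pixel-tall image like the one loaded earlier
demo = np.arange(500 * 500).reshape(500, 500)
bottom_half = demo[250:, :]   # keep only the rows from index 250 onward
plt.imshow(bottom_half)
```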
# **! Exercise 1.2.3**
#
# Write a program to remove the whitespace around the image.
# **! Exercise 1.2.4**
#
# Using NumPy's `linalg` module, solve the equation **Ax** = **b**, where:
#
# $$
# \textbf{A}=
# \begin{bmatrix}
# 8 & -6 & 2\\
# -4 & 11 & -7\\
# 4 & -7 & 6
# \end{bmatrix}
# \quad
# \mathrm{and} \quad \textbf{b} = \begin{bmatrix}
# 28\\
# -40\\
# 33
# \end{bmatrix}
# $$
# Determine the inverse of **A**
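# A sketch of one way to do this with NumPy's `linalg` module:

```python
import numpy as np

A = np.array([[8, -6, 2],
              [-4, 11, -7],
              [4, -7, 6]])
b = np.array([28, -40, 33])

x = np.linalg.solve(A, b)    # solves Ax = b, here x = [2, -1, 3]
A_inv = np.linalg.inv(A)     # inverse of A
```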
# # Pandas
#
# We will explore file `airbnb.csv`, a dataset of Airbnb prices in New York City. The dataset was exported from [Kaggle](https://www.kaggle.com/dgomonov/new-york-city-airbnb-open-data).
# **Exercise 2.1**
#
# Load the dataset in a Pandas data frame and show a sample of the data frame:
# Each row holds information for a given listing and columns represent the attributes of a listing.
# **Exercise 2.2**
#
# What is the highest price listed in the dataset?
# What is the total number of reviews contained in the dataset?
# What are the min, max, and mean of the following features?
# * Price
# * Number of reviews
# * Minimum nights
# **Exercise 2.3**
#
# How many listings have a price lower than $100?
# **Exercise 2.4**
#
# What is the cheapest private room in Manhattan?
# **! Exercise 2.5**
#
# Among the numerical features (latitude, longitude, minimum nights, etc.), which one is the most correlated with the listing price?
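# This kind of question can be explored with `DataFrame.corr()`; a sketch on a toy frame (these column names and values are made up, not the real Airbnb data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
minimum_nights = rng.integers(1, 30, 200)
toy = pd.DataFrame({
    "minimum_nights": minimum_nights,
    "latitude": rng.uniform(40.5, 40.9, 200),
    "price": minimum_nights * 3 + rng.normal(0, 5, 200),  # correlated by construction
})
# absolute correlation of every other numeric column with price
corr_with_price = toy.corr()["price"].drop("price").abs()
print(corr_with_price.idxmax())
```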
# # Scipy
#
#
# **Exercise 3.1**
#
# A colleague of yours who uses MATLAB sent you data in the mat file `points.mat`. Load this file and retrieve the x and y arrays in it. Using matplotlib, plot the (x, y) points.
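# Since `points.mat` is not shipped here, this sketch round-trips a hypothetical file with the same layout (the keys 'x' and 'y' are an assumption):

```python
import numpy as np
from matplotlib import pyplot as plt
from scipy.io import loadmat, savemat

# write a small demo .mat file, then read it back the way you would read points.mat
x_in = np.linspace(1, 10, 20)
savemat("points_demo.mat", {"x": x_in, "y": np.sqrt(x_in)})

data = loadmat("points_demo.mat")
x, y = data["x"].ravel(), data["y"].ravel()  # loadmat returns 2-D arrays, so flatten
plt.plot(x, y, "o")
```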
# **! Exercise 3.2**
#
# Using Scipy's `interpolate` module, interpolate the datapoints using (1) nearest neighbors, and (2) cubic splines. Plot the interpolants.
# **! Exercise 3.3**
#
# Using Scipy's `integrate` module, determine the integral (area under the curve) of the interpolants between 1 and 10.
| exercises/numpy-pandas-scipy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Random Forest model for prediction of avalanche accidents
# import of standard Python libraries for data analysis
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# import needed objects from the scikit-learn library for machine learning
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, classification_report
from sklearn import preprocessing
from sklearn import metrics
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split, cross_val_score
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold
# I would like to see all rows and columns of dataframes
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
# ## Reading dataset and final preprocessing
# reading dataset
df = pd.read_csv("final_data.csv")
df.head()
# viewing all variables to decide which are not useful
df.columns
# removing variables not needed for random forest model
df_clean = df.drop(columns=["massif_num","lon","lat","aval_type", "acccidental_risk_index",
'snow_thickness_1D', 'snow_thickness_3D', 'snow_thickness_5D',
'snow_water_1D', 'snow_water_3D', 'snow_water_5D', 'risk_index',
'thickness_of_wet_snow_top_of_snowpack','thickness_of_frozen_snow_top_of_snowpack',
'surface_air_pressure_mean', 'rainfall_rate', 'drainage', 'runoff',
'liquid_water_in_soil', 'frozen_water_in_soil', 'elevation','snow_melting_rate'])
# viewing if data type of the variables is suitable for Random Forest
# there are 2 string categorical variables that need to be transformed: day and massif name
df_clean.info()
df_clean.shape
# to transform the massif names I will use one-hot encoding via the pd.get_dummies method
# selecting of values for dummies
df_clean.massif_name.unique()
massifs = ('Chablais', 'Aravis', 'Mont-Blanc', 'Bauges', 'Beaufortin',
'Hte-tarent', 'Chartreuse', 'Belledonne', 'Maurienne', 'Vanoise',
'Hte-maurie', 'Gdes-rouss', 'Thabor', 'Vercors', 'Oisans',
'Pelvoux', 'Queyras', 'Devoluy', 'Champsaur', 'Parpaillon',
'Ubaye', 'Ht_Var-Ver', 'Mercantour')
# +
# creating initial dataframe
df_massifs = pd.DataFrame(massifs, columns=['massif_name'])
# generate binary values using get_dummies
dum_df = pd.get_dummies(df_massifs, columns=["massif_name"], prefix="massif")
# merge initial dataframe with dummies
df_massifs = df_massifs.join(dum_df)
# merge final dataset with dataframe with dummies
df_clean = df_clean.merge(df_massifs, how="left", on="massif_name")
# -
# checking dataset with dummy variables
df_clean.head()
# getting rid of redundant variable
df_clean = df_clean.drop(columns=['massif_name'])
# second categorical variable to transform is day
# slicing is used to get years and months from day variable
# and afterwards we transform the years and months to integers
df_clean["year"] = (df_clean.day.str[:4]).astype(int)
df_clean["month"] = (df_clean.day.str[5:7]).astype(int)
df_clean.head()
# getting rid of redundant variables
df_clean = df_clean.drop(columns=['day', 'year'])
# checking dataset with new time variables
df_clean.head()
# verifying there are no null values in dataset
(df_clean.apply(lambda x: x.isnull().sum())).sum()
# checking one last time my dataset before creating RF models
df_clean.info()
# ## Problem with very imbalanced dataset
# - metrics like ROC AUC or Accuracy will be very high, but that does not provide any significant insight, because the imbalanced dataset is dominated by cases without avalanche accidents
# - in the classification report, **Precision will have less importance for my analysis than Recall**, because I want to reduce the number of False Negatives rather than the number of False Positives
# - **for the performance of the RF model, I will look mainly at the weighted F1-score** metric because it is the most suitable for this kind of analysis. The F1-score is the harmonic mean of precision and recall:
#
# $$ F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} $$
# Percentage of cases with and without avalanches showing imbalanced dataset
round(df_clean.aval_accident.value_counts(normalize=True) * 100, 2)
# ### Vanilla Undersampling with RF class weight 'balanced_subsample'
# - strategy to improve recall and F1 score, especially for cases where avalanche accidents happened
# - class weight balanced subsample is used because in previous iterations of model this class weight had the best performance metrics
# +
# class count for undersampling
count_class_0, count_class_1 = df_clean.aval_accident.value_counts()
# dividing by class
df_class_0 = df_clean[df_clean['aval_accident'] == 0]
df_class_1 = df_clean[df_clean['aval_accident'] == 1]
# +
# creating new dataset for undersampling and plotting
df_class_0_under = df_class_0.sample(count_class_1)
df_test_under = pd.concat([df_class_0_under, df_class_1], axis=0)
print('Random under-sampling:')
print(df_test_under.aval_accident.value_counts())
df_test_under.aval_accident.value_counts().plot(kind='bar', title='Count (target)');
# +
# labels are the values we want to predict
labels_under = np.array(df_test_under['aval_accident'])
# remove the labels from the features
features_under = df_test_under.drop(columns=['aval_event', 'aval_accident'])
# saving feature names for later use
feature_list_under = list(features_under.columns)
# convert to numpy array
features_under = np.array(features_under)
# -
# splitting dataset into train and test
train_features_under, test_features_under, train_labels_under, test_labels_under = train_test_split(features_under, labels_under, test_size = 0.33, random_state = 42)
# displaying sizes of train/test features and labels
print('Training Features Shape:', train_features_under.shape)
print('Training Labels Shape:', train_labels_under.shape)
print('Testing Features Shape:', test_features_under.shape)
print('Testing Labels Shape:', test_labels_under.shape)
# +
# cross-validation below uses the undersampled features and labels defined above
# defining model
rfc_under=RandomForestClassifier(n_estimators=100, class_weight='balanced_subsample')
# defining evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=15, n_repeats=3, random_state=1)
# evaluating model
scores = cross_val_score(rfc_under, features_under, labels_under, scoring='roc_auc', cv=cv, n_jobs=-1)
# +
# fitting model and prediction
rfc_under.fit(train_features_under,train_labels_under)
label_pred_under=rfc_under.predict(test_features_under)
# -
# summary of performance
print('Mean ROC AUC: %.3f' % mean(scores))
print("Accuracy:",round(metrics.accuracy_score(test_labels_under, label_pred_under),5))
print("F1_score weighted:", round(f1_score(test_labels_under, label_pred_under, average='weighted'),5))
# displaying confusion matrix
sns.set(font_scale=1.75)
cf_matrix_under_1 = confusion_matrix(test_labels_under, label_pred_under)
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cf_matrix_under_1, annot=True, ax=ax, cmap="YlGnBu", fmt=".0f", linewidths=.5)
plt.title("Confusion matrix for avalanche accidents with undersampling, cw balanced subsample")
plt.xlabel('Predicted')
plt.ylabel('Real');
# displaying report of performance of model
print(classification_report(test_labels_under, label_pred_under))
# +
# creating dataframe to easily show importance of different variables on prediction
features_under_df = df_clean.drop(columns=['aval_event', 'aval_accident'])
pd.DataFrame({'Variable':features_under_df.columns,
'Importance':rfc_under.feature_importances_}).sort_values('Importance', ascending=False)
# -
# ### Vanilla Undersampling with RF class weight "balanced"
# - verification of performance for different class_weight in RF model
# - here class weight balanced is chosen
# +
# defining model, this time with different class weight
rfc_under_b=RandomForestClassifier(n_estimators=100, class_weight='balanced')
# evaluating model
scores = cross_val_score(rfc_under_b, features_under, labels_under, scoring='roc_auc', cv=cv, n_jobs=-1)
# +
# fitting model and prediction
rfc_under_b.fit(train_features_under,train_labels_under)
label_pred_under=rfc_under_b.predict(test_features_under)
# -
# summary of performance
print('Mean ROC AUC: %.5f' % mean(scores))
print("Accuracy:",round(metrics.accuracy_score(test_labels_under, label_pred_under),5))
print("F1_score weighted:", round(f1_score(test_labels_under, label_pred_under, average='weighted'),5))
# displaying confusion matrix
cf_matrix_under_2 = confusion_matrix(test_labels_under, label_pred_under)
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cf_matrix_under_2, annot=True, ax=ax, cmap="YlGnBu", fmt=".0f", linewidths=.5)
plt.title("Confusion matrix for avalanche accidents with undersampling, cw balanced")
plt.xlabel('Predicted')
plt.ylabel('Real');
# displaying report of performance of model
print(classification_report(test_labels_under, label_pred_under))
# +
# creating dataframe to easily show importance of different variables on prediction
features_under_df_b = df_clean.drop(columns=['aval_event', 'aval_accident'])
pd.DataFrame({'Variable':features_under_df_b.columns,
'Importance':rfc_under_b.feature_importances_}).sort_values('Importance', ascending=False)
# -
# ## Results without undersampling
# - in reality there won't be 50 % of cases with and 50 % without avalanche
# - therefore the predictive capacity of my **Random Forest model will be closer to the results without the undersampling strategy**, even though in reality more avalanches occur than are detected, so the true share is higher than the 0.7 % of avalanches in my dataset
# ### RF model without resampling with class weight balanced subsample
# +
# labels are the values we want to predict
labels_ac = np.array(df_clean['aval_accident'])
# removing the labels from the features
features_ac = df_clean.drop(columns=['aval_event', 'aval_accident'])
# saving feature names for later use
feature_list_ac = list(features_ac.columns)
# converting to numpy array
features_ac = np.array(features_ac)
# -
# splitting dataset into train and test
train_features_ac, test_features_ac, train_labels_ac, test_labels_ac = train_test_split(features_ac, labels_ac, test_size = 0.33, random_state = 42)
# displaying sizes of train/test features and labels
print('Training Features Shape:', train_features_ac.shape)
print('Training Labels Shape:', train_labels_ac.shape)
print('Testing Features Shape:', test_features_ac.shape)
print('Testing Labels Shape:', test_labels_ac.shape)
# +
# cross-validation below uses the full features and labels defined above
# I have chosen the model rfc_under, which previously provided better performance results
# evaluating model
scores = cross_val_score(rfc_under, features_ac, labels_ac, scoring='roc_auc', cv=cv, n_jobs=-1)
# +
# %%time
# fitting model and prediction
rfc_under.fit(train_features_ac,train_labels_ac)
label_pred_ac = rfc_under.predict(test_features_ac)
# -
# summary of performance
print('Mean ROC AUC: %.5f' % mean(scores))
print("Accuracy:",round(metrics.accuracy_score(test_labels_ac, label_pred_ac),5))
print("F1_score weighted:", round(f1_score(test_labels_ac, label_pred_ac, average='weighted'),5))
# displaying confusion matrix
f, ax = plt.subplots(figsize=(5, 5))
cf_matrix_2 = confusion_matrix(test_labels_ac, label_pred_ac)
sns.heatmap(cf_matrix_2, annot=True, ax=ax, cmap="YlGnBu", fmt=".0f", linewidths=.5)
plt.title("Confusion matrix for avalanche accidents without undersampling, cw balanced subsample")
plt.xlabel('Predicted')
plt.ylabel('Real');
# displaying report of performance of model
print(classification_report(test_labels_ac, label_pred_ac))
# creating dataframe to easily show importance of different variables on prediction
features_ac_df = df_clean.drop(columns=['aval_event', 'aval_accident'])
pd.DataFrame({'Variable':features_ac_df.columns,
'Importance':rfc_under.feature_importances_}).sort_values('Importance', ascending=False)
# ### RF model without resampling with class weight balanced
# +
# %%time
# evaluate model
scores = cross_val_score(rfc_under_b, features_ac, labels_ac, scoring='roc_auc', cv=cv, n_jobs=-1)
# fit the model with class weight "balanced"
rfc_under_b.fit(train_features_ac,train_labels_ac)
label_pred_ac = rfc_under_b.predict(test_features_ac)
# -
# displaying confusion matrix
cf_matrix = confusion_matrix(test_labels_ac, label_pred_ac)
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cf_matrix, annot=True, ax=ax, cmap="YlGnBu", fmt=".0f", linewidths=.5)
plt.title("Confusion matrix for avalanche accidents without undersampling, cw balanced")
plt.xlabel('Predicted')
plt.ylabel('Real');
# displaying report of performance of model
print(classification_report(test_labels_ac, label_pred_ac))
# summarize performance
print('Mean ROC AUC: %.3f' % mean(scores))
print("Accuracy:",round(metrics.accuracy_score(test_labels_ac, label_pred_ac),5))
print("F1_score weighted:", round(f1_score(test_labels_ac, label_pred_ac, average='weighted'),5))
# ## Results with only 10 most important variables
# - To better understand the importance of different snow and meteo variables on predictive capacity, I decided to reduce the number of variables used for the RF model
# +
# displaying 10 most important variables
variables = pd.DataFrame({'Variable':features_ac_df.columns,
'Importance':rfc_under.feature_importances_}).sort_values('Importance', ascending=False)
variables.head(10)
# -
# saving 10 most important variables as variable
vars_10 = variables.Variable[0:10]
# checking type of variable vars_10
type(vars_10)
# changing variable to dataframe and verification of the change
vars_10 = vars_10.to_frame()
type(vars_10)
# displaying the array of 10 variables for easier copy-paste
vars_10.Variable.unique()
# creating new dataframe only with the 10 most important variables
df_vars = df_clean[['snow_thickness_7D', 'snow_water_7D', 'month',
'freezing_level_altitude_mean', 'net_radiation',
'whiteness_albedo', 'rain_snow_transition_altitude_mean',
'air_temp_min', 'near_surface_humidity_mean',
'penetration_ram_resistance', 'aval_accident']]
# checking the result
df_vars.info()
# displaying of new dataframe
df_vars.shape
# +
# labels are the values we want to predict
labels_var = np.array(df_vars['aval_accident'])
# removing the labels from the features
features_var = df_vars.drop(columns=['aval_accident'])
# saving feature names for later use
feature_list_var = list(features_var.columns)
# converting to numpy array
features_var = np.array(features_var)
# -
# splitting dataset into train and test
train_features_var, test_features_var, train_labels_var, test_labels_var = train_test_split(features_var, labels_var, test_size = 0.33, random_state = 42)
# displaying sizes of train/test features and labels
print('Training Features Shape:', train_features_var.shape)
print('Training Labels Shape:', train_labels_var.shape)
print('Testing Features Shape:', test_features_var.shape)
print('Testing Labels Shape:', test_labels_var.shape)
# +
# %%time
# cross-validation below uses the reduced features and labels defined above
# define model
rfc_var=RandomForestClassifier(n_estimators=100, class_weight='balanced')
# define evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=15, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(rfc_var, features_var, labels_var, scoring='roc_auc', cv=cv, n_jobs=-1)
# +
# %%time
# fitting the model and prediction
rfc_var.fit(train_features_var, train_labels_var)
label_pred_var = rfc_var.predict(test_features_var)
# -
# displaying confusion matrix
cf_matrix_var = confusion_matrix(test_labels_var, label_pred_var)
f, ax = plt.subplots(figsize=(5, 5))
sns.heatmap(cf_matrix_var, annot=True, ax=ax, cmap="YlGnBu", fmt=".0f", linewidths=.5)
plt.title("Confusion matrix for avalanche accidents with 10 variables, cw balanced")
plt.xlabel('Predicted')
plt.ylabel('Real');
# displaying report of performance of model
print(classification_report(test_labels_var, label_pred_var))
# summarize performance of the model
print('Mean ROC AUC: %.3f' % mean(scores))
print("Accuracy:",round(metrics.accuracy_score(test_labels_var, label_pred_var),5))
print("F1_score weighted:", round(f1_score(test_labels_var, label_pred_var, average='weighted'),5))
# creating dataframe to easily show importance of 10 most important variables on prediction
features_var_df = df_vars.drop(columns=['aval_accident'])
pd.DataFrame({'Variable':features_var_df.columns,
'Importance':rfc_var.feature_importances_}).sort_values('Importance', ascending=False)
# ## Summary of Random Forest results
# - Undersampling:
#
# I used **undersampling where 50 % of cases included days with an avalanche and 50 % of cases were without an avalanche**. I used the Random Forest model with undersampling for two class-weight options: 1) "balanced subsample" and 2) "balanced". **The model returned slightly better results for class weight balanced. The weighted F1-score was 0.89324** for class weight balanced, while for class weight balanced subsample it was 0.89002. **Recall for days with an avalanche was 0.93 for both models, while recall for days without an avalanche was 0.86 for the model with balanced class weight** and 0.85 for the model with balanced subsample.
#
# - Normal Sample:
#
# Afterwards I used the trained RF models on the normal sample with data highly imbalanced in favour of days without avalanches. Again I did two iterations for the different class weights. Both iterations had very similar results. **Recall for days without an avalanche was 1**. This means it was perfect, but this result is neither surprising nor that impressive because of the imbalanced nature of my dataset. On the other hand, **recall for days with an avalanche was 0.44 for both iterations. This means I could correctly predict fewer than every second avalanche**, which is not a very good result. The weighted F1-score was in both iterations better than previously with undersampling. **The normal sample had a weighted F1-score of 0.995, which is 0.102 better than the best result with undersampling**.
#
# - Reduction of variables:
#
# When I **reduced the number of independent variables to the 10 most important ones**, I could observe a decrease in the performance of the RF model in the weighted F1-score and also in recall for days with an avalanche. **The recall was 0.43, while we previously got 0.44 recall for days with an avalanche without the reduction of variables. Therefore it is not recommended to reduce the number of variables in order to reduce the execution time of the Random Forest model, because we put more importance on keeping a higher recall for days with an avalanche**.
| random_forest-avalanche_accidents.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from hocr_utils import utils
from PIL import Image
# ## Transform PIL images to hOCR
image = Image.open('./data/sample.png')
hocr = utils.images_to_hocr([image])
# ## Transform pdf to hOCR
hocr = utils.pdf_to_hocr('./data/sample.pdf')
# ## Transform hOCR to dictionary
hocr_dict = utils.hocr_to_dict(hocr)
| notebooks/Examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework 3 - Pandas
# Load required modules
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
# ## Pandas Introduction
#
# ## Reading File
# #### 1.1) Read the CSV file called 'data3.csv' into a dataframe called df.
# #### Data description
# * Data source: http://www.fao.org/nr/water/aquastat/data/query/index.html
# * Data, units:
# * GDP, current USD (CPI adjusted)
# * NRI, mm/yr
# * Population density, inhab/km^2
# * Total area of the country, 1000 ha = 10km^2
# * Total Population, unit 1000 inhabitants
# your code here
# #### 1.2) Display the first 10 rows of the dataframe.
# your code here
# #### 1.3) Display the column names.
# your code here
# #### 1.4) Use iloc to display the first 3 rows and first 4 columns.
# your code here
# ## Data Preprocessing
#
# #### 2.1) Find all the rows that have 'NaN' in the 'Symbol' column. Display first 5 rows.
#
# ##### Hint : You might have to use a mask
# your code here
# #### 2.2) Now, we will try to get rid of the NaN valued rows and columns. First, drop the column 'Other' which only has 'NaN' values. Then drop all other rows that have any column with a value 'NaN'. Store the result in place. Then display the last 5 rows of the dataframe.
# your code here
# #### 2.3) For our analysis we do not want all the columns in our dataframe. Let's drop all the redundant columns/features.
# #### **Drop columns**: **Area Id, Variable Id, Symbol**. Save the new dataframe as df1. Display the first 5 rows of the new dataframe.
# your code here
# #### 2.4) Display all the unique values in your new dataframe for each of the columns: Area, Variable Name, Year.
# your code here
# #### 2.5) Display some basic statistical details like percentile, mean, std etc. of our dataframe.
# your code here
# ## Plot
# #### 3.1) Plot a bar graph showing the count for each unique value in the column 'Area'. Give it a title.
# your code here
# ## Extract specific statistics from the preprocessed data:
#
# #### 4.1) Create a dataframe 'dftemp' to store rows where Area is 'Iceland'. Display the dataframe.
# your code here
# #### 4.2) Print the years (with the same format as 2.5) when the National Rainfall Index (NRI) was greater than 900 and less than 950 in Iceland. Use the dataframe you created in the previous question 'dftemp'.
# your code here
| Homework/Homework 3/pandas-exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.1 64-bit (''AmbulanceGame'': conda)'
# language: python
# name: python38164bitambulancegameconda313376b6b30b4ff1a63b667ba23e8abb
# ---
import ambulance_game as abg
import numpy as np
import matplotlib.pyplot as plt
# +
lambda_2 = 0.2
lambda_1 = 0.3
mu = 0.2
num_of_servers = 5
threshold = 6
system_capacity = 10
buffer_capacity = 10
num_of_trials = 10
seed_num = 0
runtime = 10000
output = "both"
accuracy = 10
# -
plot_over = "lambda_2"
max_parameter_value = 1
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
0.1,
lambda_1,
mu,
num_of_servers,
threshold,
num_of_trials,
seed_num,
runtime,
system_capacity,
buffer_capacity,
output,
plot_over,
max_parameter_value,
accuracy,
)
# $$
# W = W_o \frac{\lambda_1}{\lambda_1 + \lambda_2} + W_a \frac{\lambda_2}{\lambda_1 + \lambda_2}
# $$
plot_over = "lambda_2"
max_parameter_value = 1
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2=0.1,
lambda_1=lambda_1,
mu=mu,
num_of_servers=num_of_servers,
threshold=threshold,
num_of_trials=num_of_trials,
seed_num=seed_num,
runtime=runtime,
warm_up_time=100,
system_capacity=system_capacity,
buffer_capacity=buffer_capacity,
output=output,
plot_over=plot_over,
max_parameter_value=max_parameter_value,
accuracy=accuracy,
)
# $$
# W = W_o \frac{\lambda_1}{\lambda_1 + \lambda_2} P(L_o') \; + \; W_a \frac{\lambda_2}{\lambda_1 + \lambda_2} P(L_a')
# $$
# # Overall Waiting Time Formula
#
# $$
# W = W_o \frac{\lambda_1 P(L_o')}{\lambda_1 P(L_o') + \lambda_2 P(L_a')} \; + \; W_a \frac{\lambda_2 P(L_a')}{\lambda_1 P(L_o') + \lambda_2 P(L_a')}
# % = \frac{W_o \lambda_1 P(L_o') + W_a \lambda_2 P(L_a')}{\lambda_1 P(L_o') + \lambda_2 P(L_a')}
# $$
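# A quick numeric sanity check of the overall waiting time formula above (all parameter values here are made up):

```python
# combines the waits of the two customer types, weighted by their accepted arrival rates
def overall_wait(W_o, W_a, lambda_1, lambda_2, P_o, P_a):
    denom = lambda_1 * P_o + lambda_2 * P_a
    return (W_o * lambda_1 * P_o + W_a * lambda_2 * P_a) / denom

w = overall_wait(W_o=2.0, W_a=5.0, lambda_1=0.3, lambda_2=0.2, P_o=0.9, P_a=0.8)
print(w)
```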
# +
lambda_2 = 0.2
lambda_1 = 0.3
mu = 0.2
num_of_servers = 5
threshold = 6
system_capacity = 10
buffer_capacity = 10
num_of_trials = 10
seed_num = 0
runtime = 10000
output = "both"
accuracy = 10
# -
plt.figure(figsize=(20, 10))
abg.get_heatmaps(
lambda_2,
lambda_1=lambda_1,
mu=mu,
num_of_servers=num_of_servers,
threshold=threshold,
system_capacity=system_capacity,
buffer_capacity=buffer_capacity,
seed_num=seed_num,
runtime=runtime,
num_of_trials=num_of_trials,
)
plot_over = "lambda_2"
max_parameter_value = 1
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2=0.1,
lambda_1=lambda_1,
mu=mu,
num_of_servers=num_of_servers,
threshold=threshold,
num_of_trials=num_of_trials,
seed_num=seed_num,
runtime=runtime,
warm_up_time=100,
system_capacity=system_capacity,
buffer_capacity=buffer_capacity,
output=output,
plot_over=plot_over,
max_parameter_value=max_parameter_value,
accuracy=accuracy,
)
plot_over = "lambda_1"
max_parameter_value = 1
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2=lambda_2,
lambda_1=0.1,
mu=mu,
num_of_servers=num_of_servers,
threshold=threshold,
num_of_trials=num_of_trials,
seed_num=seed_num,
runtime=runtime,
warm_up_time=100,
system_capacity=system_capacity,
buffer_capacity=buffer_capacity,
output=output,
plot_over=plot_over,
max_parameter_value=max_parameter_value,
accuracy=accuracy,
)
plot_over = "mu"
max_parameter_value = 0.5
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2=lambda_2,
lambda_1=lambda_1,
mu=0.1,
num_of_servers=num_of_servers,
threshold=threshold,
num_of_trials=num_of_trials,
seed_num=seed_num,
runtime=runtime,
warm_up_time=100,
system_capacity=system_capacity,
buffer_capacity=buffer_capacity,
output=output,
plot_over=plot_over,
max_parameter_value=max_parameter_value,
accuracy=accuracy,
)
plot_over = "num_of_servers"
max_parameter_value = 10
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2=lambda_2,
lambda_1=lambda_1,
mu=mu,
num_of_servers=3,
threshold=threshold,
num_of_trials=num_of_trials,
seed_num=seed_num,
runtime=runtime,
warm_up_time=100,
system_capacity=system_capacity,
buffer_capacity=buffer_capacity,
output=output,
plot_over=plot_over,
max_parameter_value=max_parameter_value,
accuracy=8,
)
plot_over = "threshold"
max_parameter_value = 14
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2=lambda_2,
lambda_1=lambda_1,
mu=mu,
num_of_servers=num_of_servers,
threshold=5,
num_of_trials=num_of_trials,
seed_num=seed_num,
runtime=runtime,
warm_up_time=100,
system_capacity=system_capacity,
buffer_capacity=buffer_capacity,
output=output,
plot_over=plot_over,
max_parameter_value=max_parameter_value,
accuracy=accuracy,
)
plot_over = "system_capacity"
max_parameter_value = 25
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2=lambda_2,
lambda_1=lambda_1,
mu=mu,
num_of_servers=num_of_servers,
threshold=threshold,
num_of_trials=num_of_trials,
seed_num=seed_num,
runtime=runtime,
warm_up_time=100,
system_capacity=6,
buffer_capacity=buffer_capacity,
output=output,
plot_over=plot_over,
max_parameter_value=max_parameter_value,
accuracy=accuracy,
)
plot_over = "buffer_capacity"
max_parameter_value = 25
x_axis, mean_sim, mean_markov, all_sim = abg.get_plot_comparing_times(
lambda_2=lambda_2,
lambda_1=lambda_1,
mu=mu,
num_of_servers=num_of_servers,
threshold=threshold,
num_of_trials=num_of_trials,
seed_num=seed_num,
runtime=runtime,
warm_up_time=100,
system_capacity=system_capacity,
buffer_capacity=6,
output=output,
plot_over=plot_over,
max_parameter_value=max_parameter_value,
accuracy=accuracy,
)
| nbs/meetings/2020-06-16 meeting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
with open("test", "r") as fp:
s = fp.readline().strip()
size = len(s)
lower_s = s.lower()
while True:
s_arr = []
flag = False
for i, c in enumerate(s):
if flag:
flag = False
continue
        if i < len(s) - 1 and c != s[i + 1] and c.lower() == s[i + 1].lower():
            # same letter, opposite case: the pair reacts, skip both units
            flag = True
        else:
            # keep this unit; this branch also handles the final character,
            # so no unconditional append of s[-1] is needed after the loop
            s_arr.append(c)
s = "".join(s_arr)
if len(s) < size:
size = len(s)
else:
break
print(s)
print(len(s))
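# The repeated-pass reduction above can also be done in a single left-to-right pass with a stack: push each unit, and pop whenever the incoming unit reacts (same letter, opposite case) with the top of the stack. A sketch of this alternative, not taken from the original solution:

```python
def reduce_polymer(s):
    # Single-pass polymer reduction using a stack.
    stack = []
    for c in s:
        # React: same letter, different case as the unit on top of the stack.
        if stack and c != stack[-1] and c.lower() == stack[-1].lower():
            stack.pop()
        else:
            stack.append(c)
    return "".join(stack)
```

# This is O(n) instead of repeatedly rescanning the shrinking string.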
| day5/day 5 part 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## NLP model creation and training
# + hide_input=true
from fastai.gen_doc.nbdoc import *
from fastai.text import *
# -
# The main thing here is [`RNNLearner`](/text.learner.html#RNNLearner). There are also some utility functions to help create and update text models.
# ## Quickly get a learner
# + hide_input=true
show_doc(language_model_learner)
# -
# The model used is given by `arch` and `config`. It can be:
#
# - an [`AWD_LSTM`](/text.models.awd_lstm.html#AWD_LSTM)([Merity et al.](https://arxiv.org/abs/1708.02182))
# - a [`Transformer`](/text.models.transformer.html#Transformer) decoder ([Vaswani et al.](https://arxiv.org/abs/1706.03762))
# - a [`TransformerXL`](/text.models.transformer.html#TransformerXL) ([Dai et al.](https://arxiv.org/abs/1901.02860))
#
# They each have a default config for language modelling in <code>{lower_case_class_name}_lm_config</code> if you want to change the default parameters. At this stage, only the AWD LSTM supports `pretrained=True`, but we hope to add more pretrained models soon. `drop_mult` is applied to all the dropout weights of the `config`, and `learn_kwargs` are passed to the [`Learner`](/basic_train.html#Learner) initialization.
# + hide_input=true
jekyll_note("Using QRNN (change the flag in the config of the AWD LSTM) requires CUDA to be installed (the same version as PyTorch is using).")
# -
path = untar_data(URLs.IMDB_SAMPLE)
data = TextLMDataBunch.from_csv(path, 'texts.csv')
learn = language_model_learner(data, AWD_LSTM, drop_mult=0.5)
# + hide_input=true
show_doc(text_classifier_learner)
# -
# Here again, the backbone of the model is determined by `arch` and `config`. The input texts are fed into that model in chunks of length `bptt`, and only the last `max_len` activations are considered. This gives us the backbone of our model. The head then consists of:
# - a layer that concatenates the final outputs of the RNN with the maximum and average of all the intermediate outputs (on the sequence length dimension),
# - blocks of ([`nn.BatchNorm1d`](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm1d), [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout), [`nn.Linear`](https://pytorch.org/docs/stable/nn.html#torch.nn.Linear), [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU)) layers.
#
# The blocks are defined by the `lin_ftrs` and `drops` arguments. Specifically, the first block has a number of inputs inferred from the backbone arch, the last one has a number of outputs equal to `data.c` (the number of classes in the data), and the intermediate blocks have their inputs/outputs determined by `lin_ftrs` (each block has as many inputs as the previous block has outputs). The dropouts all have the same value `ps` if you pass a float, or the corresponding values if you pass a list. The default is an intermediate hidden size of 50 (which makes two blocks: model_activation -> 50 -> n_classes) with a dropout of 0.1.
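# The first head layer mentioned above concatenates the final output with a max- and a mean-pool over the sequence. A minimal NumPy sketch of that concat-pooling idea (not fastai's actual implementation), assuming `outputs` is a `(seq_len, hidden)` array of backbone activations:

```python
import numpy as np

def concat_pooling(outputs):
    # outputs: (seq_len, hidden) activations from the RNN backbone.
    # Concatenate the last time step with max- and mean-pools over time,
    # giving a (3 * hidden,) feature vector for the classifier head.
    last = outputs[-1]
    return np.concatenate([last, outputs.max(axis=0), outputs.mean(axis=0)])
```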
path = untar_data(URLs.IMDB_SAMPLE)
data = TextClasDataBunch.from_csv(path, 'texts.csv')
learn = text_classifier_learner(data, AWD_LSTM, drop_mult=0.5)
# + hide_input=true
show_doc(RNNLearner)
# -
# Handles the whole creation from <code>data</code> and a `model` for text data, using a certain `bptt`. The `split_func` is used to properly split the model into different groups for gradual unfreezing and differential learning rates. Gradient clipping of `clip` is optionally applied. `alpha` and `beta` are passed to create an instance of [`RNNTrainer`](/callbacks.rnn.html#RNNTrainer). Can be used for a language model or an RNN classifier. It also handles the conversion of weights from a pretrained model, as well as saving and loading the encoder.
# + hide_input=true
show_doc(RNNLearner.get_preds)
# -
# If `ordered=True`, returns the predictions in the order of the dataset; otherwise they will be ordered by the sampler (from the longest text to the shortest). The other arguments are passed to [`Learner.get_preds`](/basic_train.html#Learner.get_preds).
# ### Loading and saving
# + hide_input=true
show_doc(RNNLearner.load_encoder)
# + hide_input=true
show_doc(RNNLearner.save_encoder)
# + hide_input=true
show_doc(RNNLearner.load_pretrained)
# -
# Opens the weights in the `wgts_fname` of `self.model_dir` and the dictionary in `itos_fname` then adapts the pretrained weights to the vocabulary of the <code>data</code>. The two files should be in the models directory of the `learner.path`.
# ## Utility functions
# + hide_input=true
show_doc(convert_weights)
# -
# Uses the dictionary `stoi_wgts` (mapping of word to id) of the weights to map them to a new dictionary `itos_new` (mapping id to word).
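# The vocabulary-remapping idea can be sketched with NumPy: copy each old embedding row into the position its word occupies in the new vocabulary, with a fallback for unseen words. This is an illustrative sketch, not fastai's `convert_weights` itself; `remap_embeddings` and its fallback-to-mean choice are assumptions for illustration.

```python
import numpy as np

def remap_embeddings(wgts, stoi_old, itos_new):
    # Row i of the new matrix holds the old vector for word itos_new[i];
    # words unseen in the old vocab fall back to the mean embedding.
    mean_vec = wgts.mean(axis=0)
    return np.stack([wgts[stoi_old[w]] if w in stoi_old else mean_vec
                     for w in itos_new])
```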
# ## Get predictions
# + hide_input=true
show_doc(LanguageLearner, title_level=3)
# + hide_input=true
show_doc(LanguageLearner.predict)
# -
# If `no_unk=True` the unknown token is never picked. Words are taken randomly with the distribution of probabilities returned by the model. If `min_p` is not `None`, that value is the minimum probability to be considered in the pool of words. Lowering `temperature` will make the texts less randomized.
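# The interaction of `temperature` and `min_p` can be illustrated with a small NumPy sketch (an assumption-laden toy, not the fastai implementation): temperature rescales the logits before the softmax, and `min_p` zeroes out words below the threshold before sampling.

```python
import numpy as np

def sample_next_word(logits, temperature=1.0, min_p=None, seed=0):
    # Sharpen (temperature < 1) or flatten (temperature > 1) the
    # distribution, optionally drop words below min_p, then sample.
    probs = np.exp(logits / temperature)
    probs = probs / probs.sum()
    if min_p is not None:
        probs = np.where(probs >= min_p, probs, 0.0)
        probs = probs / probs.sum()
    rng = np.random.default_rng(seed)
    return int(rng.choice(len(probs), p=probs))
```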
# + hide_input=true
show_doc(LanguageLearner.beam_search)
# -
# ## Basic functions to get a model
# + hide_input=true
show_doc(get_language_model)
# + hide_input=true
show_doc(get_text_classifier)
# -
# This model uses an encoder built from the `arch` with `config`. This encoder is fed the sequence in successive chunks of size `bptt`, and we only keep the last `max_seq` outputs for the pooling layers.
#
# The decoder uses a concatenation of the last outputs, a `MaxPooling` of all the outputs and an `AveragePooling` of all the outputs. It then uses a list of `BatchNorm`, `Dropout`, `Linear`, `ReLU` blocks (with no `ReLU` in the last one), using a first layer size of `3*emb_sz`, then following the numbers in `n_layers`. The dropout probabilities are read from `drops`.
#
# Note that the model returns a list of three things, the actual output being the first, the two others being the intermediate hidden states before and after dropout (used by the [`RNNTrainer`](/callbacks.rnn.html#RNNTrainer)). Most loss functions expect one output, so you should use a Callback to remove the other two if you're not using [`RNNTrainer`](/callbacks.rnn.html#RNNTrainer).
# ## Undocumented Methods - Methods moved below this line will intentionally be hidden
# ## New Methods - Please document or move to the undocumented section
# + hide_input=true
show_doc(MultiBatchEncoder.forward)
# -
#
# + hide_input=true
show_doc(LanguageLearner.show_results)
# -
#
# + hide_input=true
show_doc(MultiBatchEncoder.concat)
# -
#
# + hide_input=true
show_doc(MultiBatchEncoder)
# -
#
# + hide_input=true
show_doc(decode_spec_tokens)
# -
#
# + hide_input=true
show_doc(MultiBatchEncoder.reset)
# -
#
| docs_src/text.learner.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
# -
crypto_df = pd.read_csv('crypto_data.csv', index_col=0)
# print(crypto_df.shape)
# Only keep cryptocurrencies that are trading
crypto_df = crypto_df.loc[crypto_df['IsTrading'] == True]
print(crypto_df.shape)
# Delete 'IsTrading' column
crypto_df = crypto_df.drop(columns = 'IsTrading')
crypto_df.head()
# Remove rows with at least 1 null value
# crypto_df.isnull().sum()
crypto_df = crypto_df.dropna()
crypto_df.isnull().sum()
# Remove rows with cryptocurrencies having no coins mined
crypto_df = crypto_df.loc[crypto_df['TotalCoinsMined'] != 0]
print(crypto_df.shape)
# Store the 'CoinName' in its own DataFrame prior to dropping from crypto_df
coinname_df = pd.DataFrame(data = crypto_df, columns = ['CoinName'])
coinname_df.head()
# Drop 'CoinName' from crypto_df
crypto_df = crypto_df.drop(columns = 'CoinName')
crypto_df.head()
# Create dummy variables for text features with get_dummies
crypto_dummies= pd.get_dummies(crypto_df, columns=['Algorithm', 'ProofType'])
crypto_dummies.head()
# Standardize data
crypto_scaled = StandardScaler().fit_transform(crypto_dummies)
print(crypto_scaled[0:1])
# +
# Use PCA to reduce dimensions, keeping enough components to explain 90% of the variance
crypto_scaled = crypto_scaled[~np.isnan(crypto_scaled).any(axis=1)]
np.isnan(crypto_scaled).sum()
pca = PCA(n_components=0.90)
crypto_pca = pca.fit_transform(crypto_scaled)
# Create a DataFrame with the principal components data
pca_df = pd.DataFrame(
data=crypto_pca
)
pca_df.head()
# +
# Reduce dimensions of the principal components with t-SNE
tsne_inp = np.array(pca_df)
model = TSNE(n_components=2, n_iter=5000, perplexity=30.0)
tsne_features = model.fit_transform(tsne_inp)
result = tsne_features.tolist()
tSNE_df = pd.DataFrame(
data=tsne_features
)
tSNE_df.head()
# -
# Create a scatterplot of the t-SNE results
sns.scatterplot(x=tsne_features.T[0], y=tsne_features.T[1])
plt.title('t-SNE Scatter Plot')
# +
inertia = []
k = list(range(1, 11))
# Calculate the inertia for the range of k values
for i in k:
km = KMeans(n_clusters=i, random_state=0)
km.fit(tSNE_df)
inertia.append(km.inertia_)
# Create the Elbow Curve plot with seaborn
elbow_data = {"k": k, "inertia": inertia}
df_elbow = pd.DataFrame(elbow_data)
sns.lineplot(data=df_elbow, x='k', y='inertia')
plt.title('Elbow Curve Plot')
# -
# After k=4, the distortion begins to decrease in a linear fashion. Therefore, for the given cryptocurrency dataset, the optimal number of currency clusters for the data is 4.
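# The eyeballed elbow can also be formalized with a simple numerical heuristic (not part of the original notebook): pick the k where the decrease in inertia slows down the most, i.e. where the second difference of the inertia curve is largest.

```python
import numpy as np

def pick_elbow(k_values, inertias):
    # Elbow heuristic: the largest second difference marks the point
    # where the inertia curve bends from steep drops to a flat tail.
    second_diff = np.diff(inertias, n=2)
    return k_values[int(np.argmax(second_diff)) + 1]
```

# Like any elbow heuristic this is a rule of thumb; noisy inertia curves may still need visual inspection.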
| CryptoSubmission.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, GlobalAveragePooling1D, GlobalMaxPooling1D, Dropout, Dense
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.optimizers import *
from tensorflow.keras.backend import clear_session
import numpy as np
import matplotlib.pyplot as plt
from random import randint
from transformer import TransformerBlock, TokenAndPositionEmbedding
# +
from utils import load_dataset
X_train, y_train, X_test, y_test = load_dataset('GATA')
# -
transposed = np.array([x.T for x in X_train])
# +
sample_size = 10000
embed_dim = 4
num_heads = 2 # Number of attention heads
ff_dim = 256 # Hidden layer size in feed forward network inside transformer
clear_session()
inputs = Input(shape=(1000, 4))
# inputs = Input(shape=(1000,))
# embedding_layer = TokenAndPositionEmbedding(1000, sample_size, 4)
# x = embedding_layer(inputs)
x = inputs
transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim)
x = transformer_block(x)
x = GlobalMaxPooling1D()(x)
x = Dense(128, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.1)(x)
outputs = Dense(17, activation='relu')(x)
model = Model(inputs, outputs)
model.summary()
# -
model.compile(optimizer=Adam(), loss='mae')
history = model.fit(
transposed[:10000], y_train[:10000], epochs=100
)
# +
start = 0
end = 100
plt.title('all epochs')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.plot(range(start, end), history.history['loss'][start:end])
plt.show()
# -
model.evaluate(np.array([x.T for x in X_test]), y_test)
# +
n = randint(0, len(y_test))
pred = model.predict(np.expand_dims(X_test[n].T, 0))
print(n, '\n', pred, '\n', y_test[n])
# -
model.save('model_saves/transformer-model')
model
| src/GATA-transformers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Importing and formatting data
#
# Import MNIST dataset of 60,000 training images and 10,000 testing images
# +
import tensorflow as tf
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
# For drawing the MNIST digits as well as plots to help us evaluate performance we
# will make extensive use of matplotlib
from matplotlib import pyplot as plt
# All of the Keras datasets are in keras.datasets
from tensorflow.keras.datasets import mnist
# Keras has already split the data into training and test data
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# Training images is a list of 60,000 2D lists.
# Each 2D list is 28 by 28—the size of the MNIST pixel data.
# Each item in the 2D array is an integer from 0 to 255 representing its grayscale
# intensity where 0 means white, 255 means black.
print(len(training_images), training_images[0].shape)
# training_labels are a value between 0 and 9 indicating which digit is represented.
# The first item in the training data is a 5
print(len(training_labels), training_labels[0])
# -
# Visualize the first 100 images in the dataset
# Lets visualize the first 100 images from the dataset
for i in range(100):
ax = plt.subplot(10, 10, i+1)
ax.axis('off')
plt.imshow(training_images[i], cmap='Greys')
# Fixing the data format: using `numpy.reshape` and `keras.util.to_categorical`
# +
from tensorflow.keras.utils import to_categorical
# Preparing the dataset
# Setup train and test splits
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# 28 x 28 = 784, because that's the dimensions of the MNIST data.
image_size = 784
# Reshaping the training_images and test_images to lists of vectors with length 784
# instead of lists of 2D arrays. Same for the test_images
training_data = training_images.reshape(training_images.shape[0], image_size)
test_data = test_images.reshape(test_images.shape[0], image_size)
# [
# [1,2,3]
# [4,5,6]
# ]
# => [1,2,3,4,5,6]
# Just showing the changes...
print("training data: ", training_images.shape, " ==> ", training_data.shape)
print("test data: ", test_images.shape, " ==> ", test_data.shape)
# +
# Create 1-hot encoded vectors using to_categorical
num_classes = 10 # Because it's how many digits we have (0-9)
# to_categorical takes a list of integers (our labels) and makes them into 1-hot vectors
training_labels = to_categorical(training_labels, num_classes)
test_labels = to_categorical(test_labels, num_classes)
# -
# Recall that before this transformation, training_labels[0] was the value 5. Look now:
print(training_labels[0])
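# The transformation `to_categorical` performs is easy to reproduce by hand; a minimal NumPy equivalent for integer labels (an illustrative sketch, not the Keras implementation):

```python
import numpy as np

def one_hot(labels, num_classes):
    # Each integer label becomes a row with a single 1 at that index,
    # matching what to_categorical produces for integer labels.
    return np.eye(num_classes, dtype=int)[np.asarray(labels)]
```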
# +
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
# Using Leaky ReLU is slightly different in Keras, which can be annoying.
# Additionally, Keras allows us to choose any slope we want for the "leaky" part
# rather than being statically 0.01 as in the above two functions.
from tensorflow.keras.layers import LeakyReLU
# Sequential models are a series of layers applied linearly.
medium_model = Sequential()
# The first layer must specify its input_shape.
# This is how the first two layers are added, the input layer and the hidden layer.
medium_model.add(Dense(units=15, input_shape=(image_size,)))
medium_model.add(LeakyReLU(alpha=.1))
medium_model.add(Dense(units=15))
medium_model.add(LeakyReLU(alpha=.09))
medium_model.add(Dropout(rate=0.1))
medium_model.add(Dense(units=18))
medium_model.add(LeakyReLU(alpha=.08))
medium_model.add(Dropout(rate=0.1))
medium_model.add(Dense(units=20))
medium_model.add(LeakyReLU(alpha=.07))
medium_model.add(Dropout(rate=0.05))
medium_model.add(Dense(units=23))
medium_model.add(LeakyReLU(alpha=.06))
medium_model.add(Dropout(rate=0.05))
medium_model.add(Dense(units=25))
medium_model.add(LeakyReLU(alpha=.05))
medium_model.add(Dropout(rate=0.05))
medium_model.add(Dense(units=30))
medium_model.add(LeakyReLU(alpha=.04))
medium_model.add(Dropout(rate=0.05))
# This is how the output layer gets added, the 'softmax' activation function ensures
# that the sum of the values in the output nodes is 1. Softmax is very
# common in classification networks.
medium_model.add(Dense(units=num_classes, activation='softmax'))
# This function provides useful text data for our network
medium_model.summary()
# -
# Compiling and training the model
#
# +
# nadam is Adam with Nesterov momentum.
# kullback_leibler_divergence measures how the predicted distribution diverges from
# the one-hot labels (for one-hot targets it coincides with categorical cross-entropy).
# accuracy is the percent of predictions that were correct.
medium_model.compile(optimizer="nadam", loss='kullback_leibler_divergence', metrics=['accuracy'])
# The network will make predictions for 128 flattened images per correction.
# It will make a prediction on each item in the training set 50 times (50 epochs)
# And 10% of the data will be used as validation data.
history = medium_model.fit(training_data, training_labels, batch_size=128, epochs=50, verbose=True, validation_split=.1)
# -
# Evaluating our model
# +
loss, accuracy = medium_model.evaluate(test_data, test_labels, verbose=True)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
print(f'Test loss: {loss:.3}')
print(f'Test accuracy: {accuracy:.3}')
print(history.history['accuracy'])
print(history.history['val_accuracy'])
# -
# Look at specific results
# +
from numpy import argmax
# Predicting once, then we can use these repeatedly in the next cell without recomputing the predictions.
predictions = medium_model.predict(test_data)
# For pagination & style in second cell
page = 0
fontdict = {'color': 'black'}
# +
# Repeatedly running this cell will page through the predictions
for i in range(16):
ax = plt.subplot(4, 4, i+1)
ax.axis('off')
plt.imshow(test_images[i + page], cmap='Greys')
prediction = argmax(predictions[i + page])
true_value = argmax(test_labels[i + page])
fontdict['color'] = 'black' if prediction == true_value else 'red'
plt.title("{}, {}".format(prediction, true_value), fontdict=fontdict)
page += 16
plt.tight_layout()
plt.show()
# -
| 02-training-and-regularization-tactics/05-practice-exercise-pyramid2-leaky-relu-div-KL-nadam-dropout1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Building the Boston Crime Database
#
# This notebook describes the process adopted to build the Boston Crime
# database from a CSV file
# +
import psycopg2
# Connect to dq database and create new database
conn = psycopg2.connect(dbname="dq", user="dq")
conn.autocommit = True
cursor = conn.cursor()
cursor.execute("CREATE DATABASE crime_db;")
conn.commit()
conn.close()
conn = psycopg2.connect(dbname="crime_db", user="dq")
cursor = conn.cursor()
cursor.execute("CREATE SCHEMA crimes;")
# +
import csv
with open('boston.csv') as file:
reader = csv.reader(file)
col_headers = next(reader)
first_row = next(reader)
print(col_headers)
print(first_row)
# +
# Read a csv file and column index and return unique values of the column
import csv
def get_col_value_set(csv_file, col_index):
col_values = set()
with open(csv_file) as f:
reader = csv.reader(f, delimiter=",", skipinitialspace=True)
next(reader) #skip the header
for row in reader:
col_values.add(row[col_index])
return col_values
for x in range(7):
unique_col_values = get_col_value_set('boston.csv', x)
print("Column", x, "has", len(unique_col_values), "unique values:")
# print(unique_col_values)
# +
# Identify what index the description column is
print(col_headers)
# Determine the maximum length of a given column
col_values = get_col_value_set('boston.csv', 6)
max_len = 0
max_val = ''
for val in col_values:
length = len(val)
if length > max_len:
max_len = length
max_val = val
print("Max length:", max_len)
print("Value with maximum length:", max_val)
# -
# Based on the analysis of each column value from the CSV file, in creating the database tables,
# - incident_number will be integer
# - offense_code will be varchar(4)
# - description will be varchar(58)
# - date will be date
# - day_of_the_week will be an ENUM
# - lat will be numeric
# - long will be numeric
# +
# Creating an ENUM type for day_of_the_week
import psycopg2
conn = psycopg2.connect(dbname="crime_db", user="dq")
conn.autocommit = True
cursor = conn.cursor()
cursor.execute("CREATE TYPE weekday AS ENUM('Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday');")
conn.commit()
conn.close()
# +
# Creating the crimes.boston_crimes table
conn = psycopg2.connect(dbname="crime_db", user="dq")
conn.autocommit = True
cursor = conn.cursor()
# cursor.execute("CREATE SCHEMA crimes;")
cursor.execute("""
CREATE TABLE crimes.boston_crimes (
incident_number integer PRIMARY KEY,
offense_code VARCHAR(4),
description VARCHAR(58),
incidence_date DATE,
day_of_the_week weekday,
lat numeric,
long numeric
);
""")
conn.commit()
conn.close()
# +
# Load data from boston.csv file into the boston_crimes table
conn = psycopg2.connect(dbname="crime_db", user="dq")
conn.autocommit = True
cursor = conn.cursor()
with open("boston.csv") as f:
cursor.copy_expert("COPY crimes.boston_crimes FROM STDIN WITH CSV HEADER;", f)
conn.commit()
conn.close()
# +
conn = psycopg2.connect(dbname="crime_db", user="dq")
conn.autocommit = True
cursor = conn.cursor()
# Revoke all privileges on the crimes_db by the public group
cursor.execute("REVOKE ALL ON SCHEMA public FROM public;")
cursor.execute("REVOKE ALL ON DATABASE crime_db FROM public;")
# Create two user groups
cursor.execute("CREATE GROUP readonly NOLOGIN;")
cursor.execute("CREATE GROUP readwrite NOLOGIN;")
# Grant CONNECT privilege to both user groups
cursor.execute("GRANT CONNECT ON DATABASE crime_db TO readonly;")
cursor.execute("GRANT CONNECT ON DATABASE crime_db TO readwrite;")
# Grant USAGE of crimes schema to both user groups
cursor.execute("GRANT USAGE ON SCHEMA crimes TO readonly;")
cursor.execute("GRANT USAGE ON SCHEMA crimes TO readwrite;")
# Grant group specific privileges to the user groups
cursor.execute("GRANT SELECT ON ALL TABLES IN SCHEMA crimes TO readonly;")
cursor.execute("GRANT SELECT, INSERT, DELETE, UPDATE ON ALL TABLES IN SCHEMA crimes TO readwrite;")
# Create user called data_analyst and assign the user to readonly group
cursor.execute("CREATE USER data_analyst WITH PASSWORD '<PASSWORD>';")
cursor.execute("GRANT readonly TO data_analyst;")
# Create user called data_scientist and assign the user to readwrite group
cursor.execute("CREATE USER data_scientist WITH PASSWORD '<PASSWORD>';")
cursor.execute("GRANT readwrite TO data_scientist;")
conn.commit()
conn.close()
# +
# Test that table set up is accurate
conn = psycopg2.connect(dbname="crime_db", user="dq")
conn.autocommit = True
cursor = conn.cursor()
cursor.execute("SELECT grantee, privilege_type FROM information_schema.table_privileges WHERE grantee='readwrite';")
cursor.fetchall()
# -
cursor.execute("SELECT grantee, privilege_type FROM information_schema.table_privileges WHERE grantee='readonly';")
cursor.fetchall()
| _notebooks/Basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Try pyqg
# Execute the notebooks from the docs
#
#
# [Two-layer model](docs/examples/two-layer.ipynb)
#
# [Barotropic model](docs/examples/barotropic.ipynb)
#
# [SQG model](docs/examples/sqg.ipynb)
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/graviraja/100-Days-of-NLP/blob/applications%2Fclassification/applications/classification/sentiment_classification/Sentimix%20using%20LSTM.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="hk-2e9TGbqjb" colab_type="text"
# ### Initial Setup
# + id="X3PRIz4MNIeH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="2d86e328-3a70-4b71-ca64-2cf6c2e5199d"
from google.colab import drive
drive.mount('/content/drive')
# + id="y-32oko7VGiO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="206fa4a7-c7fb-4b14-9568-dd7a0354cd66"
# !pip install contractions -q
# + [markdown] id="IY-7bk9weRQk" colab_type="text"
# Dataset can be found [here](https://github.com/gopalanvinay/thesis-vinay-gopalan)
# + id="tI-AbaIrQbcV" colab_type="code" colab={}
train_file = '/content/drive/My Drive/train_14k_split_conll.txt'
test_file = '/content/drive/My Drive/dev_3k_split_conll.txt'
# + [markdown] id="t2ZjhFUEZ5V1" colab_type="text"
# ### Imports
# + id="b9zNmcPjAJhX" colab_type="code" colab={}
import re
import time
import string
import contractions
import numpy as np
import pandas as pd
from collections import Counter
from sklearn.model_selection import train_test_split
from sklearn import metrics
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import Dataset, DataLoader
import matplotlib.pyplot as plt
import seaborn as sns
# + id="ev7LpvzHC2tn" colab_type="code" colab={}
with open(train_file) as f:
data = f.readlines()
# + id="mTHj_NUkVx0m" colab_type="code" colab={}
with open(test_file, 'r') as f:
test_data = f.readlines()
# + id="XK63TG3hMtMl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3bc7ad9b-7c94-4e33-fcc0-a82d59f1fe87"
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
# + [markdown] id="G7R1R-KsZzGv" colab_type="text"
# ### Data Parsing
# + id="sRJbb8s3DWkC" colab_type="code" colab={}
def parse_data(data):
sentences, sentences_info, sentiment = [], [], []
all_langs = []
single_sentence, single_sentence_info = [], []
sent = ""
for idx, each_line in enumerate(data):
line = each_line.strip()
tokens = line.split('\t')
num_tokens = len(tokens)
if num_tokens == 2:
# add the word
single_sentence.append(tokens[0])
# add the word info(lang)
single_sentence_info.append(tokens[1])
all_langs.append(tokens[1])
elif num_tokens == 3 and idx > 0:
# append the sentence data
sentences.append(single_sentence)
sentences_info.append(single_sentence_info)
sentiment.append(sent)
sent = tokens[-1]
# clear the single sentence
single_sentence = []
single_sentence_info = []
# new line after the sentence
elif num_tokens == 1:
continue
else:
sent = tokens[-1]
# for the last sentence
if len(single_sentence) > 0:
sentences.append(single_sentence)
sentences_info.append(single_sentence_info)
sentiment.append(sent)
assert len(sentences) == len(sentences_info) == len(sentiment)
return sentences, sentences_info, sentiment, all_langs
# + id="58fx0BGMDXgT" colab_type="code" colab={}
sentences, sentences_info, sentiment, all_langs = parse_data(data)
# + id="4qM0Z_yvDa1M" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 527} outputId="7b5466bb-c312-4dbd-a615-ef702452f306"
data[:30]
# + id="BPIDrNPEV_Pw" colab_type="code" colab={}
test_sentences, test_sentences_info, test_sentiment, test_all_langs = parse_data(test_data)
# + id="3Fsd6Eq3DZGT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ee30d6cb-5670-44fe-e828-bafdbd6d620a"
len(sentiment)
# + [markdown] id="6N6WNOUqZkif" colab_type="text"
# ### Data Exploration
# + id="EIfbCARYUF01" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="9d492535-76f6-453f-e50c-b1959b69fc76"
sns.countplot(all_langs)
# + id="49lMosdbUMj1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="da82e173-38a8-4da3-9f60-19fc6ae4ffe2"
sns.countplot(sentiment)
# + id="OnQ8LUXgUOvL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="44d482ca-06e6-4d75-f790-c2e86df15422"
set(sentiment)
# + id="WprvaUZtUR0m" colab_type="code" colab={}
sent_num_tokens = [len(sent) for sent in sentences]
# + id="ZGKnYPtIUUcQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 337} outputId="600df5e8-8575-4e79-8ac2-9ede16610007"
plt.figure(figsize=(15, 5))
sns.countplot(sent_num_tokens)
# + id="Joo0mQ6eUXMQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 510} outputId="c5473fff-d244-4d9d-ac01-29241e6c210e"
sentences[10]
# + id="5FaMfnkeUaCS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 510} outputId="e63081c5-3ea5-4294-e7bb-3a07fa5dce59"
sentences_info[10]
# + id="fUwzn0R6Ui8w" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="27f4764f-bda2-442c-c891-559e501e9ae8"
sentiment[0]
# + [markdown] id="Lo5w-_VTZqJT" colab_type="text"
# ### Data Cleaning
# + id="xx8Dsop6UnFZ" colab_type="code" colab={}
url_pattern = r'https(.*)/\s\w+'
special_chars = r'[_…\*\[\]\(\)&“]'
names_with_numbers = r'([A-Za-z]+)\d{3,}'
apostee = r"([\w]+)\s'\s([\w]+)"
names = r"@[\s]*[\w]+[\s]*[_]+[\s]*[\w]+|@[\s]*[\w]+"
def preprocess_data(sentence_tokens):
sentence = " ".join(sentence_tokens)
sentence = " " + sentence
# remove rt and … from string
sentence = sentence.replace(" RT ", "")
sentence = sentence.replace("…", "")
# replace apostee
sentence = sentence.replace("’", "'")
# replace names
sentence = re.sub(re.compile(names), " ", sentence)
# remove special chars
# sentence = re.sub(re.compile(special_chars), "", sentence)
# remove urls
sentence = re.sub(re.compile(url_pattern), "", sentence)
## remove duplicate characters
# sentence = re.sub(r"(.)\1{3,}", r'\1', sentence)
# combine only ' related words => ... it ' s ... -> ... it's ...
sentence = re.sub(re.compile(apostee), r"\1'\2", sentence)
# fix contractions
sentence = contractions.fix(sentence)
# replace names ending with numbers with only names (remove numbers)
sentence = re.sub(re.compile(names_with_numbers), r" ", sentence)
## consider only printable chars (many greek, urdu, hindi chars are there)
# sentence = [ch for ch in sentence if ch in string.printable]
# sentence = "".join(sentence).strip()
sentence = " ".join(sentence.split()).strip()
return sentence
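The `apostee` pattern above rejoins contractions that tokenization split apart; a quick standalone check on a made-up sentence:

```python
import re

# The cleaning step above rejoins contractions that tokenization split,
# e.g. "it ' s" -> "it's" (made-up sample sentence).
apostee = r"([\w]+)\s'\s([\w]+)"

sentence = "well it ' s not what I expected"
fixed = re.sub(apostee, r"\1'\2", sentence)
print(fixed)  # well it's not what I expected
```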
# + id="GfbWaN1UVS7u" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="b9844654-a123-4a61-814e-425ed181733e"
" ".join(sentences[1]), sentiment[1]
# + id="GZEdoQguVVua" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="9329ed70-f958-4c5c-e152-b53439f13984"
preprocess_data(sentences[1])
# + id="CDs7Ll0xVXmT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="944af5c9-0237-4d00-9996-e8470a8c6eb0"
" ".join(sentences[29]), sentiment[29]
# + id="56C4jG5kVaQW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="5a3998ed-7864-4d54-9b47-3e77832cde60"
preprocess_data(sentences[29])
# + id="Yud-i8-cVcVq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="3954a1a8-73c0-4d75-bd1d-cced8b137f05"
" ".join(sentences[10]), sentiment[10]
# + id="rvF985NFVgdQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="ba864dc6-17c4-421e-e009-9d151bc9ffa4"
preprocess_data(sentences[10])
# + id="DufJNfWEVi-o" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="95c83aea-224a-4e42-eafa-0594020044d0"
# %%time
processed_sentences = []
for sent in sentences:
processed_sentences.append(preprocess_data(sent))
# + id="SM7QJcUEVmmw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="cd59e1b9-85b8-4678-a5f2-12e29f6c6ee2"
# %%time
test_data = []
for sent in test_sentences:
test_data.append(preprocess_data(sent))
# + id="nGmKDpOPYrCu" colab_type="code" colab={}
sentiment_mapping = {
"negative": 0,
"neutral": 1,
"positive": 2
}
# + id="SzrNIirpYygm" colab_type="code" colab={}
labels = [sentiment_mapping[sent] for sent in sentiment]
test_label = [sentiment_mapping[sent] for sent in test_sentiment]
# + [markdown] id="wRHN41iSaA8S" colab_type="text"
# ### Train-Val-Test Splits
# + id="srl2_Y8YYZfK" colab_type="code" colab={}
train_data, val_data, train_label, val_label = train_test_split(processed_sentences, labels, test_size=0.2)
# + id="Z1NYPijBaNH_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d355ccbf-e0bc-4e66-aa4f-6bf522e2392e"
len(train_data), len(val_data), len(test_data)
# + [markdown] id="6kV6QZzxahas" colab_type="text"
# ### Train-Val-Test Distributions
# + id="7oEAb6FpaauD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 301} outputId="fdbac47b-75df-42b8-c449-37427272922a"
sns.countplot(train_label)
plt.xlabel('Training Data', fontsize=16)
# + id="U2jGVf-Gams7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 301} outputId="a0a0a6c2-f6ed-409c-97e2-fd16a0cc0e78"
sns.countplot(val_label)
plt.xlabel('Validation Data', fontsize=16)
# + id="8pJ7tEnaap6e" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 301} outputId="017534e9-af2b-4ab0-d181-29e661c65c5d"
sns.countplot(test_label)
plt.xlabel('Testing Data', fontsize=16)
# + [markdown] id="3WBAj2ZKbZgT" colab_type="text"
# ### Vocabulary
# + id="IClOtTgnUVFK" colab_type="code" colab={}
class Vocabulary(object):
def __init__(self):
self.word2idx = {}
self.idx2word = {}
self.idx = 0
def add_word(self, word):
if not word in self.word2idx:
self.word2idx[word] = self.idx
self.idx2word[self.idx] = word
self.idx += 1
def __call__(self, word):
if not word in self.word2idx:
return self.word2idx['<unk>']
return self.word2idx[word]
def __len__(self):
return len(self.word2idx)
# + id="bsTQzM7ParpY" colab_type="code" colab={}
def build_vocab(sentences, threshold=15):
"""Build a simple vocabulary wrapper."""
counter = Counter()
for i, sent in enumerate(sentences):
counter.update(sent.split())
# If the word frequency is less than 'threshold', then the word is discarded.
words = [word for word, cnt in counter.items() if cnt >= threshold]
# Create a vocab wrapper and add some special tokens.
vocab = Vocabulary()
vocab.add_word('<pad>')
vocab.add_word('<start>')
vocab.add_word('<end>')
vocab.add_word('<unk>')
# Add the words to the vocabulary.
for i, word in enumerate(words):
vocab.add_word(word)
return vocab
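The `threshold` argument drops rare words so that they later map to `<unk>`; the filtering step can be illustrated with `collections.Counter` alone (toy corpus, threshold lowered to 2):

```python
from collections import Counter

# Count words over a toy corpus and keep only those at or above a
# frequency threshold (lowered to 2 here); the rest would map to <unk>.
corpus = ["the movie was good", "the plot was thin", "good good movie"]
counter = Counter()
for sent in corpus:
    counter.update(sent.split())

threshold = 2
kept = sorted(word for word, cnt in counter.items() if cnt >= threshold)
print(kept)  # ['good', 'movie', 'the', 'was']
```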
# + id="KxYxyz_4a2t3" colab_type="code" colab={}
vocab = build_vocab(train_data)
# + id="dyoCsZika72x" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a5d285a2-0fdf-4fbf-adfb-8ab5566db0d9"
len(vocab)
# + [markdown] id="Zf-DZ6Z8eJG-" colab_type="text"
# ### Dataset Wrapper
# + id="Xxfae7RVbh6Q" colab_type="code" colab={}
class SentiMixDataSet(Dataset):
def __init__(self, inputs, labels):
self.sentences = inputs
self.labels = labels
def __len__(self):
return len(self.labels)
def __getitem__(self, item):
sentence = self.sentences[item]
sentiment = int(self.labels[item])
tokens = []
tokens.append(vocab('<start>'))
for tok in sentence.split():
tokens.append(vocab(tok))
tokens.append(vocab('<end>'))
return torch.LongTensor(tokens), sentiment
# + id="fZURqHbEbnUO" colab_type="code" colab={}
train_dataset = SentiMixDataSet(train_data, train_label)
val_dataset = SentiMixDataSet(val_data, val_label)
test_dataset = SentiMixDataSet(test_data, test_label)
# + id="vltav2bCb5TW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="4a5eeb5f-b182-43e6-dd3b-a5dd878287b5"
# sample check
train_dataset[0]
# + [markdown] id="QquDZ_W2VkYa" colab_type="text"
# ### DataLoaders
# + id="S4kTCGy0Vjb8" colab_type="code" colab={}
def collate_fn(data):
data.sort(key=lambda x: len(x[0]), reverse=True)
sentences, sentiments = zip(*data)
sent_lengths = [len(sent) for sent in sentences]
inputs = torch.zeros((len(sentences), max(sent_lengths)), dtype=torch.long)
labels = torch.zeros(len(sentences), dtype=torch.long)
for i, sent in enumerate(sentences):
end = sent_lengths[i]
inputs[i, :end] = sent[:end]
labels[i] = sentiments[i]
return inputs, sent_lengths, labels
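`collate_fn` sorts each batch longest-first and zero-pads every sequence to the batch maximum; the same bookkeeping with plain lists (toy token ids, no torch):

```python
# Sort a toy batch of token-id sequences longest-first and zero-pad each
# to the batch maximum, mirroring collate_fn above (plain lists, no torch).
batch = [[4, 7], [1, 2, 3, 9], [5]]
batch.sort(key=len, reverse=True)

lengths = [len(seq) for seq in batch]
max_len = lengths[0]
padded = [seq + [0] * (max_len - len(seq)) for seq in batch]

print(padded)   # [[1, 2, 3, 9], [4, 7, 0, 0], [5, 0, 0, 0]]
print(lengths)  # [4, 2, 1]
```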
# + id="TtbPCw1FVpuS" colab_type="code" colab={}
BATCH_SIZE = 16
train_data_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, collate_fn=collate_fn, shuffle=True)
valid_data_loader = DataLoader(val_dataset, batch_size=BATCH_SIZE, collate_fn=collate_fn)
test_data_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE, collate_fn=collate_fn)
# + id="9IrHrhnVV1kW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6c4a711d-b1ad-40c5-919a-e35d57a59c68"
sample = next(iter(train_data_loader))
sample[0].shape, len(sample[1]), sample[2].shape
# + [markdown] id="s9ohTmLlcEn_" colab_type="text"
# ### RNN Model
# + id="Kl5hVHMQE_CW" colab_type="code" colab={}
class RNNModel(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, output_dim, num_layers, dropout=0.4):
super().__init__()
self.hid_dim = hid_dim
self.num_layers = num_layers
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.GRU(
emb_dim,
hid_dim,
num_layers=num_layers,
batch_first=True,
bidirectional=True,
dropout=dropout
)
self.fc = nn.Linear(hid_dim, 50)
self.out = nn.Linear(50, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, inputs, input_lengths):
# inputs => [batch_size, seq_len]
# input_lengths => [batch_size]
embedded = self.dropout(self.embedding(inputs))
packed_input = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths, batch_first=True)
_, hidden = self.rnn(packed_input)
hidden = hidden.view(self.num_layers, 2, -1, self.hid_dim)
final_forward_hidden = hidden[-1, -2, :, :]
final_backward_hidden = hidden[-1, -1, :, :]
# final_*_hidden => [batch_size, hidden_dim]
combined = final_forward_hidden + final_backward_hidden
combined = self.dropout(combined)
# combined => [batch_size, hidden_dim]
intermediate = F.relu(self.fc(combined))
intermediate = self.dropout(intermediate)
logits = self.out(intermediate)
# logits => [batch_size, output_dim]
return logits
# + [markdown] id="mj4XHFbVb_lb" colab_type="text"
# ### Model Configurations
# + id="9FBI41Szb68_" colab_type="code" colab={}
input_dim = len(vocab)
output_dim = 3
emb_dim = 200
hid_dim = 100
num_layers = 2
NUM_EPOCHS = 20
model_path = "rnn.pt"
# + id="2pdc2kz7cCvu" colab_type="code" colab={}
model = RNNModel(input_dim, emb_dim, hid_dim, output_dim, num_layers)
# + id="xbAwdqs4cGNO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="2e5ceeda-b24d-4ee8-e38b-623a6a0965cb"
model.to(device)
# + id="7DshV_81gOQ0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="eaa3b075-91b3-4072-98fd-8e7c0fc3eea6"
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"The model has {count_parameters(model)} trainable parameters")
# + [markdown] id="8hS49QWlcJmA" colab_type="text"
# ### Loss Criterion & Optimizer
# + id="rQ1DhUEQcH0m" colab_type="code" colab={}
lr = 1e-4
min_lr = 3e-5
lr_decay=0.5
lr_patience=2
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = ReduceLROnPlateau(optimizer, 'min', lr_decay, lr_patience, verbose=True, min_lr=min_lr)
# + [markdown] id="7p6XnbHZcZWx" colab_type="text"
# ### Training Method
# + id="MS_6YaOJcXSA" colab_type="code" colab={}
def train(iterator, clip=2.0):
epoch_loss = 0
model.train()
for batch in iterator:
sentences = batch[0].to(device)
sentence_lengths = batch[1]
targets = batch[2].to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(sentences, sentence_lengths)
loss = criterion(outputs, targets)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
# + [markdown] id="dQDnHALGceCD" colab_type="text"
# ### Validation Method
# + id="V-mdE1EjcbQX" colab_type="code" colab={}
def categorical_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
max_preds = preds.argmax(dim = 1, keepdim = True) # get the index of the max probability
correct = max_preds.squeeze(1).eq(y)
return correct.sum() / torch.FloatTensor([y.shape[0]]).to(device)
# + id="RGqEUuLIcgN0" colab_type="code" colab={}
def evaluate(iterator):
epoch_loss = 0
model.eval()
epoch_acc = 0
with torch.no_grad():
for batch in iterator:
sentences = batch[0].to(device)
sentence_lengths = batch[1]
targets = batch[2].to(device)
logits = model(sentences, sentence_lengths)
# logits => [batch_size, num_labels]
loss = criterion(logits, targets)
acc = categorical_accuracy(logits, targets)
epoch_acc += acc.item()
epoch_loss += loss.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
# + id="WpzK_DhuciER" colab_type="code" colab={}
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = elapsed_time - (elapsed_mins * 60)
return elapsed_mins, elapsed_secs
# + [markdown] id="mLrySiqwcjtj" colab_type="text"
# ### Model Training
# + id="8LPNFm71cjOX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 731} outputId="cffa3f99-e467-42fa-822a-8c0bf0e73778"
best_valid_loss = float('inf')
for epoch in range(NUM_EPOCHS):
start_time = time.time()
train_loss = train(train_data_loader)
val_loss, val_acc = evaluate(valid_data_loader)
end_time = time.time()
scheduler.step(val_loss)
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
print(f"Epoch: {epoch + 1:02} | Time: {epoch_mins}m {epoch_secs:.2f}s")
print(f"\tTrain Loss: {train_loss:.3f} | Val Loss: {val_loss:.3f} | Val Acc: {val_acc:.3f}")
if val_loss < best_valid_loss:
best_valid_loss = val_loss
torch.save(model.state_dict(), model_path)
# + id="ChLgj12YcnhW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="13d1b1bd-dfe2-487b-c569-bcae2b2ce4e9"
model.load_state_dict(torch.load(model_path))
# + [markdown] id="jFigMW42eCNT" colab_type="text"
# ### Evaluation
# + id="HK5-WoMbdt3k" colab_type="code" colab={}
def cal_metrics(model, data_loader):
model.eval()
fin_outputs = []
fin_targets = []
with torch.no_grad():
for batch in data_loader:
sentences = batch[0].to(device)
sentence_lengths = batch[1]
targets = batch[2].to(device)
predictions = model(sentences, sentence_lengths)
# predictions => [batch_size, num_labels]
outputs = predictions.max(dim=1)[1]
fin_targets.extend(targets.detach().cpu().numpy().tolist())
fin_outputs.extend(outputs.detach().cpu().numpy().tolist())
assert len(fin_outputs) == len(fin_targets)
cf = metrics.classification_report(fin_targets, fin_outputs)
print(cf)
# + id="7nqx-Bn7jUP4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="40981072-cf91-4b22-e521-ade749682a23"
cal_metrics(model, test_data_loader)
# + id="oSSmxN6VhX7m" colab_type="code" colab={}
| applications/classification/sentiment_classification/Sentimix using LSTM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Trading
# language: python
# name: trading
# ---
# #!/usr/bin/env python
# -*- coding: utf-8 -*-
import quandl
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import time
import datetime
from datetime import datetime
# +
#selected = ['WALMEX', 'GRUMAB', 'PE&OLES']
# get adjusted closing prices of 5 selected companies with Quandl
quandl.ApiConfig.api_key = 'Qa3CCQjeQQM<KEY>'
selected = ['CNP', 'F', 'WMT', 'GE', 'TSLA']
data = quandl.get_table('WIKI/PRICES', ticker = selected,
qopts = { 'columns': ['date', 'ticker', 'adj_close'] },
date = { 'gte': '2014-1-1', 'lte': '2016-12-31' }, paginate=True)
# reorganise data pulled by setting date as index with
# columns of tickers and their corresponding adjusted prices
clean = data.set_index('date')
table = clean.pivot(columns='ticker')
# -
table
# +
import yfinance as yf
msft = yf.Tickers("spy qqq")
# -
table = msft.history()['Close']
table
selected = ["SPY","QQQ"]
# +
# calculate daily and annual returns of the stocks
returns_daily = table.pct_change()
returns_annual = returns_daily.mean() * 250
# get daily and covariance of returns of the stock
cov_daily = returns_daily.cov()
cov_annual = cov_daily * 250
# empty lists to store returns, volatility and weights of imaginary portfolios
port_returns = []
port_volatility = []
stock_weights = []
# set the number of combinations for imaginary portfolios
num_assets = len(selected)
num_portfolios = 50000
# populate the empty lists with each portfolio's returns, risk and weights
for single_portfolio in range(num_portfolios):
weights = np.random.random(num_assets)
weights /= np.sum(weights)
returns = np.dot(weights, returns_annual)
volatility = np.sqrt(np.dot(weights.T, np.dot(cov_annual, weights)))
port_returns.append(returns)
port_volatility.append(volatility)
stock_weights.append(weights)
# a dictionary for Returns and Risk values of each portfolio
portfolio = {'Returns': port_returns,
'Volatility': port_volatility}
# extend original dictionary to accommodate each ticker and weight in the portfolio
for counter,symbol in enumerate(selected):
portfolio[symbol+' Weight'] = [Weight[counter] for Weight in stock_weights]
# make a nice dataframe of the extended dictionary
df = pd.DataFrame(portfolio)
# get better labels for desired arrangement of columns
column_order = ['Returns', 'Volatility'] + [stock+' Weight' for stock in selected]
# reorder dataframe columns
df = df[column_order]
# plot the efficient frontier with a scatter plot
plt.style.use('seaborn')
df.plot.scatter(x='Volatility', y='Returns', figsize=(10, 8), grid=True)
plt.xlabel('Volatility (Std. Deviation)')
plt.ylabel('Expected Returns')
plt.title('Efficient Frontier')
plt.show()
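Each sampled portfolio's risk above is the square root of wᵀΣw. A dependency-free sketch of one sampling step for two assets (the covariance matrix and mean returns are made-up numbers, not taken from the data above):

```python
import math
import random

random.seed(0)

# Toy annualized inputs for two assets (made-up numbers).
cov = [[0.04, 0.01],
       [0.01, 0.09]]
mean_returns = [0.08, 0.12]

# Draw one random weight vector and normalize it to sum to 1,
# exactly as the sampling loop above does with numpy.
raw = [random.random() for _ in mean_returns]
weights = [w / sum(raw) for w in raw]

port_return = sum(w * r for w, r in zip(weights, mean_returns))
port_variance = sum(weights[i] * cov[i][j] * weights[j]
                    for i in range(2) for j in range(2))
port_volatility = math.sqrt(port_variance)

print(port_return, port_volatility)
```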
# +
# # get adjusted closing prices of 5 selected companies with Quandl
# quandl.ApiConfig.api_key = '<KEY>'
# selected = ['CNP', 'F', 'WMT', 'GE', 'TSLA']
# data = quandl.get_table('WIKI/PRICES', ticker = selected,
# qopts = { 'columns': ['date', 'ticker', 'adj_close'] },
# date = { 'gte': '2014-1-1', 'lte': '2016-12-31' }, paginate=True)
# # reorganise data pulled by setting date as index with
# # columns of tickers and their corresponding adjusted prices
# clean = data.set_index('date')
# table = clean.pivot(columns='ticker')
# calculate daily and annual returns of the stocks
returns_daily = table.pct_change()
returns_annual = returns_daily.mean() * 250
# get daily and covariance of returns of the stock
cov_daily = returns_daily.cov()
cov_annual = cov_daily * 250
# empty lists to store returns, volatility and weights of imaginary portfolios
port_returns = []
port_volatility = []
sharpe_ratio = []
stock_weights = []
# set the number of combinations for imaginary portfolios
num_assets = len(selected)
num_portfolios = 50000
# set random seed for reproducibility
np.random.seed(101)
# populate the empty lists with each portfolio's returns, risk and weights
for single_portfolio in range(num_portfolios):
weights = np.random.random(num_assets)
weights /= np.sum(weights)
returns = np.dot(weights, returns_annual)
volatility = np.sqrt(np.dot(weights.T, np.dot(cov_annual, weights)))
sharpe = returns / volatility
sharpe_ratio.append(sharpe)
port_returns.append(returns)
port_volatility.append(volatility)
stock_weights.append(weights)
# a dictionary for Returns and Risk values of each portfolio
portfolio = {'Returns': port_returns,
'Volatility': port_volatility,
'Sharpe Ratio': sharpe_ratio}
# extend original dictionary to accommodate each ticker and weight in the portfolio
for counter,symbol in enumerate(selected):
portfolio[symbol+' Weight'] = [Weight[counter] for Weight in stock_weights]
# make a nice dataframe of the extended dictionary
df = pd.DataFrame(portfolio)
# get better labels for desired arrangement of columns
column_order = ['Returns', 'Volatility', 'Sharpe Ratio'] + [stock+' Weight' for stock in selected]
# reorder dataframe columns
df = df[column_order]
# plot frontier, max sharpe & min Volatility values with a scatterplot
plt.style.use('seaborn-dark')
df.plot.scatter(x='Volatility', y='Returns', c='Sharpe Ratio',
cmap='RdYlGn', edgecolors='black', figsize=(10, 8), grid=True)
plt.xlabel('Volatility (Std. Deviation)')
plt.ylabel('Expected Returns')
plt.title('Efficient Frontier')
plt.show()
# +
# find min Volatility & max sharpe values in the dataframe (df)
min_volatility = df['Volatility'].min()
max_sharpe = df['Sharpe Ratio'].max()
# use the min, max values to locate and create the two special portfolios
sharpe_portfolio = df.loc[df['Sharpe Ratio'] == max_sharpe]
min_variance_port = df.loc[df['Volatility'] == min_volatility]
# plot frontier, max sharpe & min Volatility values with a scatterplot
plt.style.use('seaborn-dark')
df.plot.scatter(x='Volatility', y='Returns', c='Sharpe Ratio',
cmap='RdYlGn', edgecolors='black', figsize=(10, 8), grid=True)
plt.scatter(x=sharpe_portfolio['Volatility'], y=sharpe_portfolio['Returns'], c='red', marker='D', s=200)
plt.scatter(x=min_variance_port['Volatility'], y=min_variance_port['Returns'], c='blue', marker='D', s=200 )
plt.xlabel('Volatility (Std. Deviation)')
plt.ylabel('Expected Returns')
plt.title('Efficient Frontier')
plt.show()
# -
print(min_variance_port.T)
print(sharpe_portfolio.T)
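Locating the max-Sharpe portfolio is just an argmax over the sampled list; the same lookup without pandas (toy return/volatility pairs):

```python
# Pick the max-Sharpe portfolio from sampled (return, volatility) pairs,
# mirroring the df.loc lookup above (toy numbers).
samples = [(0.10, 0.20), (0.12, 0.18), (0.08, 0.25)]
sharpe = [r / v for r, v in samples]

best = max(range(len(samples)), key=lambda i: sharpe[i])
print(best, samples[best])  # 1 (0.12, 0.18)
```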
from pandas_datareader import data
import pandas as pd
from yahoo_finance import Share
# +
# Define the instruments to download: three stocks listed on the Mexican exchange.
tickers = ['WALMEX','GMEXICOB','PE&OLES']
# Define which online source one should use
data_source = 'google'
# We would like all available data from 01/16/2015 until 01/16/2018.
start_date = '2015-01-16'
end_date = '2018-01-16'
# Use pandas_datareader's data.DataReader to load the desired data. As simple as that.
panel_data = data.DataReader(tickers, data_source, start_date, end_date)
# Getting just the closing prices. This will return a Pandas DataFrame
# indexed by the major axis of panel_data.
close = panel_data.loc['Close']
# Getting all weekdays between start_date and end_date
all_weekdays = pd.date_range(start=start_date, end=end_date, freq='B')
# How do we align the existing prices in close with our new set of dates?
# All we need to do is reindex close using all_weekdays as the new index
close = close.reindex(all_weekdays)
# +
selected = ['WALMEX', 'GMEXICOB', 'PE&OLES']
# get adjusted closing prices of 5 selected companies with Quandl
quandl.ApiConfig.api_key = '<KEY>'
data = quandl.get_table('WIKI/PRICES', ticker = selected,
qopts = { 'columns': ['date', 'ticker', 'adj_close'] },
date = { 'gte': '2015-01-16', 'lte': '2018-01-16' }, paginate=True)
# reorganise data pulled by setting date as index with
# columns of tickers and their corresponding adjusted prices
clean = data.set_index('date')
table = close
# -
table.head()
# +
# calculate daily and annual returns of the stocks
returns_daily = table.pct_change()
returns_annual = returns_daily.mean() * 250
# get daily and covariance of returns of the stock
cov_daily = returns_daily.cov()
cov_annual = cov_daily * 250
# empty lists to store returns, volatility and weights of imaginary portfolios
port_returns = []
port_volatility = []
stock_weights = []
# set the number of combinations for imaginary portfolios
num_assets = len(selected)
num_portfolios = 50000
# populate the empty lists with each portfolio's returns, risk and weights
for single_portfolio in range(num_portfolios):
weights = np.random.random(num_assets)
weights /= np.sum(weights)
returns = np.dot(weights, returns_annual)
volatility = np.sqrt(np.dot(weights.T, np.dot(cov_annual, weights)))
port_returns.append(returns)
port_volatility.append(volatility)
stock_weights.append(weights)
# a dictionary for Returns and Risk values of each portfolio
portfolio = {'Returns': port_returns,
'Volatility': port_volatility}
# extend original dictionary to accommodate each ticker and weight in the portfolio
for counter,symbol in enumerate(selected):
portfolio[symbol+' Weight'] = [Weight[counter] for Weight in stock_weights]
# make a nice dataframe of the extended dictionary
df = pd.DataFrame(portfolio)
# get better labels for desired arrangement of columns
column_order = ['Returns', 'Volatility'] + [stock+' Weight' for stock in selected]
# reorder dataframe columns
df = df[column_order]
# plot the efficient frontier with a scatter plot
plt.style.use('seaborn')
df.plot.scatter(x='Volatility', y='Returns', figsize=(10, 8), grid=True)
plt.xlabel('Volatility (Std. Deviation)')
plt.ylabel('Expected Returns')
plt.title('Efficient Frontier')
plt.show()
# +
table = close
# calculate daily and annual returns of the stocks
returns_daily = table.pct_change()
returns_annual = returns_daily.mean() * 250
# get daily and covariance of returns of the stock
cov_daily = returns_daily.cov()
cov_annual = cov_daily * 250
# empty lists to store returns, volatility and weights of imaginary portfolios
port_returns = []
port_volatility = []
sharpe_ratio = []
stock_weights = []
# set the number of combinations for imaginary portfolios
num_assets = len(selected)
num_portfolios = 50000
# set random seed for reproducibility
np.random.seed(101)
# populate the empty lists with each portfolio's returns, risk and weights
for single_portfolio in range(num_portfolios):
weights = np.random.random(num_assets)
weights /= np.sum(weights)
returns = np.dot(weights, returns_annual)
volatility = np.sqrt(np.dot(weights.T, np.dot(cov_annual, weights)))
sharpe = returns / volatility
sharpe_ratio.append(sharpe)
port_returns.append(returns)
port_volatility.append(volatility)
stock_weights.append(weights)
# a dictionary for Returns and Risk values of each portfolio
portfolio = {'Returns': port_returns,
'Volatility': port_volatility,
'Sharpe Ratio': sharpe_ratio}
# extend original dictionary to accommodate each ticker and weight in the portfolio
for counter,symbol in enumerate(selected):
portfolio[symbol+' Weight'] = [Weight[counter] for Weight in stock_weights]
# make a nice dataframe of the extended dictionary
df = pd.DataFrame(portfolio)
# get better labels for desired arrangement of columns
column_order = ['Returns', 'Volatility', 'Sharpe Ratio'] + [stock+' Weight' for stock in selected]
# reorder dataframe columns
df = df[column_order]
# plot frontier, max sharpe & min Volatility values with a scatterplot
plt.style.use('seaborn-dark')
df.plot.scatter(x='Volatility', y='Returns', c='Sharpe Ratio',
cmap='RdYlGn', edgecolors='black', figsize=(10, 8), grid=True)
plt.xlabel('Volatility (Std. Deviation)')
plt.ylabel('Expected Returns')
plt.title('Efficient Frontier')
plt.show()
# +
# find min Volatility & max sharpe values in the dataframe (df)
min_volatility = df['Volatility'].min()
max_sharpe = df['Sharpe Ratio'].max()
# use the min, max values to locate and create the two special portfolios
sharpe_portfolio = df.loc[df['Sharpe Ratio'] == max_sharpe]
min_variance_port = df.loc[df['Volatility'] == min_volatility]
# plot frontier, max sharpe & min Volatility values with a scatterplot
plt.style.use('seaborn-dark')
df.plot.scatter(x='Volatility', y='Returns', c='Sharpe Ratio',
cmap='RdYlGn', edgecolors='black', figsize=(10, 8), grid=True)
plt.scatter(x=sharpe_portfolio['Volatility'], y=sharpe_portfolio['Returns'], c='red', marker='D', s=200)
plt.scatter(x=min_variance_port['Volatility'], y=min_variance_port['Returns'], c='blue', marker='D', s=200 )
plt.xlabel('Volatility (Std. Deviation)')
plt.ylabel('Expected Returns')
plt.title('Efficient Frontier')
plt.show()
# -
print(min_variance_port.T)
print(sharpe_portfolio.T)
df.head()
close_original=close.copy()
close[selected[2]].plot()
plt.title(selected[2])
close[selected[1]].plot()
plt.title(selected[1])
close[selected[0]].plot()
plt.title(selected[0])
close.GMEXICOB.plot()
plt.title(selected[2])
for i in range(0, 3):
    print(selected[i])
    print(close[selected[i]].describe())
for i in range(0, 3):
    print(selected[i])
    print("Starting price")
    print(close[selected[i]][0])
    print("Current price")
    print(close[selected[i]][len(close) - 1])
    print("The mean is: ")
    print(close[selected[i]].mean())
    print("The variance is: ")
    print((close[selected[i]].std()) ** 2)
    print("The volatility is: ")
    print(close[selected[i]].std())
    print("The portfolio return is: " + str(int((close[selected[i]][len(close) - 1] / close[selected[i]][0]) * 100) - 100) + " %")
close.cov()
close.corr()
for i in range(0, 3):
    print(selected[i] + " : " + str(float(sharpe_portfolio[selected[i] + " Weight"] * 2000000)))
close1 = pd.DataFrame(close.copy())
close1.head()
close1['PORT']=float(sharpe_portfolio[selected[0]+" Weight"]*2000000)*close1[selected[0]]+float(sharpe_portfolio[selected[1]+" Weight"]*2000000)*close1[selected[1]]+float(sharpe_portfolio[selected[2]+" Weight"]*2000000)*close1[selected[2]]
close1.head()
print(close1.PORT.describe())
print("PORT")
print("The mean is: ")
print(close1.PORT.mean())
print("The variance is: ")
print((close1.PORT.std()) ** 2)
print("The volatility is: ")
print(close1.PORT.std())
print("The portfolio return is: " + str(int((close1.PORT[len(close1) - 1] / close1.PORT[0]) * 100) - 100) + " %")
close1.PORT[len(close1)-1]
close1.PORT[0]
close1.cov()
close1.corr()
close_original.head()
close_anual = close_original[close_original.index > pd.Timestamp(2017, 3, 16)]
close_original['Fecha']=close_original.index
datetime.date(1943,3, 13)
close_original[close_original.Fecha]
now = datetime.datetime.now()
| examples/Acciones_v1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
import datetime
import calendar
# ### Variables
#
# - PatientId - identifier for the patient
# - AppointmentID - identifier for the appointment
# - Gender = sex of the patient (the proportion of women is larger; women take way more care of their health in comparison to men.)
# - ScheduledDay = the date on which the appointment was booked
# - AppointmentDay = the actual appointment date (the day the patient is seen)
# - Age = age of the patient
# - Neighbourhood = location of the hospital
# - Scholarship = whether the patient is enrolled in the Bolsa Família welfare program (a broad topic; consider reading https://en.wikipedia.org/wiki/Bolsa_Fam%C3%ADlia)
# - Hipertension = whether the patient has hypertension
# - Diabetes = whether the patient has diabetes
# - Alcoholism = whether the patient suffers from alcoholism
# - Handcap = number of disabilities (0, 1, 2, 3, 4)
# - SMS_received = whether one or more reminder messages were sent to the patient (0: no message sent, 1: message sent)
# - No-show = whether the patient failed to show up on the appointment date (Yes: did not show up, No: visited the hospital)
df=pd.read_csv("KaggleV2-May-2016.csv")
df.describe()
df.tail()
df.columns
df.info()
# ### PatientId
# - The data contains appointment records for 62,299 distinct patients.
# - Hence some patients appear in more than one appointment record.
len(set(df.PatientId))
# ### AppointmentId
# - The record-level identifier is AppointmentID, not PatientId.
len(set(df.AppointmentID))
# Although AppointmentID loosely follows the order in which appointments were booked, the two orderings do not match exactly, so AppointmentID cannot be said to follow the booking-time order perfectly.
sum(df.sort_values(by=["ScheduledDay"])["AppointmentID"].values==df.sort_values(by=["AppointmentID"])["AppointmentID"].values)
plt.figure(figsize=(12,8))
# plt.subplot(121)
plt.xlabel("index")
plt.ylabel("AppointmentID")
plt.plot(df.sort_values(by=["ScheduledDay"])["AppointmentID"].values)
# plt.subplot(122)
plt.plot(df.sort_values(by=["AppointmentID"])["AppointmentID"].values, color="red")
# plt.xlabel("index")
plt.show()
# ### Gender
# - Women account for 71,840 of the records and men for 38,687, so women make up about 65% of the data.
df.groupby("Gender").size()/len(df)
# Women and men miss their appointments at almost the same rate.
(df.groupby(["Gender","No-show"]).size()/df.groupby("Gender").size()).reset_index(inplace=False, name="prop")
# ### ScheduledDay
df["scheduled_ymd"]=df.ScheduledDay.apply(lambda x : x[:10])
# Convert ScheduledDay into year, month, day, hour, minute, and day-of-week features
df["scheduled_Year"] = pd.to_datetime(df.ScheduledDay).apply(lambda x: x.year)
df["scheduled_month"] = pd.to_datetime(df.ScheduledDay).apply(lambda x: x.month)
df["scheduled_day"] = pd.to_datetime(df.ScheduledDay).apply(lambda x: x.day)
df["scheduled_Hour"] = pd.to_datetime(df.ScheduledDay).apply(lambda x: x.hour)
df["scheduled_Minute"] = pd.to_datetime(df.ScheduledDay).apply(lambda x: x.minute)
df["scheduled_dayofweek"] = pd.to_datetime(df.ScheduledDay).apply(lambda x : calendar.weekday(x.timetuple().tm_year, x.timetuple().tm_mon, x.timetuple().tm_mday))
# ### AppointmentDay
df["appoint_ymd"]=df.AppointmentDay.apply(lambda x : x[:10])
df["appoint_Year"] = pd.to_datetime(df.AppointmentDay).apply(lambda x: x.year)
df["appoint_month"] = pd.to_datetime(df.AppointmentDay).apply(lambda x: x.month)
df["appoint_day"] = pd.to_datetime(df.AppointmentDay).apply(lambda x: x.day)
df["appoint_Hour"] = pd.to_datetime(df.AppointmentDay).apply(lambda x: x.hour)
df["appoint_Minute"] = pd.to_datetime(df.AppointmentDay).apply(lambda x: x.minute)
df["appoint_dayofweek"] = pd.to_datetime(df.AppointmentDay)\
.apply(lambda x : calendar.weekday(x.timetuple().tm_year, x.timetuple().tm_mon, x.timetuple().tm_mday))
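# The repeated `apply` calls above work, but parsing the column once and using the vectorized `.dt` accessor is shorter and faster. A minimal sketch on a toy column (the column names mirror the ones above):

```python
import pandas as pd

# Toy stand-in for the real ScheduledDay column
toy = pd.DataFrame({"ScheduledDay": ["2016-04-29 18:38:08", "2016-05-02 09:10:00"]})
sched = pd.to_datetime(toy.ScheduledDay)

# Vectorized datetime components instead of per-row apply
toy["scheduled_Year"] = sched.dt.year
toy["scheduled_month"] = sched.dt.month
toy["scheduled_day"] = sched.dt.day
toy["scheduled_Hour"] = sched.dt.hour
toy["scheduled_Minute"] = sched.dt.minute
toy["scheduled_dayofweek"] = sched.dt.dayofweek  # Monday=0, same convention as calendar.weekday
print(toy.scheduled_dayofweek.tolist())  # [4, 0]: a Friday and a Monday
```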
df.head(10)
# To compute the gap between ScheduledDay (the date the appointment was booked) and AppointmentDay (the actual visit date), we do the following.
df["differ_day"]=pd.to_datetime(df.AppointmentDay.apply(lambda x : x[:10]))-pd.to_datetime(df.ScheduledDay.apply(lambda x : x[:10]))
df.head()
df.groupby(by=["differ_day"]).size().reset_index(name="count")[:8]
df.groupby(by=["differ_day","No-show"]).size().reset_index(name="count")[:8]
# Records where the booking date falls after the visit date are nonsensical, so we will remove them.
sum(df.differ_day > "1 days")
np.where(df.differ_day=="179 days")
np.where(df.differ_day=="-6 days")
# df.iloc[102786]
set(list(np.where(df.differ_day=="-6 days")[0])+list(np.where(df.differ_day=="-1 days")[0]))
len(set(df.index)-set(list(np.where(df.differ_day=="-6 days")[0])+list(np.where(df.differ_day=="-1 days")[0])))
df=df.loc[set(df.index)-set(list(np.where(df.differ_day=="-6 days")[0])+list(np.where(df.differ_day=="-1 days")[0]))]\
.reset_index(inplace=False, drop=True)
# ### Age
sns.distplot(df.Age)
plt.show()
sum(df.Age==-1) ### an age of -1: what is this?
np.where(df.Age==-1)
# Since the patient is female, an age of -1 may well be a pregnant woman who booked an appointment on behalf of her unborn child.
np.where(df.PatientId==465943158731293.0)
df.iloc[99827]
sum(df.Age==0)
df.iloc[np.where(df.Age > 100)]
df.iloc[np.where(df.Age==0)]
# Comparing PatientId against Age shows that PatientId does not depend on age.
sum(df.sort_values(by=["Age"])["PatientId"].values==df.sort_values(by=["PatientId"])["PatientId"].values)
# We split ages into decade groups because children under 10 usually come accompanied by a parent rather than alone, and we want to see what tendencies those accompanied patients show.
df["Age_group"]=df.Age.apply(lambda x : "10대미만" if 0<=x<10 else "10대" if 10<=x<20 else "20대" if 20<=x<30 else "30대" if 30<=x<40 \
else "40대" if 40<=x<50 else "50대" if 50<=x<60 else "60대" if 60<=x<70 else "70대" if 70<=x<80 else "80대" \
if 80<=x<90 else "90대" if 90<=x<100 else "100대" if 100<=x<110 else "110대")
df.groupby(["Age_group","No-show"]).size().reset_index(name="count")
# Since there are very few patients aged 100 or over, we merge everything from 90 upward into a single "90s and over" bucket. Beyond the small counts, every age group shows a broadly similar tendency to miss appointments, so merging also keeps the buckets comparable: the 110s bucket, for instance, splits evenly between shows and no-shows, which is plausibly just an artifact of its tiny size.
df["Age_group"]=df.Age.apply(lambda x : "10대미만" if 0<=x<10 else "10대" if 10<=x<20 else "20대" if 20<=x<30 else "30대" if 30<=x<40 \
else "40대" if 40<=x<50 else "50대" if 50<=x<60 else "60대" if 60<=x<70 else "70대" if 70<=x<80 else "80대" \
if 80<=x<90 else "90대이상")
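# The chained conditional expression above is hard to read; `pd.cut` expresses the same decade binning declaratively. A sketch with toy ages (labels shortened to ASCII for illustration):

```python
import pandas as pd

ages = pd.Series([3, 25, 67, 95, 110])
# Left-closed decade bins, with everything from 90 up in one bucket
bins = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 200]
labels = ["<10", "10s", "20s", "30s", "40s", "50s", "60s", "70s", "80s", "90s+"]
groups = pd.cut(ages, bins=bins, labels=labels, right=False)
print(groups.tolist())  # ['<10', '20s', '60s', '90s+', '90s+']
```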
df.groupby(["Age_group","No-show"]).size().reset_index(name="count")
data1=df.groupby(["Age_group","No-show"]).size().reset_index(name="count").loc[0:1].append\
(df.groupby(["Age_group","No-show"]).size().reset_index(name="count").loc[4:])
data1=df.groupby(["Age_group","No-show"]).size().reset_index(name="count").loc[2:3].append(data1)
data1=data1.reset_index(drop=True)
df.columns
# ### Neighbourhood
#
# - location of the booked hospital
len(set(df.Neighbourhood)) # 81 distinct neighbourhoods
df.groupby(["Neighbourhood","No-show"]).size()
# ### Scholarship
#
# - One of the government's social welfare programs, aimed at families with children attending school.
# - We can see that most patients do not receive the welfare benefit.
df.groupby("Scholarship").size()
# ### Hipertension
# - More patients do not have hypertension than do.
df.groupby("Hipertension").size()
# Counting by appointment attendance gives the following.
df.groupby(["Hipertension","No-show"]).size()
# ### Diabetes
# - More patients do not have diabetes than do.
df.groupby("Diabetes").size()
# Counting by appointment attendance gives the following.
df.groupby(["Diabetes","No-show"]).size()
# ### Alcoholism
# - Whether the patient suffers from alcoholism.
df.groupby("Alcoholism").size()
# Counting by appointment attendance gives the following.
df.groupby(["Alcoholism","No-show"]).size()
# ### Handcap
df.groupby("Handcap").size()
# Counting by appointment attendance gives the following.
df.groupby(["Handcap","No-show"]).size()
# ### SMS_received
# Whether the patient received an appointment-reminder SMS.
df.groupby("SMS_received").size()
# Counting by appointment attendance gives the following.
# Surprisingly, patients who did not receive an SMS actually showed up more often.
df.groupby(["SMS_received","No-show"]).size()
# !pip install -U imbalanced-learn
xcol_name = list(set(df.columns) - set(["No-show", "ScheduledDay", "AppointmentDay"]))
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df[xcol_name], df["No-show"], test_size=0.5, random_state=0)
X_train.head()
from sklearn.svm import SVC
# SVC accepts only numeric features, so restrict to the numeric columns before fitting
model = SVC().fit(X_train.select_dtypes(include=[np.number]), y_train)
| classification_version_0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.008151, "end_time": "2020-09-21T20:05:03.466151", "exception": false, "start_time": "2020-09-21T20:05:03.458000", "status": "completed"} tags=[]
# # Daily Power Generation in India
#
# ### India is the world's third-largest producer and third-largest consumer of electricity.
#
# India's electricity sector is dominated by fossil fuels, in particular coal, which during the 2018-19 fiscal year produced about three-quarters of the country's electricity. The government is making efforts to increase investment in renewable energy. The government's National Electricity Plan of 2018 states that the country does not need more non-renewable power plants in the utility sector until 2027, with the commissioning of 50,025 MW coal-based power plants under construction and addition of 275,000 MW total renewable power capacity after the retirement of nearly 48,000 MW old coal-fired plants.
#
# India has recorded rapid growth in electricity generation since 1985, increasing from 179 TW-hr in 1985 to 1,057 TW-hr in 2012. The majority of the increase came from coal-fired plants and non-conventional renewable energy sources (RES), with the contribution from natural gas, oil, and hydro plants decreasing in 2012-2017. The gross utility electricity generation (excluding imports from Bhutan) was 1,384 billion kWh in 2019-20, representing 1.0% annual growth compared to 2018-2019. The contribution from renewable energy sources was nearly 20% of the total. In the year 2019-20, all the incremental electricity generation was contributed by renewable energy sources, as power generation from fossil fuels decreased.
# + [markdown] papermill={"duration": 0.00732, "end_time": "2020-09-21T20:05:03.480930", "exception": false, "start_time": "2020-09-21T20:05:03.473610", "status": "completed"} tags=[]
# ### Please visit this [link](https://npp.gov.in/dashBoard/cp-map-dashboard) for more interesting Dashboards from [https://npp.gov.in/](https://npp.gov.in/)
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 3.497153, "end_time": "2020-09-21T20:05:06.985471", "exception": false, "start_time": "2020-09-21T20:05:03.488318", "status": "completed"} tags=[]
import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import plotly.io as pio
import pandas_profiling
import os
import calendar
pio.templates.default = "plotly_dark"
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# + papermill={"duration": 0.060804, "end_time": "2020-09-21T20:05:07.059596", "exception": false, "start_time": "2020-09-21T20:05:06.998792", "status": "completed"} tags=[]
data1 = pd.read_csv('../input/daily-power-generation-in-india-20172020/file_02.csv')
data1.head()
# + [markdown] papermill={"duration": 0.009109, "end_time": "2020-09-21T20:05:07.077834", "exception": false, "start_time": "2020-09-21T20:05:07.068725", "status": "completed"} tags=[]
# ### Data Preprocessing
# + papermill={"duration": 0.023579, "end_time": "2020-09-21T20:05:07.112955", "exception": false, "start_time": "2020-09-21T20:05:07.089376", "status": "completed"} tags=[]
data1['Date'] = pd.to_datetime(data1['Date'])
# + papermill={"duration": 0.025728, "end_time": "2020-09-21T20:05:07.152914", "exception": false, "start_time": "2020-09-21T20:05:07.127186", "status": "completed"} tags=[]
data1['Thermal Generation Estimated (in MU)'] = data1['Thermal Generation Estimated (in MU)'].str.replace(',','').astype('float')
data1['Thermal Generation Estimated (in MU)'].values
# + papermill={"duration": 0.025541, "end_time": "2020-09-21T20:05:07.187903", "exception": false, "start_time": "2020-09-21T20:05:07.162362", "status": "completed"} tags=[]
data1['Thermal Generation Actual (in MU)'] = data1['Thermal Generation Actual (in MU)'].str.replace(',','').astype('float')
data1['Thermal Generation Actual (in MU)'].values
# + papermill={"duration": 0.027004, "end_time": "2020-09-21T20:05:07.229388", "exception": false, "start_time": "2020-09-21T20:05:07.202384", "status": "completed"} tags=[]
def time_series_overall(df, groupby, dict_features, filter=None):
temp = df.groupby(groupby).agg(dict_features)
fig = go.Figure()
for f,c in zip(dict_features, px.colors.qualitative.D3):
fig.add_traces(go.Scatter(y=temp[f].values,
x=temp.index,
name=f,
marker=dict(color=c)
))
fig.update_traces(marker_line_color='rgb(255,255,255)',
marker_line_width=2.5, opacity=0.7)
fig.update_layout(
width=1000,
xaxis=dict(title="Date", showgrid=False),
yaxis=dict(title="MU", showgrid=False),
legend=dict(
x=0,
y=1.2))
fig.show()
# + papermill={"duration": 0.341719, "end_time": "2020-09-21T20:05:07.580929", "exception": false, "start_time": "2020-09-21T20:05:07.239210", "status": "completed"} tags=[]
dict_features = {
"Thermal Generation Estimated (in MU)": "sum",
"Thermal Generation Actual (in MU)": "sum",
}
time_series_overall(data1, groupby="Date", dict_features=dict_features)
dict_features = {
"Nuclear Generation Estimated (in MU)": "sum",
"Nuclear Generation Actual (in MU)": "sum",
}
time_series_overall(data1, groupby="Date", dict_features=dict_features)
dict_features = {
"Hydro Generation Estimated (in MU)": "sum",
"Hydro Generation Actual (in MU)": "sum"
}
time_series_overall(data1, groupby="Date", dict_features=dict_features)
# + [markdown] papermill={"duration": 0.013832, "end_time": "2020-09-21T20:05:07.609317", "exception": false, "start_time": "2020-09-21T20:05:07.595485", "status": "completed"} tags=[]
# ### Report -
#
# - These graphs clearly show that the Actual Power Generated is much higher than the Estimated one!
# + [markdown] papermill={"duration": 0.013693, "end_time": "2020-09-21T20:05:07.636875", "exception": false, "start_time": "2020-09-21T20:05:07.623182", "status": "completed"} tags=[]
# ## State wise Visualisation of National Share
# + papermill={"duration": 0.028985, "end_time": "2020-09-21T20:05:07.679690", "exception": false, "start_time": "2020-09-21T20:05:07.650705", "status": "completed"} tags=[]
state_df = pd.read_csv('/kaggle/input/daily-power-generation-in-india-20172020/State_Region_corrected.csv')
state_df.head()
# + [markdown] papermill={"duration": 0.016663, "end_time": "2020-09-21T20:05:07.710809", "exception": false, "start_time": "2020-09-21T20:05:07.694146", "status": "completed"} tags=[]
# ## Region wise representation of Nation wise Power Distribution In India
# + papermill={"duration": 0.08005, "end_time": "2020-09-21T20:05:07.805298", "exception": false, "start_time": "2020-09-21T20:05:07.725248", "status": "completed"} tags=[]
state = state_df.groupby('Region')['National Share (%)'].sum().sort_values(ascending = False)
state.index
fig = px.pie(state,values = state.values, names=state.index,
title='Distribution of Power')
fig.show()
# + [markdown] papermill={"duration": 0.01467, "end_time": "2020-09-21T20:05:07.835010", "exception": false, "start_time": "2020-09-21T20:05:07.820340", "status": "completed"} tags=[]
# ## State wise representation of Nation wise Power Distribution
# + papermill={"duration": 0.119846, "end_time": "2020-09-21T20:05:07.969593", "exception": false, "start_time": "2020-09-21T20:05:07.849747", "status": "completed"} tags=[]
fig = px.bar(state_df.nlargest(15, "National Share (%)"),
x = 'National Share (%)',
y = 'State / Union territory (UT)',
text="National Share (%)",
color ='State / Union territory (UT)')
fig.show()
# + [markdown] papermill={"duration": 0.015338, "end_time": "2020-09-21T20:05:08.000760", "exception": false, "start_time": "2020-09-21T20:05:07.985422", "status": "completed"} tags=[]
# **Hey Kagglers, if you found this Visualisation Interesting and Insightful, do upvote and let me know your thoughts on it!**
| india-s-power-generation-report-2017-2020.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import time
import datetime
import matplotlib.pyplot as plt
import numpy as np
# # Working with large data sets
# ## Lazy evaluation, pure functions and higher order functions
# ### Lazy and eager evaluation
# A list comprehension is **eager**.
[x*x for x in range(3)]
# A generator expression is **lazy**.
(x*x for x in range(3))
# You can use generators as **iterators**.
g = (x*x for x in range(3))
next(g)
next(g)
next(g)
next(g)  # raises StopIteration: the generator is exhausted
# A generator is **single use**.
for i in g:
print(i, end=", ")
g = (x*x for x in range(3))
for i in g:
print(i, end=", ")
# The list constructor forces evaluation of the generator.
list(x*x for x in range(3))
# An eager **function**.
def eager_updown(n):
xs = []
for i in range(n):
xs.append(i)
for i in range(n, -1, -1):
xs.append(i)
return xs
eager_updown(3)
# A lazy **generator**.
def lazy_updown(n):
for i in range(n):
yield i
for i in range(n, -1, -1):
yield i
lazy_updown(3)
list(lazy_updown(3))
# ### Pure and impure functions
# A pure function is like a mathematical function. Given the same inputs, it always returns the same output, and has no side effects.
def pure(alist):
return [x*x for x in alist]
# An impure function has **side effects**.
def impure(alist):
for i in range(len(alist)):
alist[i] = alist[i]*alist[i]
return alist
xs = [1,2,3]
ys = pure(xs)
print(xs, ys)
ys = impure(xs)
print(xs, ys)
# #### Quiz
#
# Say if the following functions are pure or impure.
def f1(n):
return n//2 if n % 2==0 else n*3+1
def f2(n):
return np.random.random(n)
def f3(n):
n = 23
return n
def f4(a, n=[]):
n.append(a)
return n
# ### Higher order functions
list(map(f1, range(10)))
list(filter(lambda x: x % 2 == 0, range(10)))
from functools import reduce
reduce(lambda x, y: x + y, range(10), 0)
reduce(lambda x, y: x + y, [[1,2], [3,4], [5,6]], [])
# #### Using the operator module
#
# The `operator` module provides all the Python operators as functions.
import operator as op
reduce(op.mul, range(1, 6), 1)
list(map(op.itemgetter(1), [[1,2,3],[4,5,6],[7,8,9]]))
# #### Using itertools
import itertools as it
list(it.combinations(range(1,6), 3))
# Generate all Boolean combinations
list(it.product([0,1], repeat=3))
list(it.starmap(op.add, zip(range(5), range(5))))
list(it.takewhile(lambda x: x < 3, range(10)))
data = sorted('the quick brown fox jumps over the lazy dog'.split(), key=len)
for k, g in it.groupby(data, key=len):
print(k, list(g))
# #### Using toolz
# ! pip install toolz
import toolz as tz
list(tz.partition(3, range(10)))
list(tz.partition(3, range(10), pad=None))
n = 30
dna = ''.join(np.random.choice(list('ACTG'), n))
dna
tz.frequencies(tz.sliding_window(2, dna))
# #### Using pipes and the curried namespace
from toolz import curried as c
tz.pipe(
dna,
c.sliding_window(2), # using curry
c.frequencies,
)
composed = tz.compose(
c.frequencies,
c.sliding_window(2),
)
composed(dna)
# #### Processing many sets of DNA strings without reading into memory
m = 10000
n = 300
dnas = (''.join(np.random.choice(list('ACTG'), n, p=[.1, .2, .3, .4]))
for i in range(m))
dnas
tz.merge_with(sum,
tz.map(
composed,
dnas
)
)
# ## Working with out-of-core memory
# ### Using `memmap`
#
# You can selectively retrieve parts of `numpy` arrays stored on disk into memory for processing with `memmap`.
#
# Memory-mapped files are used for accessing small segments of large files on disk, without reading the entire file into memory. The `numpy.memmap` can be used anywhere an `ndarray` is used. The maximum size of a `memmap` array is limited by what the operating system allows - in particular, this is different on 32 and 64 bit architectures.
# #### Creating the memory mapped file
# +
n = 100
filename = 'random.dat'
shape = (n, 1000, 1000)
# create memmap
fp = np.memmap(filename, dtype='float64', mode='w+', shape=shape)
# store some data in it
for i in range(n):
x = np.random.random(shape[1:])
fp[i] = x
# flush to disk and remove file handler
del fp
# -
# #### Using the memory mapped file
# +
fp = np.memmap(filename, dtype='float64', mode='r', shape=shape)
# only one block is retrieved into memory at a time
start = time.time()
xs = [fp[i].mean() for i in range(n)]
elapsed = time.time() - start
print(np.mean(xs), 'Total: %.2fs Per file: %.2fs' % (elapsed, elapsed/n))
# -
# ### Using HDF5
#
# HDF5 is a hierarchical file format that allows selective disk reads, but also provides a tree structure for organizing your data sets. It can also include metadata annotation for documentation. Because of its flexibility, you should seriously consider using HDF5 for your data storage needs.
#
# I suggest using the python package `h5py` for working with HDF5 files. See [documentation](http://docs.h5py.org/en/latest/).
import h5py
# #### Creating an HDF5 file
# +
# %%time
n = 5
filename = 'random.hdf5'
shape = (n, 1000, 1000)
groups = ['Sim%02d' % i for i in range(5)]
with h5py.File(filename, 'w') as f:
# Create hierarchical group structure
for group in groups:
g = f.create_group(group)
# Add metadata for each group
g.attrs['created'] = str(datetime.datetime.now())
# Save 100 arrays in each group
for i in range(n):
x = np.random.random(shape[1:])
dset = g.create_dataset('x%06d' % i, shape=x.shape)
dset[:] = x
# Add metadata for each array
dset.attrs['created'] = str(datetime.datetime.now())
# -
# #### Using an HDF5 file
f = h5py.File('random.hdf5', 'r')
# The HDF5 objects can be treated like dictionaries.
for name in f:
print(name)
for key in f.keys():
print(key)
sim1 = f.get('Sim01')
list(sim1.keys())[:5]
# Or recursed through like trees
f.visit(lambda x: print(x))
# Retrieving data and attributes
sim2 = f.get('Sim02')
sim2.attrs['created']
x = sim2.get('x000003')
print(x.shape)
print(x.dtype)
print(list(x.attrs.keys()))
print(x.attrs['created'])
np.mean(x)
f.close()
# ### Using SQLite3
#
# When data is on a relational database, it is useful to do as much preprocessing as possible using SQL - this will be performed using highly efficient compiled routines on the (potentially remote) computer where the database exists.
#
# Here we will use SQLite3 together with `pandas` to summarize a (potentially) large database.
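# As a self-contained illustration of pushing the aggregation into the database, here is a sketch using only the standard-library `sqlite3` module with an in-memory toy table (the movies database used below is assumed to exist on disk):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (title TEXT, year INTEGER)")
con.executemany("INSERT INTO data VALUES (?, ?)",
                [("A", 1999), ("B", 1999), ("C", 2001)])

# The grouping, counting, and sorting happen inside SQLite, not in Python
rows = con.execute("""
    SELECT year, COUNT(*) AS number
    FROM data
    GROUP BY year
    ORDER BY number DESC
""").fetchall()
print(rows)  # [(1999, 2), (2001, 1)]
con.close()
```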
import pandas as pd
from sqlalchemy import create_engine
engine = create_engine('sqlite:///data/movies.db')
# +
q = '''
SELECT year, count(*) as number
FROM data
GROUP BY year
ORDER BY number DESC
'''
# The counting, grouping and sorting is done by the database, not pandas
# So this query will work even if the movies database is many terabytes in size
df = pd.read_sql_query(q, engine)
df.head()
# -
# ### Out-of-memory data conversions
#
# There is a convenient Python package called `odo` that will convert data between different formats without having to load all the data into memory first. This allows conversion of potentially huge files.
#
# [Odo](http://www.startrek.com/database_article/odo) is a shape shifting character in the Star Trek universe.
# ! pip install odo
import odo
odo.odo('sqlite:///data/movies.db::data', 'data/movies.csv')
# ! head data/movies.csv
# ## Probabilistic data structures
#
# A `data sketch` is a probabilistic algorithm or data structure that approximates some statistic of interest, typically using very little memory and processing time. Often they are applied to streaming data, and so must be able to incrementally process data. Many data sketches make use of hash functions to distribute data into buckets uniformly. Typically, data sketches have the following desirable properties
#
# - sub-linear in space
# - single scan
# - can be parallelized
# - can be combined (merge)
#
# Examples where counting distinct values is useful:
#
# - number of unique users in a Twitter stream
# - number of distinct records to be fetched by a database query
# - number of unique IP addresses accessing a website
# - number of distinct queries submitted to a search engine
# - number of distinct DNA motifs in genomics data sets (e.g. microbiome)
#
# Packages for data sketches in Python are relatively immature, and if you are interested, you could make a large contribution by creating a comprehensive open source library of data sketches in Python.
# ### HyperLogLog
#
# Counting the number of **distinct** elements exactly requires storage of all distinct elements (e.g. in a set) and hence grows with the cardinality $n$. Probabilistic data structures known as Distinct Value Sketches can do this with a tiny and fixed memory size.
#
# A hash function takes data of arbitrary size and converts it into a number in a fixed range. Ideally, given an arbitrary set of data items, the hash function generates numbers that follow a uniform distribution within the fixed range. Hash functions are immensely useful throughout computer science (for example - they power Python sets and dictionaries), and especially for the generation of probabilistic data structures.
#
# The binary digits in a (say) 32-bit hash are effectively random, and equivalent to a sequence of fair coin tosses. Hence the probability that we see a run of 5 zeros in the smallest hash so far suggests that we have added $2^5$ unique items so far. This is the intuition behind the loglog family of Distinct Value Sketches. Note that the biggest count we can track with 32 bits is $2^{32} = 4294967296$.
#
# The accuracy of the sketch can be improved by averaging results with multiple coin flippers. In practice, this is done by using the first $k$ bit registers to identify $2^k$ different coin flippers. Hence, the max count is now $2 ** (32 - k)$. The hyperloglog algorithm uses the harmonic mean of the $2^k$ flippers which reduces the effect of outliers and hence the variance of the estimate.
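# The run-of-leading-zeros intuition can be demonstrated from scratch. This toy estimator is not the full HyperLogLog algorithm (no bucketing, no harmonic averaging, so its variance is large); it only tracks the longest run of leading zero bits seen in any hash:

```python
import hashlib

def leading_zeros(h, bits=32):
    """Count leading zero bits in a `bits`-bit integer."""
    for i in range(bits):
        if h & (1 << (bits - 1 - i)):
            return i
    return bits

def crude_count_estimate(items, bits=32):
    max_zeros = 0
    for item in items:
        # The first 32 bits of a cryptographic hash act like fair coin tosses
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:4], "big")
        max_zeros = max(max_zeros, leading_zeros(h, bits))
    # A longest run of k leading zeros suggests roughly 2**k distinct items
    return 2 ** max_zeros

est = crude_count_estimate(str(i) for i in range(10000))
print(est)  # a rough power-of-two estimate of 10,000 distinct items
```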
# ! pip install hyperloglog
from hyperloglog import HyperLogLog
# #### Compare unique counts with set and hyperloglog
# +
def flatten(xs):
return (x for sublist in xs for x in sublist)
def error(a, b, n):
return abs(len(a) - len(b))/n
print('True\t\tHLL\t\tRel Error')
with open('data/Ulysses.txt') as f:
word_list = flatten(line.split() for line in f)
s = set([])
hll = HyperLogLog(error_rate=0.01)
for i, word in enumerate(word_list):
s.add(word)
hll.add(word)
if i%int(.2e5)==0:
print('%8d\t%8d\t\t%.3f' %
(len(s), len(hll),
0 if i==0 else error(s, hll, i)))
# -
# ### Bloom filters
#
# Bloom filters are designed to answer queries about whether a specific item is in a collection. If the answer is NO, then it is definitive. However, if the answer is yes, it might be a false positive. The possibility of a false positive makes the Bloom filter a probabilistic data structure.
#
# A bloom filter consists of a bit vector of length $k$ initially set to zero, and $n$ different hash functions that return a hash value that will fall into one of the $k$ bins. In the construction phase, for every item in the collection, $n$ hash values are generated by the $n$ hash functions, and every position indicated by a hash value is flipped to one. In the query phase, given an item, $n$ hash values are calculated as before - if any of these $n$ positions is a zero, then the item is definitely not in the collection. However, because of the possibility of hash collisions, even if all the positions are one, this could be a false positive. Clearly, the rate of false positives depends on the ratio of zero and one bits, and there are Bloom filter implementations that will dynamically bound the ratio and hence the false positive rate.
#
# Possible uses of a Bloom filter include:
#
# - Does a particular sequence motif appear in a DNA string?
# - Has this book been recommended to this customer before?
# - Check if an element exists on disk before performing I/O
# - Check if URL is a potential malware site using in-browser Bloom filter to minimize network communication
# - As an alternative way to generate distinct value counts cheaply (only increment count if Bloom filter says NO)
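# The construction and query phases described above fit in a few lines. A from-scratch sketch (using one seeded hash in place of `n` truly independent hash functions, a common practical shortcut):

```python
import hashlib

class ToyBloomFilter:
    def __init__(self, k=1000, n_hashes=3):
        self.k = k                # length of the bit vector
        self.n_hashes = n_hashes  # number of hash functions
        self.bits = [0] * k

    def _positions(self, item):
        # Derive n different hash values by seeding one hash with a counter
        for seed in range(self.n_hashes):
            digest = hashlib.sha1(f"{seed}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.k

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # A single zero bit means definitely absent; all ones means
        # "probably present" (could be a false positive)
        return all(self.bits[pos] for pos in self._positions(item))

bf = ToyBloomFilter()
for word in ["banana", "Dublin"]:
    bf.add(word)
print("banana" in bf, "masochist" in bf)
```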
# ! pip install git+https://github.com/jaybaird/python-bloomfilter.git
from pybloom import ScalableBloomFilter
# The Scalable Bloom Filter grows as needed to keep the error rate small
sbf = ScalableBloomFilter(error_rate=0.001)
with open('data/Ulysses.txt') as f:
word_set = set(flatten(line.split() for line in f))
for word in word_set:
sbf.add(word)
# #### Ask Bloom filter if test words were in Ulysses
test_words = ['banana', 'artist', 'Dublin', 'masochist', 'Obama']
for word in test_words:
print(word, word in sbf)
for word in test_words:
print(word, word in word_set)
# ## Small-scale distributed programming
# ### Using `dask`
#
# For data sets that are not too big (say up to 1 TB), it is typically sufficient to process on a single workstation. The package dask provides 3 data structures that mimic regular Python data structures but perform computation in a distributed way allowing you to make optimal use of multiple cores easily.
#
# These structures are
#
# - dask array ~ numpy array
# - dask bag ~ Python dictionary
# - dask dataframe ~ pandas dataframe
#
# From the [official documentation](http://dask.pydata.org/en/latest/index.html),
#
# ```
# Dask is a simple task scheduling system that uses directed acyclic graphs (DAGs) of tasks to break up large computations into many small ones.
#
# Dask enables parallel computing through task scheduling and blocked algorithms. This allows developers to write complex parallel algorithms and execute them in parallel either on a modern multi-core machine or on a distributed cluster.
#
# On a single machine dask increases the scale of comfortable data from fits-in-memory to fits-on-disk by intelligently streaming data from disk and by leveraging all the cores of a modern CPU.
# ```
# ! pip install dask
import dask
import dask.array as da
import dask.bag as db
import dask.dataframe as dd
# #### `dask` arrays
#
# These behave like `numpy` arrays, but break a massive job into **tasks** that are then executed by a **scheduler**. The default scheduler uses threading but you can also use multiprocessing or distributed or even serial processing (mainly for debugging). You can tell the dask array how to break the data into **chunks** for processing.
#
# From official documents
#
# ```
# For performance, a good choice of chunks follows the following rules:
#
# A chunk should be small enough to fit comfortably in memory. We’ll have many chunks in memory at once.
# A chunk must be large enough so that computations on that chunk take significantly longer than the 1ms overhead per task that dask scheduling incurs. A task should take longer than 100ms.
# Chunks should align with the computation that you want to do. For example if you plan to frequently slice along a particular dimension then it’s more efficient if your chunks are aligned so that you have to touch fewer chunks. If you want to add two arrays then its convenient if those arrays have matching chunks patterns.
# ```
# +
# We reuse the 100 * 1000 * 1000 random numbers in the memmap file on disk
n = 100
filename = 'random.dat'
shape = (n, 1000, 1000)
fp = np.memmap(filename, dtype='float64', mode='r', shape=shape)
# We can decide on the chunk size to be distributed for computing
xs = [da.from_array(fp[i], chunks=(200,500)) for i in range(n)]
xs = da.concatenate(xs)
avg = xs.mean().compute()
# -
avg
# +
# Typically we store Dask arrays into HDF5
da.to_hdf5('data/xs.hdf5', '/foo/xs', xs)
# -
with h5py.File('data/xs.hdf5', 'r') as f:
print(f.get('/foo/xs').shape)
# #### `dask` data frames
#
# Dask dataframes can treat multiple pandas dataframes that might not simultaneously fit into memory like a single dataframe. See use of globbing to specify multiple source files.
for i in range(5):
f = 'data/x%03d.csv' % i
np.savetxt(f, np.random.random((1000, 5)), delimiter=',')
df = dd.read_csv('data/x*.csv', header=None)
print(df.describe().compute())
# #### `dask` bags
#
#
# Dask bags work like dictionaries for unstructured or semi-structured data sets, typically over many files.
# #### The AA subdirectory consists of 101 1 MB plain text files from the English Wikipedia
text = db.read_text('data/wiki/AA/*')
# +
# %%time
words = text.str.split().concat().frequencies().topk(10, key=lambda x: x[1])
top10 = words.compute()
# -
print(top10)
# This is slow because of disk access. Fix by switching to the synchronous scheduler, which avoids the threading overhead here.
# +
# %%time
words = text.str.split().concat().frequencies().topk(10, key=lambda x: x[1])
top10 = words.compute(scheduler="synchronous")  # older dask used get=dask.async.get_sync; `async` is now a reserved word
# -
print(top10)
# #### Conversion from bag to dataframe
import string
freqs = (text.
str.translate({ord(char): None for char in string.punctuation}).
str.lower().
str.split().
concat().
frequencies())
# ##### Get the top 5 words sorted by key (not value)
freqs.topk(5).compute(scheduler="synchronous")
df_freqs = freqs.to_dataframe(columns=['word', 'n'])
df_freqs.head(n=5)
# #### The compute method converts to a regular pandas dataframe
#
# For data sets that fit in memory, pandas is faster and allows some operations like sorting that are not provided by dask dataframes.
df = df_freqs.compute()
df.sort_values('word', ascending=False).head(5)
| notebook/17_Functional_Programming.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.4 ('tvm-test')
# language: python
# name: python3
# ---
# (sphx_glr_how_to_compile_models_from_pytorch.py)=
# # Compile PyTorch Models
#
# **Author**: [<NAME>](https://github.com/alexwong/)
#
# This article is an introductory tutorial to deploy PyTorch models with Relay.
#
# For us to begin with, PyTorch should be installed. TorchVision is also required since we will use it as our model zoo.
#
# A quick solution is to install via pip:
#
# ```python
# pip install torch==1.7.0
# pip install torchvision==0.8.1
# ```
#
# or refer to the [official site](https://pytorch.org/get-started/locally/).
#
# PyTorch versions should be backwards compatible but should be used with the proper TorchVision version.
#
# Currently, TVM supports PyTorch 1.7 and 1.4. Other versions may be unstable.
# +
import numpy as np
# PyTorch imports
import torch
import torchvision
import set_env  # set up the TVM environment
import tvm
from tvm import relay
from tvm.contrib.download import download_testdata
# -
# ## Load a pretrained PyTorch model
# +
model_name = "resnet18"
model = getattr(torchvision.models, model_name)(pretrained=True)
model = model.eval()
# We grab the TorchScripted model via tracing
input_shape = [1, 3, 224, 224]
input_data = torch.randn(input_shape)
scripted_model = torch.jit.trace(model, input_data).eval()
# -
# ## Load a test image
# +
from PIL import Image
img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
img_path = download_testdata(img_url, "cat.png", module="data")
img = Image.open(img_path).resize((224, 224))
# Preprocess the image and convert to tensor
from torchvision import transforms
my_preprocess = transforms.Compose(
[
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
]
)
img = my_preprocess(img)
img = np.expand_dims(img, 0)
# -
# ## Import the graph to Relay
#
# Convert the PyTorch graph to a Relay graph. The `input_name` can be arbitrary.
input_name = "input0"
shape_list = [(input_name, img.shape)]
mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
# ## Relay build
#
# Compile the graph to an llvm target with the given input specification:
target = tvm.target.Target("llvm", host="llvm")
dev = tvm.cpu(0)
with tvm.transform.PassContext(opt_level=3):
lib = relay.build(mod, target=target, params=params)
# ## Execute the portable graph on TVM
#
# Now we can try deploying the compiled model on the target.
# +
from tvm.contrib import graph_executor
dtype = "float32"
m = graph_executor.GraphModule(lib["default"](dev))
# Set inputs
m.set_input(input_name, tvm.nd.array(img.astype(dtype)))
# Execute
m.run()
# Get outputs
tvm_output = m.get_output(0)
# -
# ## Look up synset names
#
# Look up the prediction's top-1 index in the 1000-class synset.
# +
synset_url = "".join(
[
"https://raw.githubusercontent.com/Cadene/",
"pretrained-models.pytorch/master/data/",
"imagenet_synsets.txt",
]
)
synset_name = "imagenet_synsets.txt"
synset_path = download_testdata(synset_url, synset_name, module="data")
with open(synset_path) as f:
synsets = f.readlines()
synsets = [x.strip() for x in synsets]
splits = [line.split(" ") for line in synsets]
key_to_classname = {spl[0]: " ".join(spl[1:]) for spl in splits}
class_url = "".join(
[
"https://raw.githubusercontent.com/Cadene/",
"pretrained-models.pytorch/master/data/",
"imagenet_classes.txt",
]
)
class_name = "imagenet_classes.txt"
class_path = download_testdata(class_url, class_name, module="data")
with open(class_path) as f:
class_id_to_key = f.readlines()
class_id_to_key = [x.strip() for x in class_id_to_key]
# Get top-1 result for TVM
top1_tvm = np.argmax(tvm_output.numpy()[0])
tvm_class_key = class_id_to_key[top1_tvm]
# Convert input to PyTorch variable and get PyTorch result for comparison
with torch.no_grad():
torch_img = torch.from_numpy(img)
output = model(torch_img)
# Get top-1 result for PyTorch
top1_torch = np.argmax(output.numpy())
torch_class_key = class_id_to_key[top1_torch]
print("Relay top-1 id: {}, class name: {}".format(top1_tvm, key_to_classname[tvm_class_key]))
print("Torch top-1 id: {}, class name: {}".format(top1_torch, key_to_classname[torch_class_key]))
| xinetzone/docs/how_to/compile_models/from_pytorch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: dev
# kernelspec:
# display_name: Python [conda env:root] *
# language: python
# name: conda-root-py
# ---
# Update scikit-learn to prevent version mismatches
# (the PyPI package name is scikit-learn; the bare "sklearn" package is a deprecated alias)
# !pip install scikit-learn --upgrade
# install joblib. This will be used to save your model.
# Restart your kernel after installing
# !pip install joblib
import pandas as pd
# # Read the CSV and Perform Basic Data Cleaning
df = pd.read_csv("exoplanet_data.csv")
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
df.head()
# # Select your features (columns)
# Set the target and features. The features will be used as your X values.
target = df["koi_disposition"]
data = df.drop("koi_disposition", axis=1)
data.head()
# # Create a Train Test Split
#
# Use `koi_disposition` for the y values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data, target, random_state=42)
X_train.head()
X_test.head()
# # Pre-processing
#
# Scale the data using the MinMaxScaler and perform some feature selection
# +
from sklearn.preprocessing import MinMaxScaler
# Scale your data
X_scaler = MinMaxScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
print(X_train_scaled)
# -
# # Train the SVC Model
# +
from sklearn.svm import SVC
model1 = SVC(kernel='linear')
model1.fit(X_train_scaled, y_train)
print(f"Training Data Score: {model1.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {model1.score(X_test_scaled, y_test)}")
# -
# # Hyperparameter Tuning
#
# Use `GridSearchCV` to tune the model's parameters
# +
# Create the GridSearchCV model
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [1, 5, 10, 50],
              'gamma': [0.0001, 0.0005, 0.001, 0.005]}  # note: gamma is ignored when kernel='linear'
grid1 = GridSearchCV(model1, param_grid, verbose=3)
# -
# Train the model with GridSearch
grid1.fit(X_train_scaled, y_train)
print(grid1.best_params_)
print(grid1.best_score_)
grid1.score(X_train_scaled, y_train)
grid1.score(X_test_scaled, y_test)
predictions = grid1.predict(X_test_scaled)
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions))
# # Save the Model
# save your model by updating "your_name" with your name
# and "your_model" with your model variable
# be sure to turn this in to BCS
# if joblib fails to import, try running the command to install in terminal/git-bash
import joblib
filename = 'model1.sav'
joblib.dump(model1, filename)
# # Train Random Forest Model
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf = rf.fit(X_train_scaled, y_train)
print(rf.score(X_train_scaled, y_train))
print(rf.score(X_test_scaled, y_test))
sorted(zip(rf.feature_importances_, data), reverse=True)
param_grid2 = {'n_estimators': [250, 300, 350],
'max_depth': [125, 150, 175]}
grid2 = GridSearchCV(rf, param_grid2, verbose=3)
grid2.fit(X_train_scaled, y_train)
print(grid2.best_params_)
print(grid2.best_score_)
grid2.score(X_train_scaled, y_train)
grid2.score(X_test_scaled, y_test)
predictions2 = grid2.predict(X_test_scaled)
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions2))
import joblib
filename = 'model2.sav'
joblib.dump(rf, filename)
| .ipynb_checkpoints/model_1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Nesting Depth
# +
# Each digit d must end up nested inside exactly d matched parentheses, using
# the minimum number of parentheses overall: open or close just enough to move
# from the previous digit's depth to the current digit's depth.
def nestingDepth(brace_input):
    output = ''
    opened = 0
    for brace_digit in brace_input:
        digit = int(brace_digit)
        if digit > opened:
            output += "(" * (digit - opened)
        elif digit < opened:
            output += ")" * (opened - digit)
        output += brace_digit
        opened = digit
    output += ")" * opened
    return output
# nestingDepth(brace_input)
# -
# Input
tests = int(input())
# tests = 1
for test in range(tests):
brace_input = str(input())
# brace_input = "0212232201"
output = nestingDepth(brace_input)
print("Case #{0}: {1}".format(test+1, output))
| algorithm/strings/CodeJam2020-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Tce3stUlHN0L"
# ##### Copyright 2019 The TensorFlow Authors.
# + cellView="form" id="tuOe1ymfHZPu"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="qFdPvlXBOdUN"
# # Mixed precision
# + [markdown] id="MfBg1C5NB3X0"
# <table class="tfo-notebook-buttons" align="left">
# <td> <a target="_blank" href="https://tensorflow.google.cn/guide/mixed_precision"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png"> View on TensorFlow.org</a> </td>
# <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/mixed_precision.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
# <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/mixed_precision.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
# <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/mixed_precision.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
# </table>
# + [markdown] id="xHxb-dlhMIzW"
# ## Overview
#
# Mixed precision is the use of both 16-bit and 32-bit floating-point types in a model during training to make it run faster and use less memory. By keeping certain parts of the model in 32-bit types for numeric stability, the model can have a lower step time while training equally well in terms of evaluation metrics such as accuracy. This guide describes how to use the Keras mixed precision API to speed up your models. Using this API can improve performance by more than 3x on modern GPUs and by 60% on TPUs.
# + [markdown] id="3vsYi_bv7gS_"
# Today, most models use the float32 dtype, which takes 32 bits of memory. However, there are two lower-precision dtypes, float16 and bfloat16, each of which takes 16 bits instead. Modern accelerators can run operations faster in the 16-bit dtypes, as they have specialized hardware for 16-bit computations, and 16-bit dtypes can be read from memory faster.
#
# NVIDIA GPUs can run operations in float16 faster than in float32, and TPUs can run operations in bfloat16 faster than float32. Therefore, these lower-precision dtypes should be used whenever possible on those devices. However, for numeric reasons, some variables and computations should still be in float32 so that the model trains to the same quality. The Keras mixed precision API lets you mix either float16 or bfloat16 with float32, getting the performance benefits of float16/bfloat16 and the numeric stability benefits of float32.
#
# Note: In this guide, the term "numeric stability" refers to how a model's quality is affected by using a lower-precision dtype instead of a higher-precision one. An operation is "numerically unstable" in float16 or bfloat16 if running it in one of those dtypes causes the model to have worse evaluation accuracy or other metrics compared to running the operation in float32.
# + [markdown] id="MUXex9ctTuDB"
# ## Setup
# + id="IqR2PQG4ZaZ0"
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import mixed_precision
# + [markdown] id="814VXqdh8Q0r"
# ## Supported hardware
#
# While mixed precision will run on most hardware, it will only speed up models on recent NVIDIA GPUs and Cloud TPUs. NVIDIA GPUs support a mix of float16 and float32, while TPUs support a mix of bfloat16 and float32.
#
# Among NVIDIA GPUs, those with compute capability 7.0 or higher see the greatest performance benefit from mixed precision, because they have special hardware units called Tensor Cores that accelerate float16 matrix multiplications and convolutions. Older GPUs offer no math-performance benefit from mixed precision, although memory and bandwidth savings can still enable some speedups. You can look up your GPU's compute capability on NVIDIA's [CUDA GPU web page](https://developer.nvidia.com/cuda-gpus). Examples of GPUs that benefit most from mixed precision include RTX GPUs, the V100, and the A100.
# + [markdown] id="-q2hisD60F0_"
# Note: If you run the examples in this guide in Google Colab, the GPU runtime is typically connected to a P100. The P100 has compute capability 6.0 and is not expected to show a significant speedup.
#
# You can check your GPU type with the following command. It requires the NVIDIA drivers to be installed and will otherwise raise an error.
# + id="j-Yzg_lfkoa_"
# !nvidia-smi -L
# + [markdown] id="hu_pvZDN0El3"
# All Cloud TPUs support bfloat16.
#
# Even on CPUs and older GPUs, where no speedup is expected, the mixed precision API can still be used for unit testing, debugging, or just to try out the API. On CPUs, however, mixed precision will run significantly slower.
# + [markdown] id="HNOmvumB-orT"
# ## Setting the dtype policy
# + [markdown] id="54ecYY2Hn16E"
# To use mixed precision in Keras, you need to create a `tf.keras.mixed_precision.Policy`, typically referred to as a *dtype policy*. Dtype policies specify the dtypes layers will run in. In this guide, you will construct a policy from the string `'mixed_float16'` and set it as the global policy. This will cause subsequently created layers to use mixed precision with a mix of float16 and float32.
# + id="x3kElPVH-siO"
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_global_policy(policy)
# + [markdown] id="6ids1rT_UM5q"
# For short, you can directly pass a string to `set_global_policy`, which is typically done in practice.
# + id="6a8iNFoBUSqR"
# Equivalent to the two lines above
mixed_precision.set_global_policy('mixed_float16')
# + [markdown] id="oGAMaa0Ho3yk"
# The policy specifies two important aspects of a layer: the dtype the layer's computations are done in, and the dtype of the layer's variables. Above, you created a `mixed_float16` policy (i.e. a `mixed_precision.Policy` constructed by passing the string `'mixed_float16'` to its constructor). With this policy, layers use float16 computations and float32 variables. Computations are done in float16 for performance, while variables are kept in float32 for numeric stability. You can query these properties of the policy directly.
# + id="GQRbYm4f8p-k"
print('Compute dtype: %s' % policy.compute_dtype)
print('Variable dtype: %s' % policy.variable_dtype)
# + [markdown] id="MOFEcna28o4T"
# As mentioned before, the `mixed_float16` policy will most significantly improve performance on NVIDIA GPUs with compute capability of at least 7.0. The policy will run on other GPUs and CPUs but may not improve performance. For TPUs, the `mixed_bfloat16` policy should be used instead.
# + [markdown] id="cAHpt128tVpK"
# ## Building the model
# + [markdown] id="nB6ujaR8qMAy"
# Next, let's start building a simple model. Very small models typically do not benefit from mixed precision, because overhead from the TensorFlow runtime usually dominates the execution time, making any performance improvement on the GPU negligible. Therefore, if a GPU is used, let's build two large `Dense` layers with 4096 units each.
# + id="0DQM24hL_14Q"
inputs = keras.Input(shape=(784,), name='digits')
if tf.config.list_physical_devices('GPU'):
print('The model will run with 4096 units on a GPU')
num_units = 4096
else:
# Use fewer units on CPUs so the model finishes in a reasonable amount of time
print('The model will run with 64 units on a CPU')
num_units = 64
dense1 = layers.Dense(num_units, activation='relu', name='dense_1')
x = dense1(inputs)
dense2 = layers.Dense(num_units, activation='relu', name='dense_2')
x = dense2(x)
# + [markdown] id="2dezdcqnOXHk"
# Each layer has a policy and uses the global policy by default. Each of the `Dense` layers therefore has the `mixed_float16` policy, because you set the global policy to `mixed_float16` earlier. This causes the dense layers to do float16 computations and keep float32 variables. They cast their inputs to float16 in order to do float16 computations, so their outputs are float16 as a result. Their variables are float32 and are cast to float16 when the layers are called, avoiding errors from dtype mismatches.
# + id="kC58MzP4PEcC"
print(dense1.dtype_policy)
print('x.dtype: %s' % x.dtype.name)
# 'kernel' is dense1's variable
print('dense1.kernel.dtype: %s' % dense1.kernel.dtype.name)
# + [markdown] id="_WAZeqDyqZcb"
# Next, create the output predictions. Normally you can create them as follows, but this is not always numerically stable with float16.
# + id="ybBq1JDwNIbz"
# INCORRECT: softmax and model output will be float16, when it should be float32
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
print('Outputs dtype: %s' % outputs.dtype.name)
# + [markdown] id="D0gSWxc9NN7q"
# A softmax activation at the end of the model should be float32. But because the dtype policy is `mixed_float16`, the softmax activation would normally have a float16 compute dtype and output float16 tensors.
#
# This can be fixed by separating the Dense and softmax layers, and passing `dtype='float32'` to the softmax layer.
# + id="IGqCGn4BsODw"
# CORRECT: softmax and model output are float32
x = layers.Dense(10, name='dense_logits')(x)
outputs = layers.Activation('softmax', dtype='float32', name='predictions')(x)
print('Outputs dtype: %s' % outputs.dtype.name)
# + [markdown] id="tUdkY_DHsP8i"
# Passing `dtype='float32'` to the softmax layer's constructor overrides that layer's dtype policy to be the `float32` policy, which does computations and keeps variables in float32. Equivalently, you could have passed `dtype=mixed_precision.Policy('float32')`; layers always convert the dtype argument to a policy. Because the `Activation` layer has no variables, the policy's variable dtype is ignored, but the policy's compute dtype of float32 causes the softmax and the model output to be float32.
#
# Adding a float16 softmax in the middle of a model is fine, but a softmax at the end of the model should be float32. The reason is that if the intermediate tensor flowing from the softmax to the loss is float16 or bfloat16, numeric issues may occur.
#
# You can override the dtype of any layer to be float32 by passing `dtype='float32'` if you think it will not be numerically stable with float16 computations. But typically this is only necessary on the last layer of the model, as most layers have sufficient precision with `mixed_float16` and `mixed_bfloat16`.
#
# Even if the model does not end in a softmax, the outputs should still be float32. While unnecessary for this specific model, the model outputs can be cast to float32 with the following:
# + id="dzVAoLI56jR8"
# The linear activation is an identity function. So this simply casts 'outputs'
# to float32. In this particular case, 'outputs' is already float32 so this is a
# no-op.
outputs = layers.Activation('linear', dtype='float32')(outputs)
# + [markdown] id="tpY4ZP7us5hA"
# Next, finish and compile the model, and generate input data:
# + id="g4OT3Z6kqYAL"
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop(),
metrics=['accuracy'])
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
# + [markdown] id="0Sm8FJHegVRN"
# This example casts the input data from int8 to float32. You don't cast to float16, since the division by 255 runs on the CPU, where float16 operations are slower than float32 operations. In this case the performance difference is negligible, but in general you should run input-processing math in float32 when it runs on the CPU. The first layer of the model will cast the inputs to float16, as each layer casts floating-point inputs to its compute dtype.
#
# The initial weights of the model are retrieved. This allows training from scratch again by reloading the weights.
# + id="0UYs-u_DgiA5"
initial_weights = model.get_weights()
# + [markdown] id="zlqz6eVKs9aU"
# ## Training the model with Model.fit
#
# Next, train the model:
# + id="hxI7-0ewmC0A"
history = model.fit(x_train, y_train,
batch_size=8192,
epochs=5,
validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])
# + [markdown] id="MPhJ9OPWt4x5"
# Notice the model prints the time per step in the logs: for example, "25ms/step". The first epoch may be slower, as TensorFlow spends some time optimizing the model, but afterwards the time per step should stabilize.
#
# If you are running this guide in Colab, you can compare the performance of mixed precision with float32. To do so, change the policy from `mixed_float16` to `float32` in the "Setting the dtype policy" section, then rerun all the cells up to this point. On GPUs with compute capability of at least 7.0, you should see the time per step significantly increase, indicating mixed precision sped up the model. Make sure to change the policy back to `mixed_float16` and rerun the cells before continuing with the guide.
#
# On GPUs with compute capability of at least 8.0 (Ampere GPUs and above), you will likely see no performance improvement in this guide's small model when using mixed precision compared to float32. This is due to [TensorFloat-32](https://tensorflow.google.cn/api_docs/python/tf/config/experimental/enable_tensor_float_32_execution), which automatically uses lower-precision math in certain float32 ops such as `tf.linalg.matmul`. TensorFloat-32 gives some of the performance advantages of mixed precision when using float32. However, in real-world models you will still typically see significant performance improvements from mixed precision due to memory bandwidth savings and ops that TensorFloat-32 does not support.
#
# If you run mixed precision on a TPU, you will not see as much of a performance gain compared to running mixed precision on GPUs, especially pre-Ampere GPUs. This is because TPUs do certain ops in bfloat16 under the hood even with the default dtype policy of float32, similar to how Ampere GPUs use TensorFloat-32 by default. Compared to Ampere GPUs, TPUs typically see fewer performance gains from mixed precision on real-world models.
#
# For many real-world models, mixed precision also allows you to double the batch size without running out of memory, as float16 tensors take half the memory. This does not apply to this guide's small model, however, as you can likely run it in any dtype where each batch consists of the entire MNIST dataset of 60,000 images.
# + [markdown] id="mNKMXlCvHgHb"
# ## Loss scaling
#
# Loss scaling is a technique that `tf.keras.Model.fit` automatically performs with the `mixed_float16` policy to avoid numeric underflow. This section describes what loss scaling is, and the next section describes how to use it with a custom training loop.
# + [markdown] id="1xQX62t2ow0g"
# ### Underflow and overflow
#
# The float16 data type has a narrow dynamic range compared to float32. This means values above $65504$ will overflow to infinity, and values below $6.0 \times 10^{-8}$ will underflow to zero. float32 and bfloat16 have a much higher dynamic range, so overflow and underflow are generally not a problem.
#
# For example:
# + id="CHmXRb-yRWbE"
x = tf.constant(256, dtype='float16')
(x ** 2).numpy() # Overflow
# + id="5unZLhN0RfQM"
x = tf.constant(1e-5, dtype='float16')
(x ** 2).numpy() # Underflow
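The same overflow and underflow behavior can be reproduced without TensorFlow, using NumPy's float16 (a small sketch for illustration; the thresholds match the ones quoted above):

```python
import numpy as np

x = np.float16(256)
overflowed = x * x                 # 256**2 = 65536 > 65504, the float16 max -> inf
y = np.float16(1e-5)
underflowed = y * y                # 1e-10 is below the smallest float16 subnormal -> 0.0

print(overflowed)                  # inf
print(underflowed)                 # 0.0
print(np.finfo(np.float16).max)    # 65504.0
```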
# + [markdown] id="pUIbhQypRVe_"
# In practice, overflow with float16 rarely occurs, and underflow during the forward pass is also rare. However, during the backward pass, gradients can underflow to zero. Loss scaling is a technique to prevent this underflow.
# + [markdown] id="FAL5qij_oNqJ"
# ### Loss scaling overview
#
# The basic concept of loss scaling is simple: just multiply the loss by some large number, say $1024$; this number is the *loss scale*. Scaling the loss scales the gradients by $1024$ as well, greatly reducing the chance of underflow. Once the final gradients are computed, divide them by $1024$ to bring them back to their correct values.
#
# The pseudocode for this process is:
#
# ```
# loss_scale = 1024
# loss = model(inputs)
# loss *= loss_scale
# # Assume `grads` are float32. You do not want to divide float16 gradients.
# grads = compute_gradient(loss, model.trainable_variables)
# grads /= loss_scale
# ```
#
# Choosing a loss scale can be tricky. If the loss scale is too low, gradients may still underflow to zero. If it is too high, the opposite problem occurs: the gradients may overflow to infinity.
#
# To solve this, TensorFlow dynamically determines the loss scale, so you do not have to choose one manually. If you use `tf.keras.Model.fit`, loss scaling is done for you and no extra work is needed. If you use a custom training loop, you must explicitly use the special optimizer wrapper `tf.keras.mixed_precision.LossScaleOptimizer` in order to use loss scaling. This is described in the next section.
#
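The pseudocode above can be made concrete with a minimal NumPy sketch. The 1024 scale and the tiny factor values below are illustrative stand-ins, not values from a real model:

```python
import numpy as np

loss_scale = 1024.0
activation = np.float16(1e-5)      # stand-in for a small backprop factor
upstream = np.float16(1e-4)        # stand-in for an upstream gradient

# Without scaling, the float16 product (~1e-9) underflows to zero.
grad = activation * upstream

# Scaling one factor first keeps the product representable in float16;
# the unscale step is done in float32, matching the pseudocode's comment.
scaled_grad = np.float16(activation * loss_scale) * upstream
recovered = np.float32(scaled_grad) / loss_scale

print(grad)        # 0.0
print(recovered)   # roughly 1e-9
```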
# + [markdown] id="yqzbn8Ks9Q98"
# ## Training the model with a custom training loop
# + [markdown] id="CRANRZZ69nA7"
# So far, you have trained a Keras model with mixed precision using `tf.keras.Model.fit`. Next, you will use mixed precision with a custom training loop. If you do not already know what a custom training loop is, please read the [custom training guide](../tutorials/customization/custom_training_walkthrough.ipynb) first.
# + [markdown] id="wXTaM8EEyEuo"
# Running a custom training loop with mixed precision requires two changes over running it in float32:
#
# 1. Build the model with mixed precision (you already did this).
# 2. Explicitly use loss scaling if `mixed_float16` is used.
#
# + [markdown] id="M2zpp7_65mTZ"
# For step (2), you will use the `tf.keras.mixed_precision.LossScaleOptimizer` class, which wraps an optimizer and applies loss scaling. By default, it dynamically determines the loss scale so you do not have to choose one. Construct a `LossScaleOptimizer` as follows.
# + id="ogZN3rIH0vpj"
optimizer = keras.optimizers.RMSprop()
optimizer = mixed_precision.LossScaleOptimizer(optimizer)
# + [markdown] id="FVy5gnBqTE9z"
# If you wish, you can choose an explicit loss scale or otherwise customize the loss scaling behavior, but it is highly recommended to keep the default behavior, as it has been found to work well on all known models. See the `tf.keras.mixed_precision.LossScaleOptimizer` documentation if you want to customize the loss scaling behavior.
# + [markdown] id="JZYEr5hA3MXZ"
# Next, define the loss object and the `tf.data.Dataset`s:
# + id="9cE7Mm533hxe"
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
train_dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
.shuffle(10000).batch(8192))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(8192)
# + [markdown] id="4W0zxrxC3nww"
# Next, define the training step function. You will use two new methods from the loss scale optimizer to scale the loss and unscale the gradients:
#
# - `get_scaled_loss(loss)`: multiplies the loss by the loss scale
# - `get_unscaled_gradients(gradients)`: takes a list of scaled gradients as input and divides each one by the loss scale to unscale it
#
# These functions must be used to prevent underflow in the gradients. `LossScaleOptimizer.apply_gradients` will then apply the gradients if none of them have Infs or NaNs. It will also update the loss scale, halving it if the gradients had Infs or NaNs and potentially increasing it otherwise.
# + id="V0vHlust4Rug"
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
predictions = model(x)
loss = loss_object(y, predictions)
scaled_loss = optimizer.get_scaled_loss(loss)
scaled_gradients = tape.gradient(scaled_loss, model.trainable_variables)
gradients = optimizer.get_unscaled_gradients(scaled_gradients)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
return loss
# + [markdown] id="rcFxEjia6YPQ"
# `LossScaleOptimizer` may skip the first few steps at the start of training. The loss scale starts out high so that the optimal loss scale can be determined quickly. After a few steps, the loss scale stabilizes and very few steps are skipped. This process happens automatically and does not affect training quality.
# + [markdown] id="IHIvKKhg4Y-G"
# Now, define the test step:
#
# + id="nyk_xiZf42Tt"
@tf.function
def test_step(x):
return model(x, training=False)
# + [markdown] id="hBs98MZyhBOB"
# Load the initial weights of the model, so you can retrain from scratch:
# + id="jpzOe3WEhFUJ"
model.set_weights(initial_weights)
# + [markdown] id="s9Pi1ADM47Ud"
# Finally, run the custom training loop:
# + id="N274tJ3e4_6t"
for epoch in range(5):
epoch_loss_avg = tf.keras.metrics.Mean()
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='test_accuracy')
for x, y in train_dataset:
loss = train_step(x, y)
epoch_loss_avg(loss)
for x, y in test_dataset:
predictions = test_step(x)
test_accuracy.update_state(y, predictions)
print('Epoch {}: loss={}, test accuracy={}'.format(epoch, epoch_loss_avg.result(), test_accuracy.result()))
# + [markdown] id="d7daQKGerOFE"
# ## GPU performance tips
#
# Here are some performance tips when using mixed precision on GPUs.
#
# ### Increasing your batch size
#
# If it does not affect model quality, try running with double the batch size when using mixed precision. Because float16 tensors use half the memory, this often allows you to double the batch size without running out of memory. Increasing the batch size typically increases training throughput, i.e. the number of training elements per second your model can process.
#
# ### Ensuring GPU Tensor Cores are used
#
# As mentioned previously, modern NVIDIA GPUs use special hardware units called Tensor Cores that can multiply float16 matrices very quickly. However, Tensor Cores require certain dimensions of tensors to be multiples of 8. In the examples below, an argument can use Tensor Cores if and only if its value is a multiple of 8.
#
# - tf.keras.layers.Dense(**units=64**)
# - tf.keras.layers.Conv2d(**filters=48**, kernel_size=7, stride=3)
# - And similarly for other convolutional layers, such as tf.keras.layers.Conv3d
# - tf.keras.layers.LSTM(**units=64**)
# - And similarly for other RNNs, such as tf.keras.layers.GRU
# - tf.keras.Model.fit(epochs=2, **batch_size=128**)
#
# You should try to use Tensor Cores whenever possible. If you want to learn more, the [NVIDIA deep learning performance guide](https://docs.nvidia.com/deeplearning/sdk/dl-performance-guide/index.html) describes the exact requirements for using Tensor Cores as well as other Tensor Core-related performance information.
#
# ### XLA
#
# XLA is a compiler that can further increase mixed precision performance, as well as float32 performance to a lesser extent. Refer to the [XLA guide](https://tensorflow.google.cn/xla) for details.
# + [markdown] id="2tFDX8fm6o_3"
# ## Cloud TPU performance tips
#
# As with GPUs, you should try doubling your batch size, because bfloat16 tensors also use half the memory. Doubling the batch size may increase training throughput.
#
# TPUs do not require any other mixed-precision-specific tuning to get optimal performance. They already require the use of XLA, and they benefit from certain dimensions being multiples of $128$, but this applies equally to float32 and to mixed precision. See the [Cloud TPU performance guide](https://cloud.google.com/tpu/docs/performance-guide) for general TPU performance tips, which apply to mixed precision as well as float32 tensors.
# + [markdown] id="--wSEU91wO9w"
# ## Summary
#
# - You should use mixed precision if you use TPUs or NVIDIA GPUs with at least compute capability 7.0, as it can improve performance by up to 3x.
#
# - You can use mixed precision with the following lines:
#
# ```python
# # On TPUs, use 'mixed_bfloat16' instead
# mixed_precision.set_global_policy('mixed_float16')
# ```
#
# - If your model ends in softmax, make sure it is float32. And regardless of what your model ends in, make sure the output is float32.
# - If you use a custom training loop with `mixed_float16`, in addition to the lines above, you need to wrap your optimizer with a `tf.keras.mixed_precision.LossScaleOptimizer`. Then call `optimizer.get_scaled_loss` to scale the loss and `optimizer.get_unscaled_gradients` to unscale the gradients.
# - Double the training batch size if it does not reduce evaluation accuracy.
# - On GPUs, ensure most tensor dimensions are multiples of $8$ to maximize performance.
#
# For more examples of mixed precision using the `tf.keras.mixed_precision` API, see the [official models repository](https://github.com/tensorflow/models/tree/master/official). Most official models, such as [ResNet](https://github.com/tensorflow/models/tree/master/official/vision/image_classification) and [Transformer](https://github.com/tensorflow/models/blob/master/official/nlp/transformer), use mixed precision when `--dtype=fp16` is passed.
#
| site/zh-cn/guide/mixed_precision.ipynb |
;; -*- coding: utf-8 -*-
;; ---
;; jupyter:
;; jupytext:
;; text_representation:
;; extension: .scm
;; format_name: light
;; format_version: '1.5'
;; jupytext_version: 1.14.4
;; kernelspec:
;; display_name: Calysto Scheme 3
;; language: scheme
;; name: calysto_scheme
;; ---
;; ## SICP Exercise 3.7 Solution Summary: Joint Accounts
;; SICP Exercise 3.7 asks us to build on Exercises 3.3 and 3.4 and write a "joint account" procedure that lets accounts be chained, with one account wrapping another account as its underlying store.
;;
;; On the surface this is an exercise about joining two procedures, but at heart the authors are discussing variable reference, and what it takes for two variables to count as "the same" variable.
;;
;; For example, suppose we create acc1 and acc2 with (make-account) from Exercise 3.4. Are acc1 and acc2 the same? Before set! is introduced, acc1 and acc2 are "the same": they behave identically and we have no way to tell them apart. Once set! is introduced, however, acc1 and acc2 each carry their own state, and their balances can diverge. Although their underlying code is identical, they are two "different" accounts.
;;
;; Also, once we implement joint accounts as this exercise requires, if acc2 and acc3 both point at acc1, are acc2 and acc3 "the same" account? That is harder to answer. Financially, a withdrawal through acc2 shrinks the balance that acc3 sees, so they certainly feel like "the same" account. Yet acc2 and acc3 have different passwords, so in that sense they are not "the same" account.
;;
;; Variables in other languages raise the same question: if a2 and a3 both point at the list a1, are a2 and a3 "the same" variable?
;;
;; a1 = [1,2,3]
;; a2 = a1
;; a3 = a1
;; a2.append(4)
;; ;a3 == ?
;;
;; Thinking through questions like these deepens our understanding of variables.
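In Python, the aliasing question sketched above resolves as follows: a2 and a3 are names for the same list object, so a mutation made through a2 is visible through a3.

```python
a1 = [1, 2, 3]
a2 = a1            # a2 aliases the same list object as a1
a3 = a1            # so does a3
a2.append(4)       # mutate through one name...

print(a3)          # [1, 2, 3, 4] -- the mutation is visible through a3
print(a2 is a3)    # True -- one object, three names
```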
;; The concrete implementation for this exercise is not difficult. First, copy the code from Exercise 3.4:
(define (make-account balance account-password)
(define illegal-access-times 0)
(define (withdraw amount)
(if (>= balance amount)
(begin (set! balance (- balance amount))
balance)
"insufficient funds"))
(define (deposit amount)
(set! balance (+ balance amount))
balance)
(define (wrongpassword amount)
(if (< illegal-access-times 7)
(begin (set! illegal-access-times (+ illegal-access-times 1))
"incorrect password")
(begin
(call-the-cops)
"incorrect password (and you don't know that I will call a cop)")))
(define (call-the-cops)
(display " cops are coming"))
(define (rightpassword)
(set! illegal-access-times 0))
(define (check-password amount)
'ok)
(define (dispatch password m)
(if (not (equal? password account-password))
wrongpassword
(begin
(rightpassword)
(cond ((eq? m 'withdraw) withdraw)
((eq? m 'deposit) deposit)
((eq? m 'check-password) check-password)
(else (error "unknown request -- Make-Account"
m))))))
dispatch)
;; Then define (make-joint). The code here does not reuse the code from 3.4; nearly everything is re-implemented. It essentially wraps an account, and the wrapper checks the new joint-account password:
(define (make-joint original-account original-password joint-password)
(define illegal-access-times 0)
(define (withdraw amount)
((original-account original-password 'withdraw) amount))
(define (deposit amount)
((original-account original-password 'deposit) amount))
(define (wrongpassword amount)
(if (< illegal-access-times 7)
(begin (set! illegal-access-times (+ illegal-access-times 1))
"incorrect password")
(begin
(call-the-cops)
"incorrect password (and you don't know that I will call a cop)")))
(define (call-the-cops)
(display " cops are coming"))
(define (rightpassword)
(set! illegal-access-times 0))
(define (check-password amount)
'ok)
(define (dispatch password m)
(if (not (equal? password joint-password))
wrongpassword
(begin
(rightpassword)
(cond ((eq? m 'withdraw) withdraw)
((eq? m 'deposit) deposit)
((eq? m 'check-password) check-password)
(else (error "unknown request -- Make-Account"
m))))))
dispatch
)
;; Finally, create a peter-account and join it to joint-account-1 and joint-account-2.
;;
;; We can then see that joint-account-1 and joint-account-2 are connected: withdrawing from joint-account-1 also reduces the balance seen through joint-account-2:
(define peter-account (make-account 1000 'peter-password))
(define joint-account-1 (make-joint peter-account 'peter-password 'joint-account-p1))
(define joint-account-2 (make-joint peter-account 'peter-password 'joint-account-p2))
((joint-account-1 'joint-account-p1 'withdraw) 10)
((joint-account-2 'joint-account-p2 'withdraw) 10)
| cn/.ipynb_checkpoints/sicp-3-07-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/jacheung/auto-curator/blob/master/autocurator_CNN_v1_5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="WBEOUJuANrQK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="10d44f18-c46a-422f-acfa-fc2fba8b4d83"
# Import necessary libraries and set up image libraries in Google drive
import numpy as np
import scipy.io
import tensorflow as tf
from tensorflow import keras
from sklearn.utils import class_weight
import sklearn.model_selection as ms
from sklearn.metrics import roc_curve
from google.colab import drive
import glob
import matplotlib.pyplot as plt
from psutil import virtual_memory
import sys
import time
import dill
import shelve
from keras.preprocessing.image import ImageDataGenerator
# Mount google drive and set up directories
drive.mount('/content/gdrive')
base_dir = "/content/gdrive/My Drive/Colab data/trialFramesNPY/"
model_save_dir = "/content/gdrive/My Drive/Colab data/model_iterations/"
# grab images and labels names
frame_ind_files = glob.glob(base_dir + "*_frameIndex.mat")
T_class = glob.glob(base_dir + "*touchClass.mat")
frames = glob.glob(base_dir + "*dataset.mat")
base_dir = "/content/gdrive/My Drive/Colab data/aug50_real1_100realImsPerFile2/"
aug_files = glob.glob(base_dir + "*.h5")
# + [markdown] id="dJzQ9d1ZzXyX" colab_type="text"
# # 1) Data cleaning
#
# + id="hr5IFpVOwh10" colab_type="code" colab={}
mem = virtual_memory()
mem_free = np.round(mem.free/1024**3, 2)
tot_mem = np.round(mem.total/1024**3, 2)
print(str(mem_free) + ' of ' + str(tot_mem) + ' GB of mem')
aug_files
# + [markdown] id="d0XkvNfc1-yf" colab_type="text"
# ## 1.1 Matching frames and labels
# + id="aXFWPKw6on2f" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="e72b6d4a-b57f-4ed4-afb0-7fb28a7c4bac"
# Trim frames and labels names to make the names the same to match them
frame_nums = []
T_class_nums = []
frame_inds = []
for i in range(len(frames)):
frame_nums.append(frames[i][1:-11])
for i in range(len(T_class)):
T_class_nums.append(T_class[i][1:-14])
for i in range(len(frame_ind_files)):
frame_inds.append(frame_ind_files[i][1:-14])
print(frame_inds)
# + id="nHk8StOgAPbm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="c76c8910-1b6a-439f-d993-8ee82f88d030"
# # rearrange frame_nums so that they are in order just for convenience
import re
def extract_between(s, str1, str2):
result = re.search(str1 + '(.*)' + str2, s)
return result.group(1)
tmp1 = [int(extract_between(k, '000000-', '_')) for k in frame_nums]
sorted_inds = np.argsort(tmp1)
print(np.max(sorted_inds))
print(np.shape(frame_nums))
frame_nums = [frame_nums[sorted_inds[k]] for k, strings in enumerate(frame_nums)]
frames = [frames[sorted_inds[k]] for k, strings in enumerate(frames)]
tmp1 = [int(extract_between(k, '000000-', '_')) for k in frame_nums]
print(tmp1)
tmp1 = [int(extract_between(k, '000000-', '_')) for k in frames]
print(tmp1)
# print(frame_nums)
# print(frames)
# for k, strings in enumerate(frame_nums):
# print(k)
# print(strings)
# print(sorted_inds[k])
# frame_nums = strings[sorted_inds[k]]
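As a quick sanity check, `extract_between` can be exercised on a made-up file name (the name below is hypothetical, purely for illustration; the helper is repeated so the example is self-contained):

```python
import re

# Same helper as defined above.
def extract_between(s, str1, str2):
    result = re.search(str1 + '(.*)' + str2, s)
    return result.group(1)

name = "trial-000000-42_dataset.mat"              # hypothetical file name
print(extract_between(name, '000000-', '_'))      # -> '42'
```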
# + id="L_JezT3C_6EE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="e32d2bfc-1b7e-4fae-fae8-645f74996e60"
# Match and reorder all frames and label files
indices = []
T_class_reordered = []
for k in range(len(frame_nums)):
indices.append([i for i, s in enumerate(T_class_nums) if frame_nums[k] == s])
indices = [x for x in indices if x != []]
for k in range(len(indices)):
T_class_reordered.append(T_class[indices[k][0]])
#################
indices = []
frame_inds_reordered = []
for k in range(len(frame_nums)):
indices.append([i for i, s in enumerate(frame_inds) if frame_nums[k] == s])
indices = [x for x in indices if x != []]
for k in range(len(indices)):
frame_inds_reordered.append(frame_ind_files[indices[k][0]])
#############
# test that this is matched
tmp1 = [int(extract_between(k, '000000-', '_')) for k in T_class_reordered]
print(tmp1)
tmp1 = [int(extract_between(k, '000000-', '_')) for k in frame_nums]
print(tmp1)
tmp1 = [int(extract_between(k, '000000-', '_')) for k in frame_inds_reordered]
print(tmp1)
tmp1 = [int(extract_between(k, '000000-', '_')) for k in frames]
print(tmp1)
# + id="xGem5Qao1cYz" colab_type="code" colab={}
# load in labels
# each file represents a trial, so this could be anywhere between 1 and 4000 data points,
# most often somewhere between 200 and 600
raw_Y_set = []
frame_num_in_Y_set = []
for cnt1 in range(len(frames)):
tmp2 = scipy.io.loadmat(T_class_reordered[cnt1])
raw_Y_set.append(tmp2['touchClass'])
frame_num_in_Y_set.append(len(raw_Y_set[cnt1]))
# + [markdown] id="nX1TKm3-yQrB" colab_type="text"
# ## 1.2 Build Keras Image Generator
#
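Before the full generator below, here is the minimal contract a `keras.utils.Sequence` must satisfy, shown framework-free with hypothetical shapes: `__len__` gives the number of batches per epoch and `__getitem__` returns one `(X, Y)` batch.

```python
import numpy as np

# Sketch of the batching contract a keras.utils.Sequence must satisfy
# (framework-free stand-in; the real class below subclasses keras.utils.Sequence)
class MinimalGenerator:
    def __init__(self, x, y, batch_size):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):  # number of batches per epoch
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):  # one (X, Y) batch
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[sl], self.y[sl]

gen = MinimalGenerator(np.zeros((10, 8)), np.zeros(10), batch_size=4)
print(len(gen), gen[2][0].shape)  # 3 batches; the last one holds 2 samples
```

Keras calls `__getitem__` once per batch index during `fit`, so the heavy lifting (loading .mat files, normalizing) can live there instead of in memory up front.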
# + id="JSWi7dJzeIDy" colab_type="code" colab={}
# # test_img = dmatx[50, :, :, :]
# # num_aug_ims = 100
# # tmp1 = fux_wit_imgs(num_aug_ims, test_img)
# def fux_wit_imgs(num_aug_ims, test_img):
# datagen = ImageDataGenerator(rotation_range=360, #
# width_shift_range=.07, #
# height_shift_range = .07, #
# shear_range = 30,#
# zoom_range = .24,
# brightness_range=[0.75,1.25])#
# samples = np.expand_dims(test_img, 0)
# # prepare iterator
# it = datagen.flow(samples, batch_size=1)
# all_augment = samples
# for i in range(num_aug_ims):##
# # generate batch of images
# batch = it.next()
# # convert to unsigned integers for viewing
# image = batch[0].astype('uint8')
# # print(np.shape(all_augment))
# # print(np.shape(np.expand_dims(image, 0)))
# all_augment = np.append(all_augment, np.expand_dims(image, 0), 0)
# np.shape(all_augment)
# return all_augment
# + id="429cn4jieNgQ" colab_type="code" colab={}
class My_Custom_Generator(keras.utils.Sequence) :
def __init__(self, file_trial_list, file_Y_list, num_in_each, batch_size, to_fit) :
cnt = 0
extract_inds = []
# num_in_each contains the number of frames in each file I am loading, ie
# for trial/file 1 there are 200 frames , trial/file 2 has 215 frames etc
for k, elem in enumerate(num_in_each):
    # cumulative number of frames across the files being considered for this batch
    tot_frame_nums = sum(num_in_each[cnt: k + 1])
    if tot_frame_nums > batch_size or k == len(num_in_each) - 1:  # these files together
        # fill (or finish) a batch, so record them as one chunk
        extract_inds.append([cnt, k + 1])
        cnt = k + 1  # reset to the current iteration
        if tot_frame_nums > batch_size and np.diff(extract_inds[-1]) > 1:
            # the last file pushed us over the frame limit, so defer it to the next chunk
            extract_inds[-1][-1] = extract_inds[-1][-1] - 1
            cnt = cnt - 1
if cnt < len(num_in_each):
    # any files deferred out of the final chunk form one last batch so none are dropped
    extract_inds.append([cnt, len(num_in_each)])
file_list_chunks = []
file_Y_list_chunks = []
for i, ii in enumerate(extract_inds):
file_list_chunks.append(file_trial_list[ii[0]:ii[1]])
file_Y_list_chunks.append(file_Y_list[ii[0]:ii[1]])
self.to_fit = to_fit #set to True to return XY and False to return X
self.file_trial_list = file_trial_list
self.file_Y_list = file_Y_list
self.batch_size = batch_size
self.extract_inds = extract_inds
self.num_in_each = num_in_each
self.file_list_chunks = file_list_chunks
self.file_Y_list_chunks = file_Y_list_chunks
def __len__(self) :
return len(self.extract_inds)
def __getitem__(self, num_2_extract) :
# raw_X, raw_Y = self._build_data(self.file_list_chunks[num_2_extract],
# self.file_Y_list_chunks[num_2_extract])
raw_X = self._generate_X(self.file_list_chunks[num_2_extract])
rgb_batch = np.repeat(raw_X[..., np.newaxis], 3, -1)
IMG_SIZE = 96 # All images will be resized to 96x96, the MobileNetV2 input size
rgb_tensor = tf.cast(rgb_batch, tf.float32) # convert to tf tensor with float32 dtypes
rgb_tensor = (rgb_tensor/127.5) - 1 # /127.5 = 0:2, -1 = -1:1 requirement for mobilenetV2
rgb_tensor = tf.image.resize(rgb_tensor, (IMG_SIZE, IMG_SIZE)) # resizing
self.IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
rgb_tensor_aug = rgb_tensor
# print(len(raw_Y))
# for i, ims in enumerate(rgb_tensor):
# # print(i)
# tmp1 = fux_wit_imgs(20, ims)
# rgb_tensor_aug = np.append(rgb_tensor_aug, tmp1, 0)
if self.to_fit:
raw_Y = self._generate_Y(self.file_Y_list_chunks[num_2_extract])
return rgb_tensor_aug, raw_Y
else:
return rgb_tensor_aug
# def _getitem__tmp(self, touch_aug_num, no_touch_aug_num)
def get_single_trials(self, num_2_extract) :
# raw_X, raw_Y = self._build_data([self.file_trial_list[num_2_extract]],
# [self.file_Y_list[num_2_extract]])
raw_X = self._generate_X(self.file_list_chunks[num_2_extract])
raw_Y = self._generate_Y(self.file_Y_list_chunks[num_2_extract])
# frame_index loading disabled: self.frame_ind_list is never set in __init__,
# and frame_index is not used in the values returned below
# frame_index = scipy.io.loadmat(self.frame_ind_list[num_2_extract])
# frame_index = frame_index['relevantIdx']
# frame_index = frame_index[0]
rgb_batch = np.repeat(raw_X[..., np.newaxis], 3, -1)
IMG_SIZE = 96 # All images will be resized to 96x96, the MobileNetV2 input size
rgb_tensor = tf.cast(rgb_batch, tf.float32) # convert to tf tensor with float32 dtypes
rgb_tensor = (rgb_tensor/127.5) - 1 # /127.5 = 0:2, -1 = -1:1 requirement for mobilenetV2
rgb_tensor = tf.image.resize(rgb_tensor, (IMG_SIZE, IMG_SIZE)) # resizing
self.IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
rgb_tensor_aug = rgb_tensor
# print(len(raw_Y))
# for i, ims in enumerate(rgb_tensor):
# print(i)
# tmp1 = fux_wit_imgs(20, ims)
# rgb_tensor_aug = np.append(rgb_tensor_aug, tmp1, 0)
return rgb_tensor_aug, raw_Y
# return rgb_tensor, raw_Y, frame_index#, trial_file_num
# Function to generate an image tensor and corresponding label array
def _build_data(self, x_files, y_files) :
"""Phil's original routine that built X and Y together; split into _generate_X and _generate_Y. Delete ASAP."""
cnt1 = -1
for k in range(len(y_files)):
cnt1 = cnt1 + 1
tmp1 = scipy.io.loadmat(x_files[cnt1])
tmp2 = scipy.io.loadmat(y_files[cnt1])
Xtmp = tmp1['finalMat']
Ytmp = tmp2['touchClass']
if cnt1==0:
raw_X = Xtmp
raw_Y = Ytmp
else:
raw_X = np.concatenate((raw_X,Xtmp), axis=0)
raw_Y = np.concatenate((raw_Y,Ytmp), axis=0)
return raw_X, raw_Y
def _generate_X(self, x_files) :
cnt1 = -1
for k in range(len(x_files)):
cnt1 = cnt1 + 1
tmp1 = scipy.io.loadmat(x_files[cnt1])
Xtmp = tmp1['finalMat']
if cnt1==0:
raw_X = Xtmp
else:
raw_X = np.concatenate((raw_X,Xtmp), axis=0)
return raw_X
def _generate_Y(self, y_files) :
cnt1 = -1
for k in range(len(y_files)):
cnt1 = cnt1 + 1
tmp2 = scipy.io.loadmat(y_files[cnt1])
Ytmp = tmp2['touchClass']
if cnt1==0:
raw_Y = Ytmp
else:
raw_Y = np.concatenate((raw_Y,Ytmp), axis=0)
return raw_Y
def plot_batch_distribution(self):
# randomly select a batch and generate images and labels
batch_num = np.random.choice(np.arange(0, len(self.file_list_chunks)))
samp_x, samp_y = self.__getitem__(batch_num)
# look at the distribution of classes
plt.pie([1 - np.mean(samp_y), np.mean(samp_y)],
labels=['non-touch frames', 'touch frames'], autopct='%1.1f%%', )
plt.title('class distribution from batch ' + str(batch_num))
plt.show()
# generate indices for positive and negative classes
images_to_sample = 20
neg_class = [i for i, val in enumerate(samp_y) if val == 0]
pos_class = [i for i, val in enumerate(samp_y) if val == 1]
neg_index = np.random.choice(neg_class, images_to_sample)
pos_index = np.random.choice(pos_class, images_to_sample)
# plot sample positive and negative class images
plt.figure(figsize=(10, 10))
for i in range(images_to_sample):
plt.subplot(5, 10, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
_ = plt.imshow(image_transform(samp_x[neg_index[i]]))
plt.xlabel('0')
plt.subplot(5, 10, images_to_sample + i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image_transform(samp_x[pos_index[i]]))
plt.xlabel('1')
plt.suptitle('sample images from batch ' + str(batch_num))
plt.show()
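The trickiest part of the generator is how whole files are grouped into batches. Here is a simplified, standalone sketch of that chunking idea (the frame counts are made up for illustration, and the edge-case handling may differ slightly from the class above): files accumulate into a chunk until their cumulative frame count would exceed `batch_size`.

```python
# Standalone sketch of the file-chunking idea used by My_Custom_Generator:
# group whole files into a batch until their total frame count exceeds batch_size
def chunk_files(num_in_each, batch_size):
    extract_inds, cnt = [], 0
    for k in range(len(num_in_each)):
        if sum(num_in_each[cnt:k + 1]) > batch_size and k > cnt:
            # close the chunk just before the file that overflows the budget
            extract_inds.append([cnt, k])
            cnt = k
    # whatever is left forms the final chunk
    extract_inds.append([cnt, len(num_in_each)])
    return extract_inds

# e.g. trials with 200, 215, 180, 400, 50 frames and a 500-frame budget
print(chunk_files([200, 215, 180, 400, 50], 500))  # [[0, 2], [2, 3], [3, 5]]
```

Each `[start, stop)` pair indexes into the file list, so every file lands in exactly one batch and no trailing file is silently dropped.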
# + id="Qa2BV1BoRDJM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="94fc7af7-f287-476a-fc41-23cd4cf0fe47"
# Data splits to train/test/validation sets
# *** need to get actual data split here
batch_size = 2000
validate_fraction = .3
# for now we will split based on number of files, not number of frames, because it's easier and shouldn't make
# too much of a difference -- can fix later if we care to
mixed_inds = np.random.choice(len(frames), len(frames), replace=False)
validate_count = round(validate_fraction*len(frames))
T_inds = mixed_inds[validate_count:]
# T_inds = [frames[k] for k in T_inds]
v_inds = mixed_inds[0:validate_count]
# v_inds = [frames[k] for k in v_inds]
my_training_batch_generator = My_Custom_Generator([frames[k] for k in T_inds],
[T_class_reordered[k] for k in T_inds],
[frame_num_in_Y_set[k] for k in T_inds],
batch_size,
to_fit = True)
my_validation_batch_generator = My_Custom_Generator([frames[k] for k in v_inds],
[T_class_reordered[k] for k in v_inds],
[frame_num_in_Y_set[k] for k in v_inds],
batch_size,
to_fit = True)
my_test_batch_generator = My_Custom_Generator([frames[k] for k in v_inds],
[],
[frame_num_in_Y_set[k] for k in v_inds],
batch_size,
to_fit = False)
print(len(frames))
# + [markdown] id="uAQfCvkC2uPD" colab_type="text"
# #2) Exploratory Data Analysis
#
# We're going to take a look at the distribution of classes and some sample images in randomly selected batches to ensure data quality.
# + id="HX-pqjwr6wrz" colab_type="code" colab={}
# image transform from [-1 1] back to [0 255] for imshow
def image_transform(x):
image = tf.cast((x + 1) * 127.5, tf.uint8)
return image
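`image_transform` inverts the MobileNetV2 scaling applied inside the generator. A pure-NumPy round-trip check of the two mappings (forward: `[0, 255] -> [-1, 1]`; inverse: `[-1, 1] -> [0, 255]`):

```python
import numpy as np

# forward scaling used in __getitem__, then the inverse used by image_transform
pixels = np.array([0.0, 127.5, 255.0])
scaled = pixels / 127.5 - 1      # -> [-1., 0., 1.], the MobileNetV2 input range
restored = (scaled + 1) * 127.5  # -> [0., 127.5, 255.], back to display range
print(scaled, restored)
```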
# + id="ISto1VtxD5sR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 264} outputId="10adde14-a79f-405b-9713-dd51d6114a9e"
# population distribution of data
total_touch_frames = np.sum([np.sum(k) for k in raw_Y_set])
total_non_touch_frames = np.sum([np.sum(k==0) for k in raw_Y_set])
total_frames = np.sum(frame_num_in_Y_set)
population = np.array([total_non_touch_frames,total_touch_frames]) / total_frames
plt.pie(population,
labels=['non-touch frames', 'touch frames'], autopct='%1.1f%%',)
plt.title('class distribution across population (n=' + str(total_frames) + ' frames)')
plt.show()
# + id="mZgvKS5GQtAi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 772} outputId="2541b968-38e1-4325-96cd-f01976551991"
# look at distribution of data and some sample images
my_training_batch_generator.plot_batch_distribution()
# + [markdown] id="TQKE1y9YBVyQ" colab_type="text"
# # 3) Feature engineering?
#
# + id="L0LNCqs96cO2" colab_type="code" colab={}
# Y vectorization and class weight calculation
to_del = 0
start = time.time()
cnt1 = -1
mem_free = 9999
y_files = my_training_batch_generator.file_Y_list
for k in range(len(y_files)):
cnt1 = cnt1 + 1
tmp2 = scipy.io.loadmat(y_files[cnt1])
Ytmp = tmp2['touchClass']
if cnt1==0:
raw_Y_2 = Ytmp
else:
raw_Y_2 = np.concatenate((raw_Y_2,Ytmp), axis=0)
# + [markdown] id="JIWvXzxw37a7" colab_type="text"
# # 4) Deploy and selection of base model
# In this section we're going to use MobileNetV2 as the base model.
# We're going to run two variations of the model:
# a. the base model with frozen layers and a new output classifier
# b. the base model with its final ~100 layers unfrozen to optimize prediction
#
# + id="xOfFjdZbqXoS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 377} outputId="199ad65a-cbd3-460f-c4ec-65542823899a"
# Create base model
# First, instantiate a MobileNet V2 model pre-loaded with weights trained on ImageNet. By specifying the include_top=False argument,
# you load a network that doesn't include the classification layers at the top, which is ideal for feature extraction
# Create the base model from the pre-trained model MobileNet V2
IMG_SIZE = 96 # All images will be resized to 96x96. This is the size of MobileNetV2 input sizes
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
base_model.trainable = False
feature_batch = base_model.output
print(feature_batch.shape)
# Adding Classification head
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)
prediction_layer = tf.keras.layers.Dense(1, activation='sigmoid')
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)
# Model Stacking
model = tf.keras.Sequential([
base_model,
global_average_layer,
prediction_layer
])
print(model.summary())
# Compile model with specific metrics
# Metrics below are for evaluating imbalanced datasets
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name = 'auc')
]
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate),
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=METRICS)
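The metrics compiled above are all combinations of the confusion-matrix counts, which is why they suit an imbalanced touch/non-touch split better than plain accuracy. A quick illustration with hypothetical counts (not from this model):

```python
# Precision and recall from confusion-matrix counts (illustrative numbers)
tp, fp, fn = 80, 20, 10
precision = tp / (tp + fp)  # of predicted touches, how many were real
recall = tp / (tp + fn)     # of real touches, how many were caught
print(precision, recall)    # 0.8 and ~0.889
```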
# + colab_type="code" id="gt0G74FsB6_i" colab={"base_uri": "https://localhost:8080/", "height": 768} outputId="99a49e0c-5399-43f6-cd57-48c82acd7cdd"
start = time.time()
# Fit model with a couple parameters
EPOCHS = 40
# Class imbalance weighting
rebalance = class_weight.compute_class_weight(class_weight='balanced',
                                              classes=np.array([0, 1]), y=raw_Y_2.flatten())
class_weights = {i : rebalance[i] for i in range(2)}
# Early stopping
callbacks = [keras.callbacks.EarlyStopping (monitor = 'val_loss',
patience = 2)]
history = model.fit(my_training_batch_generator, epochs=EPOCHS,
validation_data= my_validation_batch_generator,
callbacks = callbacks,
class_weight = class_weights)
total_seconds = time.time() - start
print('total run time: ' + str(round(total_seconds/60)) + ' minutes')
todays_version = time.strftime("%Y%m%d", time.gmtime())
end_dir = model_save_dir + '/' + 'cp-final-' + todays_version +'.ckpt'
model.save_weights(end_dir)
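sklearn's `'balanced'` mode boils down to `weight_c = n_samples / (n_classes * count_c)`, so the rarer touch class gets a proportionally larger weight in the loss. A small standalone check with a hypothetical label vector:

```python
import numpy as np

# 'balanced' class weights by hand: n_samples / (n_classes * count_per_class)
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])  # 80% non-touch, 20% touch (made up)
counts = np.bincount(y)
weights = len(y) / (2 * counts)
print(dict(enumerate(weights)))  # {0: 0.625, 1: 2.5}
```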
# + id="Z0H_uQQFU8Zo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="fdac0a26-d5ae-4080-e43b-11398642b24f"
latest = tf.train.latest_checkpoint(model_save_dir)
model.load_weights(latest)
# model.save('/content/gdrive/My Drive/Colab data/model_200906_400_000_imgs_2.h5')
# model.load_weights('/content/gdrive/My Drive/Colab data/model_200906_400_000_imgs.h5')
# + [markdown] id="ob4GCg-USBl_" colab_type="text"
# ## 4.1) Model learning evaluation
#
# Here we'll look at metrics of loss, AUC, precision, and recall across epochs of learning
#
#
# + id="TMGd5RW2VDaz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 509} outputId="c0a02a76-b3b3-4834-ffd5-0b8f85379365"
# Overall model evaluation
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
def plot_metrics(history):
metrics = ['loss', 'auc', 'precision', 'recall']
fig = plt.figure(figsize=(8, 7))
plt.rcParams.update({'font.size':12})
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1.1])
plt.legend()
plt.tight_layout()
plot_metrics(history)
# + id="Z2ei_oK9x3SV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 460} outputId="054403e6-60a3-47d6-b2e6-4394e1a5a88f"
# Confusion matrix last epoch
def plot_confusion_matrix(history, epoch):
fig = plt.figure(figsize = (6,6))
plt.rcParams.update({'font.size':14})
plt.tight_layout()
total_samples= history.history['tp'][epoch] + history.history['fp'][epoch] + history.history['tn'][epoch] + history.history['fn'][epoch]
values = np.array([[history.history['tp'][epoch], history.history['fp'][epoch]],
[history.history['fn'][epoch], history.history['tn'][epoch]]]) / total_samples
for i in range(2):
for j in range(2):
text = plt.text(j, i, round(values[i, j],2),
ha="center", va="center", color="w")
im = plt.imshow(values,cmap='bone',vmin=0, vmax=1)
plt.yticks([0,1],labels=['Pred O', 'Pred X'])
plt.xticks([0,1],labels = ['True O', 'True X'],rotation=45)
plt.title('Final epoch performance')
plt.show()
return values
plot_confusion_matrix(history,-1)
# + id="2wvNNdWv3BeC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 367} outputId="aed33888-51a6-4893-8087-176445bda135"
# load all validation data and get distribution of probability differences
accumulator = np.array([])
for batch_num in np.arange(0,len(my_training_batch_generator.file_list_chunks)):
dmatx, dmaty = my_training_batch_generator.__getitem__(batch_num)
# mem_breakdown_and_big_vars(locals()) #check to see memory consumption
# predict using the trained model and calculate difference from target
# prob_diff: positive values are false negatives, negative values are false positives
predy = model.predict(dmatx)
prob_diff = dmaty - predy
accumulator = np.concatenate((accumulator,prob_diff[:,0]))
plt.figure(figsize=(6,5))
plt.xlim([-1, 1])
plt.xticks(np.linspace(-1,1,5))
plt.xlabel('FP -------TN--------TP-------- FN')
plt.ylabel('Number of images')
plt.title('Training set predictions')
l = plt.hist(accumulator,bins=np.linspace(-1,1,17))
for k in np.array([-.5,0,.5]):
plt.plot([k, k],[0, max(l[0])*1.2],color='k',linestyle='--')
plt.ylim([0, max(l[0])*1.2])
plt.show()
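The `prob_diff` quantity plotted above is simply `y_true - y_pred`: values near +1 are confident misses (false negatives), values near -1 are confident false alarms (false positives), and values near 0 are correct. A toy trace with made-up predictions:

```python
# prob_diff = y_true - y_pred for a few hypothetical frames
y_true = [1, 0, 1, 0]
y_pred = [0.1, 0.9, 0.8, 0.2]
prob_diff = [round(t - p, 2) for t, p in zip(y_true, y_pred)]
print(prob_diff)  # [0.9, -0.9, 0.2, -0.2]: a strong FN, a strong FP, two near-hits
```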
# + id="z6Ksp9A2WTb7" colab_type="code" colab={}
# ROC Analysis
def calculate_roc(batch_generator):
y_pred = np.array([])
y_true = np.array([])
for batch_num in np.arange(0,len(batch_generator.file_list_chunks)):
dmat_x, dmat_y = batch_generator.__getitem__(batch_num)
# predict using the trained model and calculate difference from target
# prob_diff: positive values are false negatives, negative values are false positives
pred_y = model.predict(dmat_x)
y_true = np.concatenate((y_true, dmat_y[:,0]))
y_pred = np.concatenate((y_pred,pred_y[:,0]))
fpr, tpr, thresholds = roc_curve(y_true, y_pred)
return fpr, tpr, thresholds
train_fp, train_tp, train_thresh = calculate_roc(my_training_batch_generator)
val_fp, val_tp, val_thresh = calculate_roc(my_validation_batch_generator)
fig,axs = plt.subplots(2,1,figsize=(4,6))
axs[0].plot(train_fp,train_tp,color = 'b',label = 'train')
axs[0].plot(val_fp,val_tp, color = 'b', linestyle="--", label = 'val')
axs[0].set_xlabel('False positive rate')
axs[0].set_ylabel('True positive rate')
axs[1].plot(train_fp, train_tp, color='b', label='train')
axs[1].plot(val_fp, val_tp, color='b', linestyle="--", label='val')
axs[1].set_ylim(.9, 1.01)
axs[1].set_xlim(0, .1)
axs[1].set_xticks([0, .1])
axs[1].set_yticks([.9, 1])
axs[1].set_title('zoomed')
axs[1].set_xlabel('False positive rate')
axs[1].set_ylabel('True positive rate')
plt.tight_layout()
plt.legend()
plt.show()
# + [markdown] id="C4qjuZs-T_9Z" colab_type="text"
# ## 4.2) Model evaluation of failed images
#
# Here we'll plot the most extreme False Positive and False Negative images in a batch (16 of each). This will help us see what the extreme cases of false negatives and false positives look like.
# + id="ZvyS7RL2T-VZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 631} outputId="fa1f09d3-1f8c-49b3-8379-f4efd54533a9"
# load a batch of data
batch_num = np.random.choice(np.arange(0,len(my_validation_batch_generator.file_list_chunks)))
dmatx, dmaty = my_validation_batch_generator.__getitem__(batch_num)
# predict using the trained model and calculate difference from target
# prob_diff: positive values are false negatives, negative values are false positives
predy = model.predict(dmatx)
prob_diff = dmaty - predy
# sorted indices and values for plotting
idx = np.argsort(prob_diff.flatten())
values = np.sort(prob_diff.flatten()).round(2)
images_to_sample = 16
plt.figure(figsize=(10,10))
for i in range(images_to_sample):
plt.subplot(4, images_to_sample//2, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
_ = plt.imshow(image_transform(dmatx[idx[i]]))
plt.xlabel('FP ' + str(values[i]))
plt.subplot(4, images_to_sample//2, images_to_sample+i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image_transform(dmatx[idx[-i -1]]))
plt.xlabel('FN ' + str(values[-i -1]))
plt.suptitle('Validation batch number ' + str(batch_num))
plt.show()
# + [markdown] id="FEIdWwHA5ELy" colab_type="text"
# # 5) Hyperparameter tuning
# Here we'll loosen up a couple of the top layers for training to see if we can boost performance
#
# "In most convolutional networks, the higher up a layer is, the more specialized it is. The first few layers learn very simple and generic features that generalize to almost all types of images. As you go higher up, the features are increasingly more specific to the dataset on which the model was trained. The goal of fine-tuning is to adapt these specialized features to work with the new dataset, rather than overwrite the generic learning"
# + colab_type="code" id="gHqPAt_944CI" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="e53671ac-9040-4748-9bd0-0a4171b3692d"
# Fine-tuning model by unfreezing layers and allowing them to be trainable
model.trainable = True
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))
# Fine-tune from this layer onwards
fine_tune_at = 50
# Freeze all the layers before the `fine_tune_at` layer. This must be done on
# base_model.layers: the stacked `model` only has three top-level layers
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False
# Compile model with specific metrics
# Metrics below are for evaluating imbalanced datasets
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name = 'auc')
]
# compile model with a much slower learning rate
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=base_learning_rate/10),
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=METRICS)
# + colab_type="code" id="7mTLXuRr44CO" colab={"base_uri": "https://localhost:8080/", "height": 158} outputId="26dfd2f6-9bc0-4945-908d-e19dba6ba665"
start = time.time()
# Fit model with a couple parameters
EPOCHS = 20
# Class imbalance weighting
rebalance = class_weight.compute_class_weight(class_weight='balanced',
                                              classes=np.array([0, 1]), y=raw_Y_2.flatten())
class_weights = {i : rebalance[i] for i in range(2)}
# Early stopping
callbacks = [keras.callbacks.EarlyStopping (monitor = 'val_loss',
patience = 2)]
history = model.fit(my_training_batch_generator, epochs=EPOCHS,
validation_data= my_validation_batch_generator,
callbacks = callbacks,
class_weight = class_weights)
total_seconds = time.time() - start
print('total run time: ' + str(round(total_seconds/60)) + ' minutes')
# + [markdown] id="wwlEuSCRBASZ" colab_type="text"
# # 6) Test set on model
#
# + id="GfjFdk08BGBo" colab_type="code" colab={}
predictions = model.predict(my_test_batch_generator)
# + [markdown] id="fR4oHmidBh-X" colab_type="text"
# # ---- DONE ----
# + id="HNTIl3CGQR8t" colab_type="code" colab={}
test_img = dmatx[50:52, :, :, :]
test_img = (test_img+1)/2*255
# example of random rotation image augmentation
from keras.preprocessing.image import ImageDataGenerator
# load the image
data = test_img
# # convert to numpy array
# data = img_to_array(img)
# expand dimension to one sample
print(np.shape(data))
samples = np.expand_dims(data, 0)
print(np.shape(samples))
samples = data
print(np.shape(samples))
# create image data augmentation generator
datagen = ImageDataGenerator(rotation_range=360, #
width_shift_range=.07, #
height_shift_range = .07, #
shear_range = 30,#
zoom_range = .24,
brightness_range=[0.75,1.25])#
# prepare iterator
it = datagen.flow(samples, batch_size=1)
# generate samples and plot
plt.figure(figsize=(20,20))
for i in range(50):
# define subplot
plt.subplot(5, 10, i+1)
# generate batch of images
batch = it.next()
# convert to unsigned integers for viewing
image = batch[0].astype('uint8')
# plot raw pixel data
plt.imshow(image)
# show the figure
plt.show()
np.shape(image)
# + id="cLjTjbWhOcQM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 385} outputId="a2b93c0f-354d-47c1-df5c-d8610a8d7f44"
# print(np.shape())
# print(96/2)
test_img = dmatx[39, :, :, :]
test_img = (test_img+1)/2
# print(np.max(test_img))
# print(np.min(test_img))
images_to_sample = 12
plt.figure(figsize=(10,10))
for i in range(images_to_sample):
plt.subplot(4, images_to_sample//2, i+1)
tmp1 = tf.keras.preprocessing.image.random_rotation(
test_img, 1, row_axis=0, col_axis=1, channel_axis=2, fill_mode='nearest', cval=0.0,
interpolation_order=1
)  # row/col/channel args are array axes of an HWC image, not pixel coordinates
if i == 0:
plt.imshow(test_img)
else:
plt.imshow(tmp1)
# + id="EB4nUtViNqeL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 285} outputId="403df52f-3e9a-4509-d55c-94bcc3e4b814"
plt.imshow(test_img)
# + id="H6LbxFBmJOZ8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 232} outputId="5ca0dc12-93f0-4ee9-96c4-5b21c2552be2"
data_augmentation = keras.Sequential([
layers.experimental.preprocessing.RandomRotation(0.25),
])
# augmented_image = data_augmentation(tf.expand_dims(img, 0), training=True)
# show(img,augmented_image[0].numpy())
# + id="Xp7gZPwWT7lJ" colab_type="code" colab={}
class_predict = []
is_val_data = []
for n, trial_name in enumerate(my_validation_batch_generator.file_trial_list):
# print(n)
dmatx, dmaty = my_validation_batch_generator.get_single_trials(n)
tmp1= model.predict(dmatx)
class_predict.append(tmp1)
is_val_data.append(1)
for n, trial_name in enumerate(my_training_batch_generator.file_trial_list):
# print(n)
dmatx, dmaty = my_training_batch_generator.get_single_trials(n)
tmp1= model.predict(dmatx)
class_predict.append(tmp1)
is_val_data.append(0)
# + id="TJv1H6lzT_Ky" colab_type="code" colab={}
all_files = my_validation_batch_generator.file_trial_list + my_training_batch_generator.file_trial_list
tmp1 = [all_files, class_predict, is_val_data]
scipy.io.savemat('/content/gdrive/My Drive/Colab data/all_pred_200828_1.mat', mdict={'my_list': tmp1})
# + id="e2QOurWewemK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 415} outputId="2473cbf1-4fe9-4ccf-98fd-01c134c19ee7"
filename='/content/gdrive/My Drive/Colab data/allSaveData200828_2.out'
my_shelf = shelve.open(filename,'n') # 'n' for new
# dont_save_vars = ['exit', 'get_ipython']
dont_save_vars = ['In', 'Out', '_', '_1', '__', '___', '__builtin__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', '_dh', '_i', '_i1', '_i2', '_i3', '_i4', '_ih', '_ii', '_iii', '_oh', '_sh', 'exit', 'get_ipython', 'quit']
for key in dir():
if all([key != k for k in dont_save_vars]):
try:
my_shelf[key] = globals()[key]
except TypeError:
#
# __builtins__, my_shelf, and imported modules can not be shelved.
#
print('ERROR shelving: {0}'.format(key))
# print('ERROR')
my_shelf.close()
# + id="lMWoIMLMXjBl" colab_type="code" colab={}
#pip install dill --user
filename = '/content/gdrive/My Drive/Colab data/globalsave.pkl'
dill.dump_session(filename)
# and to load the session again:
dill.load_session(filename)
# + id="YJ9Hizfy_bkH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 375} outputId="4dcfb022-8f1d-4944-f635-34a1519a0d56"
filename = 'globalsave.pkl'
dill.dump_session(filename)
# and to load the session again:
dill.load_session(filename)
# + id="V0t9nMCUmmcr" colab_type="code" colab={}
model.save('/content/gdrive/My Drive/Colab data/model_200828_1.h5')
# + id="zzV8kS5BZo3n" colab_type="code" colab={}
class My_Custom_Generator(keras.utils.Sequence) :
def __init__(self, file_trial_list, file_Y_list, num_in_each, batch_size, to_fit) :
cnt = 0
extract_inds = []
# num_in_each contains the number of frames in each file I am loading, ie
# for trial/file 1 there are 200 frames , trial/file 2 has 215 frames etc
for k, elem in enumerate(num_in_each) :
tot_frame_nums = sum(num_in_each[cnt: k+1]) # used to test if the number of frames in
# all these files exceded the "batch_size" limit
if tot_frame_nums>batch_size or len(num_in_each)-1 == k: # condition met, these files together
# meet the max requirment to load together as a batch
extract_inds.append([cnt, k+1])
cnt = k+1 # reset to the current iter
if np.diff(extract_inds[-1]) > 1: # if there is more than one file then we want to take off the last file
# because it excedes the set number of frames
extract_inds[-1][-1] = extract_inds[-1][-1]-1
cnt = cnt-1
file_list_chunks = []
file_Y_list_chunks = []
for i, ii in enumerate(extract_inds):
file_list_chunks.append(file_trial_list[ii[0]:ii[1]])
file_Y_list_chunks.append(file_Y_list[ii[0]:ii[1]])
self.to_fit = to_fit #set to True to return XY and False to return X
self.file_trial_list = file_trial_list
self.file_Y_list = file_Y_list
self.batch_size = batch_size
self.extract_inds = extract_inds
self.num_in_each = num_in_each
self.file_list_chunks = file_list_chunks
self.file_Y_list_chunks = file_Y_list_chunks
def __len__(self) :
return len(self.extract_inds)
def __getitem__(self, num_2_extract) :
# raw_X, raw_Y = self._build_data(self.file_list_chunks[num_2_extract],
# self.file_Y_list_chunks[num_2_extract])
raw_X = self._generate_X(self.file_list_chunks[num_2_extract])
rgb_batch = np.repeat(raw_X[..., np.newaxis], 3, -1)
IMG_SIZE = 96 # All images will be resized to 160x160. This is the size of MobileNetV2 input sizes
rgb_tensor = tf.cast(rgb_batch, tf.float32) # convert to tf tensor with float32 dtypes
rgb_tensor = (rgb_tensor/127.5) - 1 # /127.5 = 0:2, -1 = -1:1 requirement for mobilenetV2
rgb_tensor = tf.image.resize(rgb_tensor, (IMG_SIZE, IMG_SIZE)) # resizing
self.IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
rgb_tensor_aug = rgb_tensor
# print(len(raw_Y))
# for i, ims in enumerate(rgb_tensor):
# # print(i)
# tmp1 = fux_wit_imgs(20, ims)
# rgb_tensor_aug = np.append(rgb_tensor_aug, tmp1, 0)
if self.to_fit:
raw_Y = self._generate_Y(self.file_Y_list_chunks[num_2_extract])
return rgb_tensor_aug, raw_Y
else:
return rgb_tensor_aug
# def _getitem__tmp(self, touch_aug_num, no_touch_aug_num)
def get_single_trials(self, num_2_extract) :
# raw_X, raw_Y = self._build_data([self.file_trial_list[num_2_extract]],
# [self.file_Y_list[num_2_extract]])
raw_X = self._generate_X(self.file_list_chunks[num_2_extract])
raw_Y = self._generate_Y(self.file_Y_list_chunks[num_2_extract])
frame_index = scipy.io.loadmat(self.frame_ind_list[num_2_extract])
frame_index = frame_index['relevantIdx']
frame_index = frame_index[0]
rgb_batch = np.repeat(raw_X[..., np.newaxis], 3, -1)
IMG_SIZE = 96 # All images will be resized to 160x160. This is the size of MobileNetV2 input sizes
rgb_tensor = tf.cast(rgb_batch, tf.float32) # convert to tf tensor with float32 dtypes
rgb_tensor = (rgb_tensor/127.5) - 1 # /127.5 = 0:2, -1 = -1:1 requirement for mobilenetV2
rgb_tensor = tf.image.resize(rgb_tensor, (IMG_SIZE, IMG_SIZE)) # resizing
self.IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
rgb_tensor_aug = rgb_tensor
# print(len(raw_Y))
# for i, ims in enumerate(rgb_tensor):
# print(i)
# tmp1 = fux_wit_imgs(20, ims)
# rgb_tensor_aug = np.append(rgb_tensor_aug, tmp1, 0)
return rgb_tensor_aug, raw_Y
# return rgb_tensor, raw_Y, frame_index#, trial_file_num
# Function to generate an image tensor and corresponding label array
def _build_data(self, x_files, y_files) :
"""Phils original build data structure used to generate X and Y together. It has been broken down into _generate_X and _generate_Y. Delete ASAP"""
cnt1 = -1;
for k in range(len(y_files)):
cnt1 = cnt1 + 1
tmp1 = scipy.io.loadmat(x_files[cnt1])
tmp2 = scipy.io.loadmat(y_files[cnt1])
Xtmp = tmp1['finalMat']
Ytmp = tmp2['touchClass']
if cnt1==0:
raw_X = Xtmp
raw_Y = Ytmp
else:
raw_X = np.concatenate((raw_X,Xtmp), axis=0)
raw_Y = np.concatenate((raw_Y,Ytmp), axis=0)
return raw_X, raw_Y
def _generate_X(self, x_files) :
cnt1 = -1
for k in range(len(x_files)):
cnt1 = cnt1 + 1
tmp1 = scipy.io.loadmat(x_files[cnt1])
Xtmp = tmp1['finalMat']
if cnt1==0:
raw_X = Xtmp
else:
raw_X = np.concatenate((raw_X,Xtmp), axis=0)
return raw_X
def _generate_Y(self, y_files) :
cnt1 = -1
for k in range(len(y_files)):
cnt1 = cnt1 + 1
tmp2 = scipy.io.loadmat(y_files[cnt1])
Ytmp = tmp2['touchClass']
if cnt1==0:
raw_Y = Ytmp
else:
raw_Y = np.concatenate((raw_Y,Ytmp), axis=0)
return raw_Y
def plot_batch_distribution(self):
# randomly select a batch and generate images and labels
batch_num = np.random.choice(np.arange(0, len(self.file_list_chunks)))
samp_x, samp_y = self.__getitem__(batch_num)
# look at the distribution of classes
plt.pie([1 - np.mean(samp_y), np.mean(samp_y)],
labels=['non-touch frames', 'touch frames'], autopct='%1.1f%%', )
plt.title('class distribution from batch ' + str(batch_num))
plt.show()
# generate indices for positive and negative classes
images_to_sample = 20
neg_class = [i for i, val in enumerate(samp_y) if val == 0]
pos_class = [i for i, val in enumerate(samp_y) if val == 1]
neg_index = np.random.choice(neg_class, images_to_sample)
pos_index = np.random.choice(pos_class, images_to_sample)
# plot sample positive and negative class images
plt.figure(figsize=(10, 10))
for i in range(images_to_sample):
plt.subplot(5, 10, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
_ = plt.imshow(image_transform(samp_x[neg_index[i]]))
plt.xlabel('0')
plt.subplot(5, 10, images_to_sample + i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image_transform(samp_x[pos_index[i]]))
plt.xlabel('1')
plt.suptitle('sample images from batch ' + str(batch_num))
plt.show()
# + id="CufU_utlZv9u" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="066de619-a2c1-49e0-af63-44eb184cd03c"
my_training_batch_generator = My_Custom_Generator([frames[k] for k in T_inds],
[T_class_reordered[k] for k in T_inds],
[frame_num_in_Y_set[k] for k in T_inds],
batch_size,
to_fit = True)
test_batch_generator = My_Custom_Generator([frames[k] for k in T_inds],
[],
[frame_num_in_Y_set[k] for k in T_inds],
batch_size,
to_fit = False)
# + [markdown] id="8IHRXxQOef4G" colab_type="text"
# # Phil's old generator
# + id="Ls0klmrRNy0r" colab_type="code" colab={}
# Function to generate an image tensor and corresponding label array
def build_data(x_files, y_files) :
to_del = 0
start = time.time()
cnt1 = -1
mem_free = 9999
for k in range(len(y_files)):
cnt1 = cnt1 + 1
tmp1 = scipy.io.loadmat(x_files[cnt1])
tmp2 = scipy.io.loadmat(y_files[cnt1])
Xtmp = tmp1['finalMat']
Ytmp = tmp2['touchClass']
if cnt1==0:
raw_X = Xtmp
raw_Y = Ytmp
else:
raw_X = np.concatenate((raw_X,Xtmp), axis=0)
raw_Y = np.concatenate((raw_Y,Ytmp), axis=0)
# if ((time.time() - start) > 10000) or cnt1>=len(x_files)-1:# update every 10 seconds or when loop ends
# print(len(x_files))
# mem = virtual_memory()
# mem_free = mem.free/1024**3;
# start = time.time()
# print('free mem = ' + str(mem_free))
return raw_X, raw_Y
# + [markdown] id="Qasu3Qg73pcL" colab_type="text"
# make a custom class to help load the data in chunks, to prevent crashing from overusing RAM
# This class will
# - chunk the files so that the total number of frames per chunk stays within the "batch_size" variable
#
# + id="OkN3VJIA5XEV" colab_type="code" colab={}
class My_Custom_Generator(keras.utils.Sequence) :
def __init__(self, file_trial_list, file_Y_list, num_in_each, batch_size, to_fit) :
cnt = 0
extract_inds = []
# num_in_each contains the number of frames in each file being loaded, e.g.
# trial/file 1 has 200 frames, trial/file 2 has 215 frames, etc.
for k, elem in enumerate(num_in_each) :
tot_frame_nums = sum(num_in_each[cnt: k+1]) # used to test whether the number of frames in
# all these files exceeded the "batch_size" limit
if tot_frame_nums>batch_size or len(num_in_each)-1 == k: # condition met, these files together
# meet the max requirement to load together as a batch
extract_inds.append([cnt, k+1])
cnt = k+1 # reset to the current iter
if np.diff(extract_inds[-1]) > 1: # if there is more than one file, we drop the last file
# because it exceeds the set number of frames
extract_inds[-1][-1] = extract_inds[-1][-1]-1
cnt = cnt-1
file_list_chunks = []
file_Y_list_chunks = []
for i, ii in enumerate(extract_inds):
file_list_chunks.append(file_trial_list[ii[0]:ii[1]])
file_Y_list_chunks.append(file_Y_list[ii[0]:ii[1]])
self.to_fit = to_fit #set to True to return XY and False to return X
self.file_trial_list = file_trial_list
self.file_Y_list = file_Y_list
self.batch_size = batch_size
self.extract_inds = extract_inds
self.num_in_each = num_in_each
self.file_list_chunks = file_list_chunks
self.file_Y_list_chunks = file_Y_list_chunks
def __len__(self) :
return len(self.extract_inds)
def __getitem__(self, num_2_extract) :
raw_X, raw_Y = build_data(self.file_list_chunks[num_2_extract],
self.file_Y_list_chunks[num_2_extract])
rgb_batch = np.repeat(raw_X[..., np.newaxis], 3, -1)
IMG_SIZE = 96 # All images will be resized to 96x96, the input size for MobileNetV2
rgb_tensor = tf.cast(rgb_batch, tf.float32) # convert to tf tensor with float32 dtypes
rgb_tensor = (rgb_tensor/127.5) - 1 # /127.5 = 0:2, -1 = -1:1 requirement for mobilenetV2
rgb_tensor = tf.image.resize(rgb_tensor, (IMG_SIZE, IMG_SIZE)) # resizing
self.IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
rgb_tensor_aug = rgb_tensor
# print(len(raw_Y))
# for i, ims in enumerate(rgb_tensor):
# # print(i)
# tmp1 = fux_wit_imgs(20, ims)
# rgb_tensor_aug = np.append(rgb_tensor_aug, tmp1, 0)
if self.to_fit:
return rgb_tensor_aug, raw_Y
else:
return rgb_tensor_aug
# def _getitem__tmp(self, touch_aug_num, no_touch_aug_num)
def get_single_trials(self, num_2_extract) :
raw_X, raw_Y = build_data([self.file_trial_list[num_2_extract]],
[self.file_Y_list[num_2_extract]])
frame_index = scipy.io.loadmat(self.frame_ind_list[num_2_extract])
frame_index = frame_index['relevantIdx']
frame_index = frame_index[0]
rgb_batch = np.repeat(raw_X[..., np.newaxis], 3, -1)
IMG_SIZE = 96 # All images will be resized to 96x96, the input size for MobileNetV2
rgb_tensor = tf.cast(rgb_batch, tf.float32) # convert to tf tensor with float32 dtypes
rgb_tensor = (rgb_tensor/127.5) - 1 # /127.5 = 0:2, -1 = -1:1 requirement for mobilenetV2
rgb_tensor = tf.image.resize(rgb_tensor, (IMG_SIZE, IMG_SIZE)) # resizing
self.IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)
rgb_tensor_aug = rgb_tensor
# print(len(raw_Y))
# for i, ims in enumerate(rgb_tensor):
# print(i)
# tmp1 = fux_wit_imgs(20, ims)
# rgb_tensor_aug = np.append(rgb_tensor_aug, tmp1, 0)
return rgb_tensor_aug, raw_Y
# return rgb_tensor, raw_Y, frame_index#, trial_file_num
def plot_batch_distribution(self):
# randomly select a batch and generate images and labels
batch_num = np.random.choice(np.arange(0, len(self.file_list_chunks)))
samp_x, samp_y = self.__getitem__(batch_num)
# look at the distribution of classes
plt.pie([1 - np.mean(samp_y), np.mean(samp_y)],
labels=['non-touch frames', 'touch frames'], autopct='%1.1f%%', )
plt.title('class distribution from batch ' + str(batch_num))
plt.show()
# generate indices for positive and negative classes
images_to_sample = 20
neg_class = [i for i, val in enumerate(samp_y) if val == 0]
pos_class = [i for i, val in enumerate(samp_y) if val == 1]
neg_index = np.random.choice(neg_class, images_to_sample)
pos_index = np.random.choice(pos_class, images_to_sample)
# plot sample positive and negative class images
plt.figure(figsize=(10, 10))
for i in range(images_to_sample):
plt.subplot(5, 10, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
_ = plt.imshow(image_transform(samp_x[neg_index[i]]))
plt.xlabel('0')
plt.subplot(5, 10, images_to_sample + i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image_transform(samp_x[pos_index[i]]))
plt.xlabel('1')
plt.suptitle('sample images from batch ' + str(batch_num))
plt.show()
| whacc/model_checkpoints/autocurator_CNN_v1_5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Todo
# - linear regression using diamond dataset
import pandas as pd
import seaborn as sns
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
df = pd.read_csv('diamonds.csv')
df['cut'] = preprocessing.LabelEncoder().fit_transform(df['cut'])
df['color'] = preprocessing.LabelEncoder().fit_transform(df['color'])
df['clarity'] = preprocessing.LabelEncoder().fit_transform(df['clarity'])
df.drop(['depth', 'table'], axis=1, inplace=True)
# +
X = df.drop('price', axis=1)
y = df.price
# -
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
X_train
model2= RandomForestRegressor()
model2 = model2.fit(X_train, y_train)
predictions2 = model2.predict(X_test)
predictions2
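# The imported `mean_squared_error` (so far unused) can score the held-out predictions. A self-contained sketch with made-up prices standing in for `y_test` and `predictions2`:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# illustrative true vs. predicted prices (stand-ins for y_test / predictions2)
y_true = np.array([100.0, 200.0, 300.0])
y_pred = np.array([110.0, 190.0, 310.0])

# RMSE puts the error back in price units, which is easier to interpret
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(rmse)  # → 10.0
```

# The same call applied to `y_test` and `predictions2` would give the test-set error of the random forest.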
| supervised regression/10_prototype.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Linear Systems - Iterative Methods
#
# As with nonlinear methods, we start from an educated guess $x^{(0)}$. Each new iterate is obtained by multiplying the previous one by a matrix $B$, called the **iteration matrix**, and adding a vector $g$. These two must satisfy the consistency relation
#
# $$x = Bx +g$$
#
# We define the error at step $k$ as
#
# $$e^{(k)} = x - x^{(k)} \implies e^{(k + 1)} = Be^{(k)}$$
#
# (The implication follows by subtracting equation (2) from equation (1); see slide 2.)
#
# The fundamental condition for the iteration to reduce the error is that **$B$ should have a spectral radius smaller than 1** (this is necessary and sufficient for convergence from any initial guess, though it says nothing about the transient behaviour of the error): since the error is repeatedly multiplied by $B$, this shrinks it gradually. Moreover, the closer the spectral radius is to 1 from below, the slower the convergence.
#
# $$\rho(B) = max |\lambda_i(B)| < 1$$
#
# A general way of setting up an iterative method is based on a splitting of $A$ using a **preconditioner** $P$, which satisfies
#
# $$A = P - (P - A)$$
#
# Hence,
#
# $$Ax = b \implies Px = (P - A)x + b$$
#
# Usually, preconditioners act on some properties of the system (e.g. in fluid dynamics they may depend on pressure) and are not known in closed form most of the time. We can express $B$ and $g$ as
#
# $$B = P^{-1}(P - A) = I - P^{-1}A$$
# $$g = P^{-1}b$$
#
# The **residual** at iteration k is defined as
#
# $$r^{(k)} = b - Ax^{(k)} = P(x^{(k + 1)}-x^{(k)})$$
#
# If we generalize this formula by adding a parameter $\alpha$, which can be either static or dynamic, to scale the residual, we obtain a family of methods called **Richardson's methods**.
#
# $$P(x^{(k + 1)}-x^{(k)}) = \alpha_kr^{(k)}$$
#
# $\alpha$ cannot be 0, nor change the sign of r.
#
# ## Jacobi method
#
# In the Jacobi method, $P = D = diag(a_{11}, a_{22}, \dots, a_{nn})$ and $\alpha_k = 1$. The Jacobi method is slow since it does not take into account the work already done within the current iteration.
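# As a sketch (not from the slides), one Jacobi sweep can be written in the Richardson form $P(x^{(k+1)} - x^{(k)}) = \alpha_k r^{(k)}$ with $P = D$ and $\alpha_k = 1$; the system below is made up for illustration:

```python
import numpy as np

# made-up diagonally dominant system, for illustration only
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

P = np.diag(np.diag(A))      # Jacobi preconditioner P = D
x = np.zeros_like(b)         # educated guess x^(0)
for _ in range(50):
    r = b - A @ x                  # residual r^(k) = b - A x^(k)
    x = x + np.linalg.solve(P, r)  # P (x^(k+1) - x^(k)) = r^(k), i.e. alpha_k = 1

print(np.allclose(A @ x, b))  # → True
```

# Since $A$ here is strictly diagonally dominant by rows, the iteration converges and the final $x$ solves the system to machine-level tolerance.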
#
# ## Gauss-Seidel method
#
# The preconditioner in this case is the lower triangular matrix $P = D - E$, with $\alpha_k = 1$, where $E$ is the strictly lower triangular part of $A$ multiplied by $-1$ ($E_{ij} = -a_{ij}$ if $i > j$, $0$ elsewhere). It is faster since it uses the components already computed within the current iteration (note the $(k+1)$-indexed terms in the update, which were absent in the previous case).
#
# These methods converge if $A$ is strictly diagonally dominant by rows. If $A$ is symmetric positive definite, then Gauss-Seidel converges. If $A$ is a tridiagonal non-singular matrix with no null diagonal elements, then the two methods are **both divergent or both convergent**; moreover, $\rho(B_{GS}) = \rho(B_J)^2$, so when they converge Gauss-Seidel converges faster.
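# The squared relation between the two spectral radii can be checked numerically on a small tridiagonal example (the matrix is made up for illustration):

```python
import numpy as np

# tridiagonal, non-singular, nonzero diagonal — made up for illustration
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])

D = np.diag(np.diag(A))
E = -np.tril(A, -1)   # E_ij = -a_ij for i > j, 0 elsewhere
F = -np.triu(A, 1)    # F_ij = -a_ij for i < j, 0 elsewhere

B_J = np.linalg.solve(D, E + F)    # Jacobi iteration matrix, equals I - D^{-1} A
B_GS = np.linalg.solve(D - E, F)   # Gauss-Seidel iteration matrix

rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))  # spectral radius
print(np.isclose(rho(B_GS), rho(B_J) ** 2))  # → True
```

# Both radii are below 1 here, so both methods converge, with Gauss-Seidel roughly twice as fast per iteration in the spectral-radius sense.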
#
# ## Richardson method
#
# In the Richardson method, $\alpha$ could be either **stationary preconditioned** or **dynamic preconditioned** (in the latter case, $\alpha$ varies during iterations).
#
# In order to choose $\alpha$, if A and P are symmetric positive definite, we have two optimal criteria:
#
# * **Stationary case**: $\alpha_k = \frac{2}{\lambda_{min} + \lambda_{max}}$
#
# * **Dynamic case**:
#
# $$\alpha_k = \frac{(z^{(k)})^Tr^{(k)}}{(z^{(k)})^TAz^{(k)}}$$
#
# Where $z^{(k)} = P^{-1}r^{(k)}$. This is called the **preconditioned gradient method** because the solution coincides with the configuration of minimum energy, which is characterized by a vanishing gradient. This shows how numerical analysis connects to physics.
#
# If P = I, we replace z with r in the dynamic case.
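# A minimal sketch of this unpreconditioned dynamic case (the classical gradient method), on a made-up symmetric positive definite system:

```python
import numpy as np

# made-up symmetric positive definite system, for illustration only
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.zeros_like(b)
r = b - A @ x                 # initial residual
for _ in range(100):
    if np.linalg.norm(r) < 1e-12:
        break
    z = r                              # P = I, so the preconditioned residual z equals r
    alpha = (z @ r) / (z @ (A @ z))    # dynamic optimal step length
    x = x + alpha * z
    r = b - A @ x

print(np.allclose(A @ x, b))  # → True
```

# Each step minimizes the energy along the current residual direction, which is why the dynamic $\alpha_k$ above is optimal for symmetric positive definite $A$.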
#
# See slide 17 for the steps of the Richardson method. $P$ **should make solving the preconditioned system easy, so that the whole iterative process stays computationally feasible**.
#
# Relation 8 on slide 18 condenses into a single definition the considerations that were previously spread across about 8 separate methods.
#
# $z^{(k)}$ is the **preconditioned error**, that is the error of the system after applying the precondition matrix to A.
#
# We are able to minimize over the coefficient $\alpha$ even though it acts on the error, which cannot be computed directly, thanks to optimization techniques.
| gsarti_notes/lesson_13.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": false}
# %matplotlib inline
import os
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import random
from scMVP.dataset import scMVP_dataloader
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Load customize joint profiling data
# + pycharm={"is_executing": false, "name": "#%%\n"}
data_path = "demo_data/"
# provide your own files
data_name_dict = {
"demo_gene_barcode.txt":"gene_barcodes",\
"demo_gene_count.txt": "gene_expression",\
"demo_gene_name.txt": "gene_names",\
"demo_peak_barcode.txt":"atac_barcodes",\
"demo_peak_expression.txt":"atac_expression",\
"demo_peaks.txt":"atac_names"}
dataset = scMVP_dataloader(data_name_dict, save_path = data_path)
print("ok!")
# -
| demos/scMVP_dataloader.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
from IPython.display import HTML
# +
from sklearn.metrics import f1_score, roc_auc_score, average_precision_score, precision_score, recall_score
import pandas
import numpy as np
import papermill as pm
import json
import matplotlib.pyplot as plt
import os
import uuid
from config import config
from db import Result
import ast
import math
import pickle
from clinical_data_models import features_data
import scrapbook as sb
pandas.options.display.float_format = '{:,.3f}'.format
# -
from evaluate import plot_learning_curve, plot_accuracy_curve, load, get_results, get_labels, transform_binary_probabilities, transform_binary_predictions, calculate_accuracy_loss, plot_confusion_matrix, plot_precision_recall, plot_roc_curve, calculate_pr_auc, calculate_confusion_matrix_stats, calculate_confusion_matrix, plot_precision_recall
from data_gen import data
# + tags=["parameters"]
UUID = "2718a5c3-50cf-4b0c-aa44-8cbf5cd26136"
# -
MODEL = "{}/models/{}_features.sav".format(config.OUTPUT, UUID)
result = Result.query.filter(Result.uuid == UUID).first()
train, validation, test = data(seed=uuid.UUID(result.split_seed), label_form=result.label_form, input_form=result.input_form, train_shuffle=False, test_shuffle=False, validation_shuffle=False, train_augment=False, validation_augment=False, test_augment=False)
print("training N:", len(train))
print("validation N:", len(validation))
print("test N:", len(test))
class_inv = {v: k for k, v in train.class_indices.items()}
print("training {}:".format(class_inv[1]), sum(train.classes))
print("validation {}:".format(class_inv[1]), sum(validation.classes))
print("test {}:".format(class_inv[1]), sum(test.classes))
model = pickle.load(open(MODEL, 'rb'))
model
train_set, train_labels, val_set, val_labels, test_set, test_labels = features_data(train, validation, test)
# # Train
# +
probabilities=model.predict_proba(train_set).tolist()
probabilities = [i[1] for i in probabilities]
predictions=model.predict(train_set).tolist()
labels = get_labels(train)
pm.record("train_labels", list(labels))
pm.record("train_probabilities", probabilities)
pm.record("train_predictions", predictions)
# -
# # Validation
# +
probabilities=model.predict_proba(val_set).tolist()
probabilities = [i[1] for i in probabilities]
predictions=model.predict(val_set).tolist()
labels = get_labels(validation)
pm.record("validation_labels", list(labels))
pm.record("validation_probabilities", probabilities)
pm.record("validation_predictions", predictions)
# -
# # Test
# +
probabilities=model.predict_proba(test_set).tolist()
probabilities = [i[1] for i in probabilities]
predictions=model.predict(test_set).tolist()
labels = get_labels(test)
pm.record("test_labels", list(labels))
pm.record("test_probabilities", list(probabilities))
pm.record("test_predictions", list(predictions))
# -
| evaluate-specific-feature-model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 00 - Predicting letter by letter with One-Hot-Encoding
import tensorflow as tf
import numpy as np
# +
def read_data(file_name):
text = open(file_name, 'r').read()
return text.lower()
class Alphabet:
def __init__(self, text):
from collections import Counter
self._count = Counter(list(text))
self._keys = list(self._count.keys())
self._dict = {}
for idx, key in enumerate(self._keys):
self._dict[key] = idx
def get_count(self):
return self._count
def get_size(self):
return len(self._keys)
def letter_to_index(self, letter):
return self._dict.get( letter, 'err' )
def index_to_letter(self, index):
return self._keys[index]
def one_hot(self, text):
encoded = []
for letter in text:
one_hot = [0] * self.get_size()
one_hot[self.letter_to_index(letter)] = 1
encoded.append(one_hot)
return np.array(encoded)
def to_text(self, one_hots):
indices = np.argmax( one_hots, axis=1 ).tolist()
return "".join([self.index_to_letter(idx) for idx in indices])
def indices_to_text(self, indices):
print("shape")
print(indices.shape)
_indices = indices.tolist()
print(_indices)
return "".join([self.index_to_letter(idx) for idx in _indices])
text = read_data("data/cleaned-rap-lyrics/clean2_pac_.txt")
alphabet = Alphabet(text)
# -
print(f"# unique characters: {alphabet.get_size()}")
# ### One-Hot-Encoding
encoded = np.array(alphabet.one_hot(text))
encoded[:100]
alphabet.to_text(encoded)[:100]
def batch_data(num_data, batch_size):
""" Yield batches with indices until epoch is over.
Parameters
----------
num_data: int
The number of samples in the dataset.
batch_size: int
The batch size used using training.
Returns
-------
batch_ixs: np.array of ints with shape [batch_size,]
Yields arrays of indices of size of the batch size until the epoch is over.
"""
# data_ixs = np.random.permutation(np.arange(num_data))
data_ixs = np.arange(num_data)
ix = 0
while ix + batch_size < num_data:
batch_ixs = data_ixs[ix:ix+batch_size]
ix += batch_size
yield batch_ixs
# ### Multiclass Classification
#
# What we are building in this first instance is a 36-class classifier. Based on the previous characters, our model will predict the next upcoming character!
#
# #### Network Architecture:
# 1. **RNN cell**: LSTM with hidden_layer_size
# 2. **linear output layer**: maps to scores for 36 classes
def sample(predicted, temperature=0.9):
'''
helper function to sample an index from a probability array
our model will output scores for each class
we normalize those outputs and create a probability distribution out of them to sample from
'''
exp_predicted = np.exp(predicted/temperature)
predicted = exp_predicted / np.sum(exp_predicted)
probabilities = np.random.multinomial(1, predicted, 1)
return probabilities
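# As a side note, the `temperature` parameter controls how peaked the normalized distribution is: lower values sharpen it, higher values flatten it. A small self-contained sketch (the scores are made up, and the helper simply re-implements the normalization step used in `sample`):

```python
import numpy as np

scores = np.array([2.0, 1.0, 0.1])   # made-up model outputs

def softmax_with_temperature(predicted, temperature):
    # same normalization as in sample(): exponentiate scaled scores, then normalize
    exp_predicted = np.exp(predicted / temperature)
    return exp_predicted / np.sum(exp_predicted)

cold = softmax_with_temperature(scores, 0.5)   # sharper distribution
hot = softmax_with_temperature(scores, 2.0)    # flatter distribution
print(cold.max() > hot.max())  # → True
```

# Sampling with a low temperature therefore produces more conservative, repetitive text, while a high temperature produces more varied (and noisier) output.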
class RNN:
def __init__(self, name):
self.name = name
self.weights = []
self.biases = []
def build(self, hidden_layer_size, vocab_size, time_steps, l2_reg=0.0):
self.time_steps = time_steps
self.vocab_size = vocab_size
self.X = tf.placeholder(tf.float32, shape=[None, time_steps, vocab_size], name="data")
self.Y = tf.placeholder(tf.int16, shape=[None, vocab_size], name="labels")
_X = tf.transpose(self.X, [1, 0, 2])
_X = tf.reshape(_X, [-1, vocab_size])
_X = tf.split(_X, time_steps, 0)
with tf.variable_scope(self.name, reuse=tf.AUTO_REUSE):
# 1x RNN LSTM Cell
self.rnn_cell = tf.nn.rnn_cell.LSTMCell(hidden_layer_size)
self.outputs, _ = tf.contrib.rnn.static_rnn(self.rnn_cell, _X, dtype=tf.float32)
# 1x linear output layer
W_out = tf.Variable(tf.truncated_normal([hidden_layer_size, vocab_size],
mean=0, stddev=.01))
b_out = tf.Variable(tf.truncated_normal([vocab_size],
mean=0, stddev=.01))
self.weights.append(W_out)
self.biases.append(b_out)
self.last_rnn_output = self.outputs[-1]
self.final_output = self.last_rnn_output @ W_out + b_out
# softmax cross entropy as our loss function (between 36 classes)
self.softmax = tf.nn.softmax_cross_entropy_with_logits_v2(logits=self.final_output,
labels=self.Y)
self.cross_entropy_loss = tf.reduce_mean(self.softmax)
self.loss = self.cross_entropy_loss
self.optimizer = tf.train.AdamOptimizer()
self.train_step= self.optimizer.minimize(self.loss)
self.correct_prediction = tf.equal(tf.argmax(self.Y,1), tf.argmax(self.final_output, 1))
self.accuracy = tf.reduce_mean(tf.cast(self.correct_prediction, tf.float32))*100
def train(self, train_data, train_labels, alphabet, epochs=20, batch_size=128):
train_losses = []
train_accs = []
self.session = tf.Session()
session = self.session
with session.as_default():
session.run(tf.global_variables_initializer())
tr_loss, tr_acc = session.run([self.loss, self.accuracy],
feed_dict={self.X: train_data,
self.Y: train_labels})
train_losses.append(tr_loss)
train_accs.append(tr_acc)
for epoch in range(epochs):
if(epoch + 1) % 1 == 0:
print(f"\n\nEpoch {epoch + 1}/{epochs}")
print(f"Loss: \t {tr_loss}")
print(f"Accuracy:\t {tr_acc}")
for batch_ixs in batch_data(len(train_data), batch_size):
_ = session.run(self.train_step,
feed_dict={
self.X: train_data[batch_ixs],
self.Y: train_labels[batch_ixs],
})
tr_loss, tr_acc = session.run([self.loss, self.accuracy],
feed_dict={self.X: train_data,
self.Y: train_labels
})
train_losses.append(tr_loss)
train_accs.append(tr_acc)
#get one sample of the training set as a seed
seed = train_data[:1:]
#decode the seed characters for printing
seed_chars = ''
for each in seed[0]:
seed_chars += alphabet._keys[np.where(each == max(each))[0][0]]
print ("Seed:" + seed_chars)
#predict next 500 characters
for i in range(500):
if i > 0:
remove_first_char = seed[:,1:,:]
seed = np.append(remove_first_char, np.reshape(probabilities, [1, 1, self.vocab_size]), axis=1)
predicted = session.run([self.final_output], feed_dict = {self.X:seed})
predicted = np.asarray(predicted[0]).astype('float64')[0]
probabilities = sample(predicted)
predicted_chars = alphabet._keys[np.argmax(probabilities)]
seed_chars += predicted_chars
print ('Result:'+ seed_chars)
self.hist = {
'train_losses': np.array(train_losses),
'train_accuracy': np.array(train_accs)
}
text = read_data('data/cleaned-rap-lyrics/clean2_pac_.txt')
# +
step = 1
HIDDEN = 256
VOCAB_SIZE = 37
TIME_STEPS = 20
EPOCHS = 20
def making_one_hot(text, alphabet):
'''
'''
unique_chars = alphabet._keys
len_unique_chars = len(unique_chars)
input_chars = []
output_char = []
for i in range(0, len(text) - TIME_STEPS, step):
input_chars.append(text[i:i+TIME_STEPS])
output_char.append(text[i+TIME_STEPS])
train_data = np.zeros((len(input_chars), TIME_STEPS, len_unique_chars))
target_data = np.zeros((len(input_chars), len_unique_chars))
for i , each in enumerate(input_chars):
for j, char in enumerate(each):
train_data[i, j, unique_chars.index(char)] = 1
target_data[i, unique_chars.index(output_char[i])] = 1
return train_data, target_data, unique_chars, len_unique_chars
# -
tr_data, tr_labels, unique_chars, len_unique = making_one_hot(text, alphabet)
basicRNN = RNN(name = "basic")
basicRNN.build(HIDDEN, VOCAB_SIZE, TIME_STEPS)
basicRNN.train(tr_data, tr_labels, alphabet, epochs=EPOCHS)
# ## Learnings
# Some of the words slowly begin to make sense, but it is mostly gibberish. Instead of predicting letters, we should try to predict word by word!
#
# Have a look at **02-word-embedding** for that!
| 00-no-embedding.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### <NAME>
#
# # Projectile using class
#
# - **Maximum Height**
# \begin{equation}
# H=\frac{{u^2}{\sin^2\theta}}{2g}
# \end{equation}
#
# - **Horizontal Range**
# \begin{equation}
# R=\frac{{u^2}{\sin2\theta}}{g}
# \end{equation}
#
#
#
# - **Time of Flight**
# \begin{equation}
# T=\frac{{2u}{\sin\theta}}{g}
# \end{equation}
import numpy as np # alternatively: from numpy import * (then the np prefix is not needed)
import pandas as pd
import json
import matplotlib.pyplot as plt
# %matplotlib inline
### (a)
class projectile(object):
def __init__(self, g, u, angle):
self.pi = np.pi
self.g = g # acceleration due to gravity
self.u = u # projecting velocity
self.angle = angle # projecting angle
def find_range(self):
RA = self.u**2*(np.sin(2*self.pi*self.angle/180))/self.g # horizontal range
return RA
def find_height(self):
MH = self.u**2*(np.sin(self.pi*self.angle/180)**2)/(2*self.g)# maximum height
return MH
def find_time(self):
TF = 2*self.u*np.sin(self.pi*self.angle/180)/self.g # Time of flight
return TF
P1 =projectile(g=9.8, u=100, angle = 45)
P1.g ,P1.u
P1.angle,P1.find_range(),P1.find_height(),P1.find_time()
### (b)
class projectile(object):
def __init__(self, g, u):
self.pi = np.pi
self.g = g # acceleration due to gravity
self.u = u # projecting velocity
#self.angle = angle # projecting angle
def find_range(self,angle):
RA = self.u**2*(np.sin(2*self.pi*angle/180))/self.g # horizontal range
return RA
def find_height(self,angle):
MH = self.u**2*(np.sin(self.pi*angle/180)**2)/(2*self.g)# maximum height
return MH
def find_time(self,angle):
TF = 2*self.u*np.sin(self.pi*angle/180)/self.g # Time of flight
return TF
P2 =projectile(g=9.8, u=100)
P2.g ,P2.u,P2.find_range(60),P2.find_height(60),P2.find_time(60)
g=9.8 # m/s^2
u = 100
angle = [] # list of angle
RA = [] # list of range
MH = [] # list of max height
TF = [] #list of time of flight
for i in range(0,90+1,5):
ra = P2.find_range(i)
mh = P2.find_height(i)
tf = P2.find_time(i)
angle.append(i)
RA.append(ra) #add element in list RA
MH.append(mh) #add element in list MH
TF.append(tf) #add element in list TF
#Multiplots/Subplots
plt.figure(figsize=[10,4])
plt.subplot(1,2,1)
plt.plot(angle,RA,'rs--' ,label='Range')
plt.plot(angle,MH ,'g-o', label='Max height')
plt.xlabel('Angle(deg)')
plt.ylabel('Distance(m)')
plt.title('Projectile Motion')
plt.legend()
plt.subplot(1,2,2)
plt.plot(angle,TF,'k^:' ,label='Time of flight')
plt.xlabel('Angle(deg)')
plt.ylabel('Time(sec)')
plt.title('Projectile Motion')
plt.legend()
plt.savefig('plot/projective.png')
#plt.savefig('projective.eps')
#plt.show()
data={} # to save data in dictionary
data.update({"Angle":angle,"Range":RA ,"Max.Height": MH,"Time of flight":TF})
#print(data)
df = pd.DataFrame(data)
df.head()
with open("data/projectile.json", 'w')as f: #to load data in json file
json.dump(data,f)
# to load the json file back
with open("data/projectile.json", 'r') as f:
uploaded_data = json.load(f)
df.to_csv("data/projectile.csv") # save data in csv format
uploaded_data = pd.read_csv("data/projectile.csv")
uploaded_data.head(3)
| projectile_class.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.3 64-bit (conda)
# name: python383jvsc74a57bd0c9065c558bee4c4db638e2f8f7496f90f39fc7fab4c41fb820e5cc84613df2a7
# ---
from PIL import Image, ImageFilter
img = Image.open('FH3_1_cvsr.png')
img_blur = img.filter(ImageFilter.GaussianBlur(1))
img_blur.show()
| gaussian_blur.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Make sure helpers functionality can be imported
import os
import sys
project_path, _ = os.path.split(os.getcwd())
if project_path not in sys.path:
sys.path.insert(0, project_path)
# +
# Dependencies
# pip install numpy
# pip install pandas
# pip install matplotlib
# Ignore warnings
import warnings; warnings.simplefilter("ignore")
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# -
# ## Load an experimental data
# +
# Load an example dataset
from sklearn.datasets import load_wine
dataset = load_wine()
X = dataset.data
y = dataset.target
feature_names = dataset.feature_names
print(dataset.get('DESCR'))
# -
# ## Plot the box-violin graph
# ### 1. Plot single graph
# +
from helpers.exploration.visualization import plot_box_violin
# Create temporary DataFrame
df = pd.DataFrame({feature_names[i]: X[:, i] for i in range(len(feature_names))})
# Add the target variable into the DataFrame
df["class"] = y
# Get example feature
x_temp = "class"
y_temp = feature_names[0]
# Plot the box-violin graph
plot_box_violin(x_temp,
y_temp,
df,
fig_size=(6, 6),
fig_show=True,
save_as=None,
y_label=y_temp)
# -
# ### 2. Plot multiple graphs
# +
from helpers.exploration.visualization import plot_box_violin
# Create temporary DataFrame
df = pd.DataFrame({feature_names[i]: X[:, i] for i in range(len(feature_names))})
# Add the target variable into the DataFrame
df["class"] = y
# Prepare the figure
fig = plt.figure(figsize=(18, 16))
# Plot the box-violin graph for the first 12 example features
for vol, i in enumerate(range(12), 1):
# Get example feature
x_temp = "class"
y_temp = feature_names[i]
# Add the subplot (create axes)
ax = fig.add_subplot(4, 3, vol)
# Plot the box-violin graph
plot_box_violin(x_temp,
y_temp,
df,
ax=ax,
fig_size=(6, 6),
fig_show=False,
save_as=None,
y_label=y_temp)
# -
# ## Plot missing values
# +
from helpers.exploration.visualization import plot_missing_values
# Create temporary DataFrame
df = pd.DataFrame({feature_names[i]: X[:, i] for i in range(len(feature_names))})
df = df.iloc[1:20, :]
# Define the number of NaNs per feature
num_nans = 5
# Add some features with missing data
n = np.random.randn(df.shape[0], 1)
n.ravel()[np.random.choice(n.size, num_nans, replace=False)] = np.nan
df["f1"] = n
n = np.random.randn(df.shape[0], 1)
n.ravel()[np.random.choice(n.size, num_nans, replace=False)] = np.nan
df["f2"] = n
n = np.random.randn(df.shape[0], 1)
n.ravel()[np.random.choice(n.size, num_nans, replace=False)] = np.nan
df["f3"] = n
# Plot the missing values
plot_missing_values(df, save_as=None)
| notebooks/exploration_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + deletable=true editable=true
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
import numpy as np
from datetime import datetime
from yahoo_finance import Share
# %matplotlib inline
stock = Share('YHOO')
print(stock.get_open())
# + deletable=true editable=true
print(stock.get_price())
# + deletable=true editable=true
print(stock.get_trade_datetime())
# + deletable=true editable=true
stock_hist = pd.DataFrame(stock.get_historical('2014-04-01', '2017-01-31'))
'''[{u'Volume': u'28720000', u'Symbol': u'YHOO',
u'Adj_Close': u'35.83', u'High': u'35.89',
u'Low': u'34.12', u'Date': u'2014-04-29',
u'Close': u'35.83', u'Open': u'34.37'},
{u'Volume': u'30422000', u'Symbol': u'YHOO',
u'Adj_Close': u'33.99', u'High': u'35.00',
u'Low': u'33.65', u'Date': u'2014-04-28',
u'Close': u'33.99', u'Open': u'34.67'},
{u'Volume': u'19391100', u'Symbol': u'YHOO',
u'Adj_Close': u'34.48', u'High': u'35.10',
u'Low': u'34.29', u'Date': u'2014-04-25',
u'Close': u'34.48', u'Open': u'35.03'}]'''
# + deletable=true editable=true
# Note: .astype(datetime) does not parse date strings; pd.to_datetime does
stock_hist['Date'] = pd.to_datetime(stock_hist['Date'])
stock_hist['Close'] = stock_hist['Close'].astype(float)
stock_hist['Open'] = stock_hist['Open'].astype(float)
# + deletable=true editable=true
stock_hist.head()
# + [markdown] deletable=true editable=true
# ## Project Scope
#
# The beginning of this project investigates the difference between the previous day's close and the following day's open. A predictive algorithm will then predict whether the stock price will open lower or higher.
#
# This will be a basic categorical approach to stock price prediction.
#
# #### Additional Work
#
# Secondary work will include magnitude of changes, ensemble approaches, and possibly principal component analysis to produce more accurate results.
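# The close-versus-open comparison described above can be sketched on a toy frame (the prices below are made up for illustration; the real notebook uses the Yahoo history):

```python
import numpy as np
import pandas as pd

# Hypothetical daily prices
toy = pd.DataFrame({'Open': [34.37, 34.67, 35.03],
                    'Close': [35.83, 33.99, 34.48]})
# Line up each day's open with the previous day's close
toy['prevClose'] = toy['Close'].shift(1)
# Categorical label: did the stock open at or above the previous close?
# (The first row has no previous close, so its label is meaningless.)
toy['openCategory'] = np.where(toy['Open'] - toy['prevClose'] >= 0, 'POS', 'NEG')
```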
# + deletable=true editable=true
#stock_hist.plot.line(x='Date',y='Close')
#plt.xticks(rotation=90)
# + deletable=true editable=true
#stock.get_percent_change_from_200_day_moving_average()
# + deletable=true editable=true
#stock.get_year_range()
# + deletable=true editable=true
#Create new column with previous close data to easily
#calculate difference between open and close
stock_hist['prevClose'] = stock_hist['Close'].shift(1)
stock_hist['CloseCategory'] = np.where((stock_hist['Open'] - stock_hist['Close']) >= 0, 'POS','NEG')
stock_hist['OpenCloseDiff'] = stock_hist['Open']-stock_hist['Close']
# + deletable=true editable=true
stock_hist['openDiff'] = stock_hist['Open']-stock_hist['prevClose']
# + deletable=true editable=true
#create a categorical output
# openCategory
# prevCloseCategory
stock_hist['openCategory'] = np.where(stock_hist['openDiff'] >= 0, 'POS','NEG')
# + deletable=true editable=true
stock_hist.head()
# + deletable=true editable=true
from __future__ import print_function
import statsmodels.api as sm
from patsy import dmatrices
y, X = dmatrices('openDiff ~ OpenCloseDiff', data=stock_hist, return_type='dataframe')
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
# +
import matplotlib
# %matplotlib inline
fig, ax = plt.subplots()
ax.scatter(stock_hist['OpenCloseDiff'], stock_hist['openDiff'])
x_0 = stock_hist['OpenCloseDiff'].min()
y_0 = stock_hist['openDiff'].min()
x_1 = stock_hist['OpenCloseDiff'].max()
y_1 = 1.0327 * (x_1 - x_0) - 0.0108
# Draw these two points with big triangles to make it clear
# where they lie
ax.scatter([x_0, x_1], [y_0, y_1], marker='^', s=150, c='r')
# And now connect them
ax.plot([x_0, x_1], [y_0, y_1], c='r')
# -
| stock_general.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ````
# AESM1450 - Geophysical Prospecting -- Controlled-Source ElectroMagnetic (CSEM) Modelling
# ````
# # 4. 3D Modelling
#
# In this tutorial we start with a 1D model and compute it with a 1D modeller and a 3D modeller. We then successively reduce the extent of the target layer, moving from a 1D model towards a 2D model, and observe when the 1D results start to fail to predict the responses.
import emg3d
import empymod
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib notebook
plt.style.use('ggplot')
# ### Define Model
#
# - We assume a deep sea model, so we completely ignore the air.
# - Water resistivity of 0.3 Ohm.m, background has 1 Ohm.m.
# - 100 m target layer of 100 Ohm.m at 2 km below the seafloor.
# +
# Depth model
depth = [-2000, -4000, -4100]
# Corresponding resistivity models
res_bg = [0.3, 1, 1, 1] # Background model
res_tg = [0.3, 1, 100, 1] # Resistive model
# -
# ### Define Survey
#
# - Source at the origin, receivers up to 10 km.
# - All inline Exx.
# - We use 3 distinct frequencies.
#
# Note that we now use `empymod.bipole` instead of `empymod.dipole`. In this way the source and receiver input are the same for `empymod` and `emg3d`. The important difference is: Instead of an `ab`-configuration as before we have to provide azimuth and dip information. So `ab=11`, hence $E_{xx}$, corresponds to azimuth $\theta=0$ and dip $\varphi=0$.
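# A sketch of the azimuth/dip convention assumed here (stated as an assumption -- check the `empymod` documentation for the authoritative definition): azimuth rotates in the horizontal plane from x towards y, and dip rotates out of that plane.

```python
import numpy as np

def unit_vector(azimuth, dip):
    """Direction cosines of a dipole for given azimuth and dip (degrees)."""
    a, d = np.deg2rad(azimuth), np.deg2rad(dip)
    return np.array([np.cos(d) * np.cos(a),
                     np.cos(d) * np.sin(a),
                     np.sin(d)])

# azimuth = dip = 0 recovers a purely x-directed dipole, i.e. the
# E_xx (ab=11) configuration used in this notebook
print(unit_vector(0, 0))  # -> [1. 0. 0.]
```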
# +
# Source and receivers
# [x, y, z, azimuth, dip]
src = [0, 0, -1950, 0, 0]
off = np.arange(5, 101)*100
rec = [off, off*0, -2000, 0, 0]
# Frequencies
freq = np.array([0.1, 0.5, 1])
# -
# ### Calculate 1D responses and plot them
resp_bg = empymod.bipole(src, rec, depth, res_bg, freq)
resp_tg = empymod.bipole(src, rec, depth, res_tg, freq)
# +
plt.figure(figsize=(9, 4))
ls = ['-', '--', ':']
ax1 = plt.subplot(121)
plt.title('Amplitude')
for i, f in enumerate(freq):
plt.plot(off/1e3, resp_bg[i, :].amp(), 'k', ls=ls[i], label=f"{f:.2f} Hz")
plt.plot(off/1e3, resp_tg[i, :].amp(), 'C0', ls=ls[i])
plt.legend()
plt.xlabel('Offset (km)')
plt.ylabel('$|E_x|$ (V/m)')
plt.yscale('log')
ax2 = plt.subplot(122)
plt.title('Phase')
for i, f in enumerate(freq):
plt.plot(off/1e3, resp_bg[i, :].pha(), 'k', ls=ls[i], label=f"{f:.2f} Hz")
plt.plot(off/1e3, resp_tg[i, :].pha(), 'C0', ls=ls[i])
plt.legend()
plt.xlabel('Offset (km)')
plt.ylabel(r'$\phi(E_x)$ (deg)')
ax2.yaxis.tick_right()
ax2.yaxis.set_label_position("right")
plt.tight_layout()
plt.show()
# -
# # Introduction to `emg3d`
#
# `emg3d` is a 3D modeller, and 3D modellers are complex things. There are many parameters that play a crucial role in them:
# - Computational mesh extent (big enough to avoid boundary effects).
# - Mesh: Cells must be small enough to yield accurate results, yet as big as possible to reduce runtime and memory usage.
# - How to put your model onto the mesh, and how to retrieve your responses.
#
# In this initial example we provide you with all the required inputs, and you should only have to adjust
# - the frequency (0.1, 0.5, 1 Hz), and
# - the lateral extent in x-direction of the target zone.
#
# For lots of examples have a look at the gallery: https://empymod.github.io/emg3d-gallery
#
# ## Create a mesh
#
# The mesh we create has at its core cells of 100x100x100 m, where core means:
# - x: -100, 0, 100, ..., 9900, 10000, 10100 (edge locations)
# - x: -50, 50, 150, ..., 9850, 9950, 10050 (centre locations)
# - y: -100, 0, 100 (edge locations)
# - z: -4100, -4000, ..., -2000, -1900 (edge locations)
#
# The $E_x$-field is located at the x-centre, but on the y- and z-nodes. So this discretization ensures that our sources and receivers are exactly at these locations. This is not a requirement, but it yields the most precise results. However, the three-dimensional cubic spline included in `emg3d` does a pretty good job at interpolating if your source and receivers are not exactly at these locations.
#
# Outside of this core-domain we increase the cell-size by a factor 1.5 with each cell, to quickly get to large distances.
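# The geometric stretching can be sketched in a few lines (a standalone illustration; the actual padding is handled by `emg3d.construct_mesh` below):

```python
import numpy as np

# Starting from a 100 m core cell, each padding cell is a factor
# 1.5 wider than the previous one
cs, alpha, npad = 100.0, 1.5, 12
widths = cs * alpha ** np.arange(1, npad + 1)
print(widths[:3])    # 150, 225, 337.5 m
print(widths.sum())  # total padding extent in one direction (~38.6 km)
```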
# +
cs = 100 # Base cell size
nx = 10200/cs # Center points from 0 to 10 km
ny = 2 # 2D, so we start stretching straight away
nz = 2150/cs # Regular up to target plus one in the other dir.
npadx = 12
npady = 14
npadz = 10
alpha = 1.5 # Stretching outside
mesh = emg3d.construct_mesh(
frequency=0.5, # The frequency we are looking at
center=(src[0], src[1]-100, rec[2]),
properties=[res_tg[0], res_tg[2], res_tg[1], res_tg[0]],
domain=([-50, 10050], [-100, 100], [-4100, -1900]),
min_width_limits=100,
max_buffer=50000,
)
mesh
# -
# To look at cell centers and cell edges use the following commands:
#
# mesh.vectorCCx # Cell centers in x-direction
# mesh.vectorNx # Cell nodes in x-direction
#
# and the same for `y` and `z`.
# ## Put the resistivity model on the mesh
#
# We write a little function here that we can reuse then to adjust our model.
# +
def create_model(xmin, xmax, plot=False):
# Initiate with the resistivity of water
res = np.ones(mesh.nC)*res_tg[0]
# Put the background resistivity to all cells below depth[0]
res[mesh.gridCC[:, 2] < depth[0]] = res_tg[1]
# Include the target
target_inds = (
(mesh.gridCC[:, 0] > xmin) & (mesh.gridCC[:, 0] < xmax) & # Indices based on x-coordinate
(mesh.gridCC[:, 2] < depth[1]) & (mesh.gridCC[:, 2] > depth[2]) # AND depth
)
res[target_inds] = res_tg[2]
# Create an emg3d Model instance
model = emg3d.Model(mesh, res)
# QC our model
if plot:
mesh.plot_3d_slicer(
np.log10(model.property_x), clim=[-1, 2],
xlim=[-100, 10100], ylim=[-500, 500], zlim=[-4500, 100],
)
return model
# Get the 1D model
model = create_model(-np.inf, np.inf, plot=True)
# -
# ## Create the source field
fi = 1 # Current frequency index
sfield = emg3d.get_source_field(mesh, src, freq[fi])
print(f"Current frequency: {freq[fi]} Hz")
# ## Calculate the field
solve_inp = {
'grid': mesh,
'sfield': sfield,
'sslsolver': True,
'semicoarsening': True,
'linerelaxation': True,
}
efield = emg3d.solve(model=model, verb=-1, **solve_inp)
# ## Obtain the responses at the receiver
egd_resp = emg3d.get_receiver_response(mesh, efield, (*rec, ))
# ## Plot it
# +
def plot_it(fi, resp):
"""Define a plot-function to re-use."""
plt.figure(figsize=(9, 4))
ax1 = plt.subplot(121)
plt.title(f"Amplitude; f={freq[fi]}Hz")
plt.plot(off/1e3, resp_bg[fi, :].amp(), 'k', label=f"empymod, background")
plt.plot(off/1e3, resp_tg[fi, :].amp(), 'C0-', label="empymod, target")
plt.plot(off/1e3, resp.amp(), 'C1-.', label='emg3d, target')
plt.legend()
plt.xlabel('Offset (km)')
plt.ylabel('$|E_x|$ (V/m)')
plt.yscale('log')
ax2 = plt.subplot(122)
plt.title(f"Phase; f={freq[fi]}Hz")
plt.plot(off/1e3, resp_bg[fi, :].pha(), 'k')
plt.plot(off/1e3, resp_tg[fi, :].pha(), 'C0-')
plt.plot(off/1e3, resp.pha(), 'C1-.')
plt.xlabel('Offset (km)')
plt.ylabel(r'$\phi(E_x)$ (deg)')
ax2.yaxis.tick_right()
ax2.yaxis.set_label_position("right")
plt.tight_layout()
plt.show()
plot_it(fi, egd_resp)
# -
# ## Adjusting the model
#
# Now everything is in place, and we can simply start adjusting the `model` by keeping the `mesh` and the `sfield` the same.
# Adjust the x-extent of the target
model = create_model(xmin=0, xmax=8000, plot=True)
# Re-calculate
efield = emg3d.solve(model=model, verb=-1, **solve_inp)
egd_resp = emg3d.get_receiver_response(mesh, efield, (*rec, ))
# Plot it
plot_it(fi, egd_resp)
# # Task
#
# - Move the x-dimension of the target and see its effect.
# - Observe that at offsets where the receivers are no longer over the target the amplitude curve becomes parallel to the background amplitude curve.
# - What can you say about the phase behaviour for those offsets?
# - Try to figure out when the 1D assumption breaks down and you have to calculate 3D models. Can you come up with a rule of thumb?
# - Does your rule of thumb also work for the other two frequencies? Or do you have to include frequency as a parameter into your rule of thumb?
# # Further tasks
#
# - If you are fast you can also try to adjust the notebook to include an analysis of the target width in the y-direction. In this case you have to carefully adjust the `mesh`-creation for the y-direction as well.
# - Going even further, you could investigate the importance of target thickness or target depth. Again, be careful when meshing, this time in the z-direction.
empymod.Report([emg3d, 'discretize'])
| 4-3D-Modelling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Pentode Modeling
# * Model Parameter Extraction
# * Model Parameter Verification
#
# This experiment uses data extracted from a vacuum tube datasheet and scipy.optimize to calculate the [Child-Langmuir](http://www.john-a-harper.com/tubes201/) parameters used for circuit simulation.
#
# $$I_a = K (V_{g1k} + D_{g2}V_{g2k} + D_aV_{ak})^\frac{3}{2}$$
#
# Now we're adding the [van der Veen K modifier](http://www.amazon.com/gp/product/0905705904),
#
# $$\alpha = \alpha_0\left(\frac{2}{\pi}\arctan \left(\frac{V_{ak}}{V_{g2k}}\right)\right)^\frac{1}{n}$$
#
# $$I_a = \alpha K (V_{g1k} + D_{g2}V_{g2k} + D_aV_{ak})^\frac{3}{2}$$
#
# $$I_a = \alpha_0\left(\frac{2}{\pi}\arctan\left(\frac{V_{ak}}{V_{g2k}}\right)\right)^\frac{1}{n} K \left(V_{g1k} + D_{g2}V_{g2k} + D_aV_{ak}\right)^\frac{3}{2}$$
#
#
#
# We are going to use curve fitting to determine $$K, D_a, D_{g2},\alpha_0, \text{ and } n$$
#
# Then we can use [Leach's pentode](http://users.ece.gatech.edu/mleach/papers/tubeamp/tubeamp.pdf) SPICE model with the van der Veen modifier.
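# As a sketch, the modified expression can be evaluated directly for a single operating point. All parameter values below are illustrative assumptions, not the fitted ones:

```python
from math import atan, pi

def ia_vdv(vg1k, vg2k, vak, K, Da, Dg2, a0, n):
    """Van-der-Veen-modified Child-Langmuir anode current (sketch)."""
    drive = vg1k + Dg2 * vg2k + Da * vak
    if drive <= 0:
        return 0.0  # tube is cut off
    alpha = a0 * ((2.0 / pi) * atan(float(vak) / vg2k)) ** (1.0 / n)
    return alpha * K * drive ** 1.5

# Illustrative operating point and parameters
ia = ia_vdv(vg1k=-8.0, vg2k=360.0, vak=250.0,
            K=2e-3, Da=0.01, Dg2=0.06, a0=1.0, n=1.5)
```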
# + slideshow={"slide_type": "skip"}
import scipy
from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt
from math import pi,atan,log,pow,exp
# + [markdown] slideshow={"slide_type": "slide"}
# Starting with the [Philips EL34 data sheet](data/el34-philips-1958.pdf), create a PNG of the
# 
# import this image into [engauge](https://github.com/markummitchell/engauge-digitizer)
# + [markdown] slideshow={"slide_type": "slide"}
# Create 9 curves then use 'curve point tool' to add points to each curve
# 
# + [markdown] slideshow={"slide_type": "slide"}
# Change export options to "Raw Xs and Ys" and "One curve on each line", otherwise engauge will do some interpolating of your points
# 
# export a csv file
# + slideshow={"slide_type": "slide"}
# %cat data/el34-philips-1958-360V.csv
# + [markdown] slideshow={"slide_type": "subslide"}
# Need to create scipy array like this
#
# x = scipy.array( [[360, -0.0, 9.66], [360, -0.0, 22.99], ...
#
# y = scipy.array( [0.17962, 0.26382, 0.3227, 0.37863, ...
#
# Vaks = scipy.array( [9.66, 22.99, 41.49, 70.55, 116.61, ...
#
# from the extracted curves
# + slideshow={"slide_type": "subslide"}
fname = "data/el34-philips-1958-360V.csv"
f = open(fname,'r').readlines()
deltaVgk = -4.0
n = 1.50
VgkVak = []
Iak = []
Vaks = []
vg2k = 360
for l in f:
l = l.strip()
if len(l): # skip blank lines
if l[0] == 'x':
vn = float(l.split("Curve")[1]) - 1.0
Vgk = vn * deltaVgk
continue
else:plt.xkcd()
(Vak,i) = l.split(',')
VgkVak.append([vg2k,float(Vgk),float(Vak)])
Iak.append(float(i))
Vaks.append(float(Vak))
x = scipy.array(VgkVak)
y = scipy.array(Iak)
Vaks = scipy.array(Vaks)
# + slideshow={"slide_type": "slide"}
# %matplotlib inline
def func(x,K,Da,Dg2,a0,n):
rv = []
for VV in x:
Vg2k = VV[0]
Vg1k = VV[1]
Vak = VV[2]
t = Vg1k + Dg2 * Vg2k + Da * Vak
if t > 0:
a = a0 * ((2/pi) * atan(Vak/Vg2k))**(1/n)
Ia = a * K * t**n
else:
Ia = 0
# print "func",Vg2k,Vg1k,Vak,t,K,Da,Dg2,a0,n
rv.append(Ia)
return rv
popt, pcov = curve_fit(func, x, y,p0=[0.5,0.05,0.05,0.02,5])
#print popt,pcov
(K,Da,Dg2,a0,n) = popt
print "K =",K
print "Da =",Da
print "Dg2 =",Dg2
print "a0 =",a0
print "n =",n
# + slideshow={"slide_type": "slide"}
Vg2k = x[0][0]
def IaCalc(Vg1k,Vak):
t = Vg1k + Dg2 * Vg2k + Da * Vak
if t > 0:
a = a0 * ((2/pi) * atan(Vak/Vg2k))**(1/n)
Ia = a * K * t**n
else:
Ia = 0
# print "IaCalc",Vgk,Vak,t,Ia
return Ia
Vgk = np.linspace(0,-32,9)
Vak = np.linspace(0,400,201)
vIaCalc = np.vectorize(IaCalc,otypes=[np.float])
Iavdv = vIaCalc(Vgk[:,None],Vak[None,:])
plt.figure(figsize=(14,6))
for i in range(len(Vgk)):
plt.plot(Vak,Iavdv[i],label=Vgk[i])
plt.scatter(Vaks,y,marker="+")
plt.legend(loc='upper left')
plt.suptitle('EL34@%dV Child-Langmuir-Compton-VanDerVeen Curve-Fit K/Da/Dg2 Model (Philips 1958)'%Vg2k, fontsize=14, fontweight='bold')
plt.grid()
plt.ylim((0,0.5))
plt.xlim((0,400))
plt.show()
# -
# Trying the [Koren's triode](http://www.normankoren.com/Audio/Tubemodspice_article.html) phenomenological model.
#
# $$E_1 = \frac{E_{G2}}{k_P} \log\left(1 + e^{k_P \left(\frac{1}{\mu} + \frac{E_{G1}}{E_{G2}}\right)}\right)$$
#
# $$I_P = \left(\frac{{E_1}^X}{k_{G1}}\right) \left(1+\operatorname{sgn}(E_1)\right)\arctan\left(\frac{E_P}{k_{VB}}\right)$$
#
# Need to fit $X, k_{G1}, k_P, k_{VB}$
#
#
# +
mu = 11.0
def sgn(val):
if val >= 0:
return 1
if val < 0:
return -1
def funcKoren(x,X,kG1,kP,kVB):
rv = []
for VV in x:
EG2 = VV[0]
EG1 = VV[1]
EP = VV[2]
if kP < 0:
kP = 0
#print EG2,EG1,EP,kG1,kP,kVB,exp(kP*(1/mu + EG1/EG2))
E1 = (EG2/kP) * log(1 + exp(kP*(1/mu + EG1/EG2)))
if E1 > 0:
IP = (pow(E1,X)/kG1)*(1 + sgn(E1))*atan(EP/kVB)
else:
IP = 0
rv.append(IP)
return rv
popt, pcov = curve_fit(funcKoren,x,y,p0=[1.3,1000,40,20])
#print popt,pcov
(X,kG1,kP,kVB) = popt
print "X=%.8f kG1=%.8f kP=%.8f kVB=%.8f"%(X,kG1,kP,kVB)
# koren's values 12AX7 mu=100 X=1.4 kG1=1060 kP=600 kVB=300
# -
# <pre>
# SPICE model
# see http://www.normankoren.com/Audio/Tubemodspice_article_2.html#Appendix_A
# .SUBCKT 6550 1 2 3 4 ; P G1 C G2 (PENTODE)
# + PARAMS: MU=7.9 EX=1.35 KG1=890 KG2=4200 KP=60 KVB=24
# E1 7 0 VALUE={V(4,3)/KP*LOG(1+EXP((1/MU+V(2,3)/V(4,3))*KP))}
# G1 1 3 VALUE={(PWR(V(7),EX)+PWRS(V(7),EX))/KG1*ATAN(V(1,3)/KVB)}
# G2 4 3 VALUE={(EXP(EX*(LOG((V(4,3)/MU)+V(2,3)))))/KG2}
# </pre>
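# The fitted parameters can be dropped into a subcircuit of this shape. A sketch of a formatter (the MU, EX and KG2 values passed below are illustrative placeholders, since KG2 is not fitted in this notebook):

```python
def koren_pentode_subckt(name, mu, ex, kg1, kg2, kp, kvb):
    """Format Koren pentode parameters as a SPICE .SUBCKT string."""
    return "\n".join([
        ".SUBCKT %s 1 2 3 4 ; P G1 C G2 (PENTODE)" % name,
        "+ PARAMS: MU=%.4g EX=%.4g KG1=%.4g KG2=%.4g KP=%.4g KVB=%.4g"
        % (mu, ex, kg1, kg2, kp, kvb),
        "E1 7 0 VALUE={V(4,3)/KP*LOG(1+EXP((1/MU+V(2,3)/V(4,3))*KP))}",
        "G1 1 3 VALUE={(PWR(V(7),EX)+PWRS(V(7),EX))/KG1*ATAN(V(1,3)/KVB)}",
        "G2 4 3 VALUE={(EXP(EX*(LOG((V(4,3)/MU)+V(2,3)))))/KG2}",
        ".ENDS",
    ])

print(koren_pentode_subckt("EL34", 11.0, 1.35, 890.0, 4200.0, 60.0, 24.0))
```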
# +
EG2 = x[0][0]
def IaCalcKoren(EG1,EP):
global X,kG1,kP,kVB,mu
E1 = (EG2/kP) * log(1 + exp(kP*(1/mu + EG1/EG2)))
if E1 > 0:
IP = (pow(E1,X)/kG1)*(1 + sgn(E1))*atan(EP/kVB)
else:
IP = 0
return IP
Vgk = np.linspace(0,-32,9)
Vak = np.linspace(0,400,201)
vIaCalcKoren = np.vectorize(IaCalcKoren,otypes=[np.float])
Iakoren = vIaCalcKoren(Vgk[:,None],Vak[None,:])
plt.figure(figsize=(14,6))
for i in range(len(Vgk)):
plt.plot(Vak,Iakoren[i],label=Vgk[i])
plt.scatter(Vaks,y,marker="+")
plt.legend(loc='upper left')
plt.suptitle('EL34@%dV Child-Langmuir-Compton-Koren Curve-Fit Model (Philips 1958)'%Vg2k, fontsize=14, fontweight='bold')
plt.grid()
plt.ylim((0,0.5))
plt.xlim((0,400))
plt.show()
# +
plt.figure(figsize=(14,6))
for i in range(len(Vgk)):
plt.plot(Vak,Iavdv[i],label=Vgk[i],color='red')
plt.plot(Vak,Iakoren[i],label=Vgk[i],color='blue')
plt.scatter(Vaks,y,marker="+")
plt.legend(loc='upper left')
plt.suptitle('EL34@%dV CLCVDV & CLCK Curve-Fit Model (Philips 1958)'%Vg2k, fontsize=14, fontweight='bold')
plt.grid()
plt.ylim((0,0.5))
plt.xlim((0,400))
plt.show()
# -
| experiments/02-modeling/pentode/pentode-modeling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:neural_prophet]
# language: python
# name: conda-env-neural_prophet-py
# ---
# [](https://colab.research.google.com/github/ourownstory/neural_prophet/blob/master/example_notebooks/autoregression_yosemite_temps.ipynb)
# # DeepAR
#
# This example shows how to use the DeepAR model module. We implemented this model under the same API as NeuralProphet, for easy comparison of the results of NeuralProphet and SOTA models.
#
# Our implementation is based on DeepAR from the PyTorch Forecasting library. The model parameters are inherited automatically from the dataset structure if `from_dataset` is set to True.
#
# For more detail on hyperparameters, please follow https://github.com/jdb78/pytorch-forecasting/blob/master/pytorch_forecasting/models/nbeats/__init__.py
from neuralprophet.forecaster_additional_models import DeepAR
import pandas as pd
# +
if 'google.colab' in str(get_ipython()):
# !pip install git+https://github.com/adasegroup/neural_prophet.git # may take a while
# #!pip install neuralprophet # much faster, but may not have the latest upgrades/bugfixes
data_location = "https://raw.githubusercontent.com/ourownstory/neural_prophet/master/"
else:
data_location = "../"
df = pd.read_csv(data_location + "example_data/yosemite_temps.csv")
df.head(3)
freq = '5min'
df = df.iloc[:1000]
# -
deepar = DeepAR(
context_length=60,
prediction_length=20,
batch_size = 32,
epochs = 10,
num_gpus = 0,
patience_early_stopping = 10,
early_stop = True,
learning_rate=3e-4,
auto_lr_find=True,
num_workers=3,
loss_func = 'normaldistributionloss',
hidden_size=10,
rnn_layers=2,
dropout=0.1,
)
deepar.fit(df, freq = freq)
future = deepar.make_future_dataframe(df, freq, periods=10, n_historic_predictions=10)
forecast = deepar.predict(future)
forecast.iloc[-15:]
f = deepar.plot(forecast)
| example_notebooks/DeepAR_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # 3.2 Model-Based Predictive Controls
# Prepared by (C) <NAME>
#
# The objective here is not to teach you optimization, or even what model-based predictive control is. There are just too many excellent references out there; I would not want to embarrass myself by producing low-grade explanations.
#
# You can start by reading [this webpage](http://www.me.berkeley.edu/~yzhwang/MPC/optimization.html) made by grad students from Berkeley's [MPC lab](http://www.mpc.berkeley.edu), [this document](http://kom.aau.dk/~mju/downloads/otherDocuments/MPCusingCVX.pdf) demonstrates a simple application that uses [CVX](http://cvxr.com/cvx/) which is in part developed by Prof. <NAME> from Stanford. He also has [his lectures on convex optimization](https://lagunita.stanford.edu/courses/Engineering/CVX101/Winter2014/about) posted online -- free -- along with his co-authored [book](http://www.stanford.edu/~boyd/cvxbook/), which is excellent -- also free.
# You can also take a look at [this article](http://www.sciencedirect.com/science/article/pii/S0378778811004105) from the OptiControl team of Switzerland.
#
# What is demo-ed in this notebook is not the "state of the art", it, however, provides an easy way to try a few ideas. If your objective is speed, I suggest you look into using compiled solvers tailored to the problem, *e.g.* [CVXGEN](http://cvxgen.com/docs/index.html), for non-commercial use.
#
# I will start by showing what an optimization looks like since too many throw around that word not knowing what they're talking about. Running 10000 simulations with different parameters and returning the best is **not** an optimization; it's more of a design exploration. In that sense, genetic algorithms cannot guarantee an *optimal* result more so than a very good design. Genetic algorithms are explorative algorithms, whereas optimization is exploitative. A hybrid is possible -- see memetic evolutionary algorithms.
#
# Right after, we will go through 2 simple examples to see how MPC can be used to minimize energy use in a zone. The first example is a simple room with an ideal HVAC system that adds/removes heat directly. The second example uses the same room and, additionally, has a radiant slab system.
#
# ### General optimization formulation
# $\begin{align}
# minimize~ & f(x) \\
# subject~to~~ & h_i(x) = 0, & i = 1,...,m \\
# & g_j(x) >= 0, & j = 1,...,n
# \end{align}$
#
# The function $f(x)$ will be minimized while the equality constraints $h_i(x)$ and inequality constraints $g_j(x)$ must be satisfied. The function $f(x)$ is referred to as the cost function. The constraints can also be included in the cost function as soft constraints. As soft constraints, they may be violated, but at a high cost; whereas as hard constraints, if a given solution violates the constraints, it is completely rejected.
#
# The solution $x^*$ is called the optimal solution *iff* $f(x^*) \leq f(x)~\forall~x$. (If equality holds for some $x \neq x^*$, then more than one optimum exists and $x^*$ is referred to as a weak minimum; if the inequality is strict for all other $x$, it is the unique, strict minimum.)
#
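# The general form above maps directly onto `scipy.optimize.minimize`; here is a toy instance with one equality and one inequality constraint, using the same SLSQP solver and constraints format as later in this notebook:

```python
import numpy as np
from scipy.optimize import minimize

# minimize f(x) = x0^2 + x1^2
# subject to h(x) = x0 + x1 - 1 = 0  (equality)
#            g(x) = x0 - 0.2   >= 0  (inequality, inactive at the optimum)
cons = ({'type': 'eq',   'fun': lambda x: x[0] + x[1] - 1.0},
        {'type': 'ineq', 'fun': lambda x: x[0] - 0.2})
res = minimize(lambda x: x[0]**2 + x[1]**2, x0=[0.0, 0.0],
               method='SLSQP', constraints=cons)
print(res.x)  # -> approximately [0.5, 0.5]
```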
# ### Optimization formulation for MPC
# Taken from [Oldewurtel *et al.* (2012)](http://www.sciencedirect.com/science/article/pii/S0378778811004105)
#
# <img src="Figures/mpc_eq.png" width=400 align="left"/>
# <img src="Figures/mpc_cost.png" width=450 align="left"/>
# <img src="Figures/mpc_cons.png" width=450 align="left"/>
# ---
# ## Example 1: Simple glazed room, MPC to control ideal HVAC system
# Here we are considering a south-facing room with a modest window (WWR: 40%):
#
# <img src="Figures/mpc_room.png" width=250 align="left"/>
# As we've done in previous chapters, we will model the room as a thermal network:
#
# <img src="Figures/mpc_network.png" width=350 align="left"/>
# The HVAC system is the only variable we can control. We could use On-Off controls, Proportional control, PI, PID, or use a schedule, but they don't explicitly consider thermal mass -- which we will see in the next example.
#
# Let's begin!
#
# ### MPC Formulation
# What I particularly like about MPC is that the control objectives are explicit to the user. What is the objective? If it's to reduce the energy expenditure, make that the cost function. Is it to guarantee a tight temperature control? Make that the cost function. Or maybe it's both? No problem, include them both!
#
# Next come the constraints. You don't want to cycle the heat pump? Make it a constraint! Careful though: you don't want to over-constrain the optimization so that it fails to find feasible solutions. You can always *soften* constraints into penalties towards the cost function, so violating them just a tad won't lead to a failure in the optimization.
#
# **Cost Function**
# $\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
# J = \norm{ Q_{\text{HVAC}} }_2$
#
# **Constraints: System Dynamics**
# ${\sum}_{j}{[U_{ij}^{v+t+1} (T_{j}^{v+t+1}-T_{i}^{v+t+1})]}+{\sum}_{k}{[U_{ik}^{v+t+1} (T_{k}^{v+t+1}-T_{i}^{v+t+1})]} - \cfrac{C}{\Delta t} (T_{i}^{v+t+1}-T_{i}^{v+t}) + \dot{Q}_{i}^{v+t+1} = 0, \,\,\, v\in[0,ph-1]$
#
# $t$: simulation timestep
# $v$: timestep within prediction horizon
# $ph$: prediction horizon
#
# **Constraints**
# $T^{v+t+1}_{\text{Room}} - T^{v+t+1}_{\text{Heat, SP}} >= 0, \,\,\, v\in[0,ph-1] \\
# T^{v+t+1}_{\text{Cool, SP}} - T^{v+t+1}_{\text{Room}} >= 0, \,\,\, v\in[0,ph-1] \\
# Q_{\text{Capacity, Heat}} - Q_{\text{HVAC}}^{v+t+1} >= 0, \,\,\, v\in[0,ph-1] \\
# Q_{\text{HVAC}}^{v+t+1} - Q_{\text{Capacity, Cool}} >= 0, \,\,\, v\in[0,ph-1] \\$
#
# **Other Constraints that can be Considered**
# Limiting the rate of change (slew rate):
# $C_{\text{ROC}} = \norm{ T^{t+1}-T^t }_2$
#
# Peak power:
# $C_{\text{P}} = \$/kW \cdot \text{max}(\dot{Q}_{\text{A}}^+)$
#
# Energy consumption:
# $C_{\text{E}} = \frac{\$/kWh \cdot \Delta t}{3600} \sum_{t=0}^T \lvert \dot{Q} \lvert$
#
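# As a small numeric sketch, the peak-power and energy terms above can be computed from a heating/cooling profile (the tariffs and the profile below are made-up values):

```python
import numpy as np

dt = 900.0                          # timestep (s)
price_kwh, price_kw = 0.10, 15.0    # illustrative tariffs: $/kWh, $/kW
Q = np.array([2000., -1500., 0., 3000.])  # W; + heating, - cooling

# Peak-power charge: only positive (drawn) power counts here
C_P = price_kw * max(Q.max(), 0.0) / 1000.0
# Energy charge: sum |Q|*dt, converted from W*s to kWh
C_E = price_kwh * dt / 3600.0 * np.abs(Q).sum() / 1000.0
print(C_P, C_E)  # -> 45.0 0.1625
```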
# ---------
#
# Onto Python!
#
# ### Load Dependencies
# +
import numpy as np
import matplotlib.pylab as plt
from scipy.optimize import minimize
import time
# from tqdm import tqdm # progress bar
# %matplotlib inline
plt.rcParams['figure.figsize'] = (10, 6)
from IPython import display
# Helper functions to keep this notebook simple and tidy
from code_base import simfun
# from code_base import optfun
# from code_base import load_data
# -
# ### Simulation Setup
# +
# Steps per hour, number of timesteps, timestep and number of days to simulate
st = 4
nt, dt, days = int(st*24), 3600/st, 1
# Prediction horizon
ph = 8*st # hours * steps/hour
# -
# ### Room Model
# +
# Number of nodes: to solve for, with known temperatures, controllable
nN, nM, nS = 1, 1, 1
nSwhere = 1 # which node receives the controlled variable? 0-indexed according to nN
NS = nN + nS
U = np.zeros((nN,nN)) # K/W
F = np.zeros((nN,nM)) # K/W
C = np.zeros((nN)) # J/K
# Nodel connections: here you define how each node is connected to one another
# Node Number: Object
# 0: room air
# No nodal connections between nodes to solve for (only 1!)
# U[0,n] = ...
# Node Number with known temperatures: Object
# 0: ambient air
# Connection between room air node to ambient air node
A, WWR = 3.*4., 0.4
F[0,0] = ( (26.*WWR*A)**-1 + 1.1/(WWR*A) + (6.*WWR*A)**-1)**-1 + \
((26.*(1-WWR)*A)**-1 + 4/((1-WWR)*A) + (6.*(1-WWR)*A)**-1)**-1
# Nodes with capacitance
C[0] = (A*5.)*1.*1.005*40. # 40x multiplier on room capacitance
# -
# ### Initialize temperature and heat source variables; set initial conditions; set boundary conditions; set limits
T, TK, Q, S = np.zeros((nt*days, nN)), np.zeros((nt*days, nM)), np.zeros((nt*days, nN)), np.zeros((nt*days, nS))
T[:] = np.nan # for clearer plots
S[:] = np.nan # for clearer plots
T[0,] = 17.5
TK[:, 0] = simfun.periodic(-10., 10., 15., 86400., dt, nt, days) # ambient temp
Q[:,0] = simfun.halfperiodic(0.4*A*WWR*800., 12., 86400., dt, nt, days) # solar gains
Q[:,0] += simfun.linearRamp(450., 400., 17., 9., 0., dt, nt, days).flatten() # internal gains + equip
minS = -5000. # cooling capacity
maxS = 5000. # heating capacity
#HeatSP = simfun.periodic(16., 5., 15., 86400., dt, nt, days)
#CoolSP = simfun.periodic(20., 5., 15., 86400., dt, nt, days)
HeatSP = simfun.linearRamp(21., 5., 18., 6., 0., dt, nt, days)
CoolSP = simfun.linearRamp(26.,-5., 18., 6., 0., dt, nt, days)
# ### Constraints
def getConstraints(i, cons):
# Current State
if i == 0:
cons.append({'type': 'eq', 'fun': lambda x: x[0] - T[ct,0]},)
# System Dynamics
cons.append({'type': 'eq', 'fun': lambda x:
F[0,0]*(TK[ct+i+1,0]-x[NS*(i+1)]) -
C[0]/dt*(x[NS*(i+1)]-x[NS*i]) +
x[NS*i+nN] +
Q[ct+i+1,0] },)
# Constraints
cons.append({'type': 'ineq', 'fun': lambda x: CoolSP[ct+i+1] - x[NS*(i+1)]},)
cons.append({'type': 'ineq', 'fun': lambda x: x[NS*(i+1)] - HeatSP[ct+i+1]},)
cons.append({'type': 'ineq', 'fun': lambda x: maxS - x[NS*i+nN]},)
cons.append({'type': 'ineq', 'fun': lambda x: x[NS*i+nN] - minS},)
return cons
# +
timer = time.time()
optRng = range(days*nt-ph)
for ct in optRng:
# Cost function
costfun = lambda x: np.linalg.norm(x[(nN):-(nN+nS):(nN+nS)]) # minimize heat input
# Initial guess for ct=0, warm start with previous optimal for rest
if ct ==0: x0 = np.zeros((ph,nN+nS)).reshape(-1,1)
else: x0 = np.vstack((res.x[(nN+nS)::].reshape(-1,1), np.zeros((nN+nS,1))))
# Constraints; loop through prediction steps and get constraints for every timestep
cons = []
for i in range(ph-1):
getConstraints(i, cons)
cons = tuple(cons)
# Run optimization
res = minimize(costfun, x0, method='SLSQP', constraints=cons,
options={'ftol': 1e-3, 'disp': False, 'maxiter': 50})
# Break on error
if res.status != 0:
print "Optimization Failed!"
print "Timestep: %i, Reason: %i"%(ct,res.status)
break
# Sort and store results
T[ct+1,] = res.x[nN+nS:2*nN+nS]
S[ct+1,] = res.x[nN:nN+nS]
tempT = res.x.reshape(-1,NS)[2:,0:nN]
tempS = res.x.reshape(-1,NS)[1:-1,nN:nN+nS]
del cons
# Plot
ax1 = plt.subplot2grid((6,1), (0, 0), rowspan=2)
ax2 = plt.subplot2grid((6,1), (2, 0), rowspan=2, sharex=ax1)
ax3 = plt.subplot2grid((6,1), (4, 0), sharex=ax1)
ax4 = plt.subplot2grid((6,1), (5, 0), sharex=ax1)
ax1.hold(True)
ax1.plot(T,'g')
ax1.plot(range(ct+2,ct+ph),tempT, 'g--')
ax1.axvline(ct+1, color='crimson') # draw control horizon
ax1.axvline(ct+ph, color='lime') # draw prediction horizon
ax1.plot(HeatSP,'r--')
ax1.plot(CoolSP,'b--')
ax1.set_ylim([15,32])
ax1.set_ylabel('Room')
ax2.plot(S,'r')
ax2.plot(range(ct+2,ct+ph),tempS, 'r--')
ax2.axvline(ct+1, color='crimson') # draw control horizon
ax2.axvline(ct+ph, color='lime') # draw prediction horizon
ax2.set_ylabel('HVAC')
ax3.plot(TK, color='navy')
ax3.set_ylabel('TK')
ax4.plot(Q, color='gold')
ax4.set_ylabel('Gains')
plt.subplots_adjust(hspace=0)
display.clear_output(wait=True)
display.display(plt.gcf())
print "Elapsed time: %s" % (time.time()-timer)
# -
# ---
| 3.2 Model-Based Predictive Controls.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python
# language: python
# name: conda-env-python-py
# ---
# <a href="http://cocl.us/pytorch_link_top">
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/Pytochtop.png" width="750" alt="IBM Product " />
# </a>
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/cc-logo-square.png" width="200" alt="cognitiveclass.ai logo" />
# <h1>Image Datasets and Transforms</h1>
# <h2>Table of Contents</h2>
# <p>In this lab, you will build a dataset object for images; many of the processes can be applied to a larger dataset. Then you will apply pre-built transforms from Torchvision Transforms to that dataset.</p>
# <ul>
# <li><a href="#auxiliary"> Auxiliary Functions </a></li>
# <li><a href="#Dataset"> Datasets</a></li>
# <li><a href="#Torchvision">Torchvision Transforms</a></li>
# </ul>
# <p>Estimated Time Needed: <strong>25 min</strong></p>
#
# <hr>
# <h2>Preparation</h2>
# Download the dataset and unzip the files in your data directory (**to speed up the download, this dataset has only 100 samples**):
# ! wget https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/datasets/img.tar.gz -P /resources/data
# !tar -xf /resources/data/img.tar.gz
# !wget https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/datasets/index.csv
# We will use this function in the lab:
def show_data(data_sample, shape = (28, 28)):
plt.imshow(data_sample[0].numpy().reshape(shape), cmap='gray')
plt.title('y = ' + str(data_sample[1]))
# The following are the libraries we are going to use for this lab. <code>torch.manual_seed()</code> forces the random number generator to produce the same sequence every time the notebook is rerun.
# +
# These are the libraries will be used for this lab.
import torch
import matplotlib.pylab as plt
import numpy as np
from torch.utils.data import Dataset, DataLoader
torch.manual_seed(0)
# -
from matplotlib.pyplot import imshow
import matplotlib.pylab as plt
from PIL import Image
import pandas as pd
import os
# <!--Empty Space for separating topics-->
# <h2 id="auxiliary">Auxiliary Functions</h2>
# You will use the following functions as components of a dataset object; in this section, you will review each component independently.
# The path to the csv file with the labels for each image.
# Read CSV file from the URL and print out the first five samples
directory=""
csv_file ='index.csv'
csv_path=os.path.join(directory,csv_file)
# You can load the CSV file and convert it into a dataframe using the Pandas function <code>read_csv()</code>. You can view the dataframe using the method <code>head()</code>.
data_name = pd.read_csv(csv_path)
data_name.head()
# The first column of the dataframe corresponds to the type of clothing. The second column is the name of the image file corresponding to the clothing. You can obtain the path of the first file by using the method <code> <i>DATAFRAME</i>.iloc[0, 1]</code>. The first argument corresponds to the sample number, and the second input corresponds to the column index.
# Get the value on location row 0, column 1 (Notice that index starts at 0)
# remember this dataset has only 100 samples to make the download faster
print('File name:', data_name.iloc[0, 1])
# As the class of the sample is in the first column, you can also obtain the class value as follows.
# +
# Get the value on location row 0, column 0 (Notice that index starts at 0.)
print('y:', data_name.iloc[0, 0])
# -
# Similarly, you can obtain the file name and class type of the second image file:
# +
# Print out the file name and the class number of the element on row 1 (the second row)
print('File name:', data_name.iloc[1, 1])
print('class or y:', data_name.iloc[1, 0])
# -
# The number of samples corresponds to the number of rows in the dataframe. You can obtain the number of rows using the following lines of code. This will correspond to the dataset attribute <code>len</code>.
# +
# Print out the total number of rows in the training dataset
print('The number of rows: ', data_name.shape[0])
# -
# <h2 id="load_image">Load Image</h2>
# To load the image, you need the directory and the image name. You can concatenate the variable <code>directory</code> with the name of the image stored in the dataframe, and store the result in the variable <code>image_name</code>.
# +
# Combine the directory path with file name
image_name =data_name.iloc[1, 1]
image_name
# -
# We can then build the full image path:
image_path=os.path.join(directory,image_name)
image_path
# You can then use the function <code>Image.open</code> to store the image in the variable <code>image</code> and display the image and class.
# +
# Plot the second training image
image = Image.open(image_path)
plt.imshow(image,cmap='gray', vmin=0, vmax=255)
plt.title(data_name.iloc[1, 0])
plt.show()
# -
# You can repeat the process for the 20th image.
# +
# Plot the 20th image
image_name = data_name.iloc[19, 1]
image_path=os.path.join(directory,image_name)
image = Image.open(image_path)
plt.imshow(image,cmap='gray', vmin=0, vmax=255)
plt.title(data_name.iloc[19, 0])
plt.show()
# -
# <hr>
# Create the dataset object.
# <h2 id="data_class">Create a Dataset Class</h2>
# In this section, we will use the components in the last section to build a dataset class and then create an object.
# +
# Create your own dataset object
class Dataset(Dataset):
# Constructor
def __init__(self, csv_file, data_dir, transform=None):
# Image directory
self.data_dir=data_dir
# The transform is going to be used on the image
self.transform = transform
data_dircsv_file=os.path.join(self.data_dir,csv_file)
# Load the CSV file that contains the image info
self.data_name= pd.read_csv(data_dircsv_file)
# Number of images in dataset
self.len=self.data_name.shape[0]
# Get the length
def __len__(self):
return self.len
# Getter
def __getitem__(self, idx):
# Image file path
img_name=os.path.join(self.data_dir,self.data_name.iloc[idx, 1])
# Open image file
image = Image.open(img_name)
# The class label for the image
y = self.data_name.iloc[idx, 0]
# If there is any transform method, apply it onto the image
if self.transform:
image = self.transform(image)
return image, y
# +
# Create the dataset objects
dataset = Dataset(csv_file=csv_file, data_dir=directory)
# -
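# The <code>__len__</code>/<code>__getitem__</code> protocol used by the class above is plain Python; a torch-free sketch (with hypothetical data) responds to <code>len()</code> and indexing the same way:

```python
class ToyDataset:
    def __init__(self, samples):
        self.samples = samples  # list of (image, label) pairs

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]

ds = ToyDataset([("img0", 0), ("img1", 1)])
print(len(ds))  # 2
print(ds[1])    # ('img1', 1)
```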
# Each sample's image and class y are stored in a tuple <code>dataset[sample]</code>. The image is the first element in the tuple, <code>dataset[sample][0]</code>; the label or class is the second, <code>dataset[sample][1]</code>. For example, you can plot the first image and class:
# +
image=dataset[0][0]
y=dataset[0][1]
plt.imshow(image,cmap='gray', vmin=0, vmax=255)
plt.title(y)
plt.show()
# -
y
# Similarly, you can plot the tenth image:
# +
image=dataset[9][0]
y=dataset[9][1]
plt.imshow(image,cmap='gray', vmin=0, vmax=255)
plt.title(y)
plt.show()
# -
# <h2 id="Torchvision"> Torchvision Transforms </h2>
#
# You will focus on the following libraries:
import torchvision.transforms as transforms
# We can apply image transform functions to the dataset object. The image can be cropped and converted to a tensor. We can use <code>transforms.Compose</code>, which we learned about in the previous lab, to combine the two transform functions.
# +
# Combine two transforms: crop and convert to tensor. Apply the compose to the dataset
croptensor_data_transform = transforms.Compose([transforms.CenterCrop(20), transforms.ToTensor()])
dataset = Dataset(csv_file=csv_file , data_dir=directory,transform=croptensor_data_transform )
print("The shape of the first element tensor: ", dataset[0][0].shape)
# -
# We can see the image is now 20 x 20
# <!--Empty Space for separating topics-->
# Let us plot the first image again. Notice we see less of the shoe.
# +
# Plot the first element in the dataset
show_data(dataset[0],shape = (20, 20))
# +
# Plot the second element in the dataset
show_data(dataset[1],shape = (20, 20))
# -
# In the example below, we vertically flip the image and then convert it to a tensor, using <code>transforms.Compose()</code> to combine these two transform functions. Then we plot the flipped image.
# +
# Construct the compose. Apply it to the dataset. Plot the image.
fliptensor_data_transform = transforms.Compose([transforms.RandomVerticalFlip(p=1),transforms.ToTensor()])
dataset = Dataset(csv_file=csv_file , data_dir=directory,transform=fliptensor_data_transform )
show_data(dataset[1])
# -
# <!--Empty Space for separating topics-->
# <h3>Practice</h3>
# Try combining <code>RandomVerticalFlip</code>, <code>RandomHorizontalFlip</code>, and <code>ToTensor</code> into a compose. Apply the compose to the dataset, then use <code>show_data()</code> to plot the second image.
# +
# Practice: Combine vertical flip, horizontal flip and convert to tensor as a compose. Apply the compose on image. Then plot the image
# Type your code here
# -
# Double-click __here__ for the solution.
# <!--
# my_data_transform = transforms.Compose([transforms.RandomVerticalFlip(p = 1), transforms.RandomHorizontalFlip(p = 1), transforms.ToTensor()])
# dataset = Dataset(csv_file=csv_file , data_dir=directory, transform=my_data_transform)
# show_data(dataset[1])
# -->
# <!--Empty Space for separating topics-->
# <a href="http://cocl.us/pytorch_link_bottom">
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/notebook_bottom%20.png" width="750" alt="PyTorch Bottom" />
# </a>
# <h2>About the Authors:</h2>
#
# <a href="https://www.linkedin.com/in/joseph-s-50398b136/"><NAME></a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
# Other contributors: <a href="https://www.linkedin.com/in/michelleccarey/"><NAME></a>, <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a"><NAME></a>
# <hr>
# Copyright © 2018 <a href="cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu">cognitiveclass.ai</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.
| Coursera/IBM Python 01/Course04/1.3.2_Datasets_and_transforms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="QRYXZnXUjzLv" outputId="045166ce-300d-4391-c63d-6e39344a640d"
# %pylab inline
import numpy as np
from matplotlib.patches import Circle
from shapely.geometry import box, Polygon, Point, LineString
from scipy.spatial import Voronoi, voronoi_plot_2d
mpl.rcParams['font.family'] = 'Open Sans'
# + id="oGPXfiPej92K"
# Population sizes of the regions in the Venn diagram
# Just A
# A - B - C
# just_a = 165
# Just B
# B - C - A
just_b = 165
# Just C
# C - B - A
just_c = 103
# A ^ B
# a_intersection_b = 3
# A ^ C
# a_intersection_c = 190
# B ^ C
b_intersection_c = 190
# A ^ B ^ C
# a_intersection_b_intersection_c = 15
# + id="4OW_LrQhkFF_"
# a_x, a_y, a_r = 0,1,1.2
b_x, b_y, b_r = -.5,0,1.2
c_x, c_y, c_r = .5,0,1.2
# A = Point(a_x, a_y).buffer(a_r)
B = Point(b_x, b_y).buffer(b_r)
C = Point(c_x, c_y).buffer(c_r)
# + id="7jmNZGAYkIkC"
def random_points_within(shapely_poly, num_points, min_distance_from_edge=0.05):
shapely_poly = shapely_poly.buffer(-1*min_distance_from_edge)
min_x, min_y, max_x, max_y = shapely_poly.bounds
points = []
while len(points) < num_points:
random_point = Point([random.uniform(min_x, max_x), random.uniform(min_y, max_y)])
if (random_point.within(shapely_poly)):
points.append(np.array(random_point.coords.xy).T)
points = np.vstack(points)
return points
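# The function above is rejection sampling: draw uniformly in the bounding box and keep only the points that fall inside the region. The same idea works for any shape with a membership test; a shapely-free sketch for a circle:

```python
import random

def random_points_in_circle(n, cx=0.0, cy=0.0, r=1.0, seed=0):
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        # Sample uniformly in the circle's bounding box...
        x = rng.uniform(cx - r, cx + r)
        y = rng.uniform(cy - r, cy + r)
        # ...and keep the draw only if it lies inside the circle.
        if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
            pts.append((x, y))
    return pts

pts = random_points_in_circle(100)
print(all(x * x + y * y <= 1.0 for x, y in pts))  # True
```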
# + colab={"base_uri": "https://localhost:8080/", "height": 410} id="zoGx3Ne7kJZY" outputId="ac8ce0e5-4b55-4fd0-a000-b388b24d8125"
# plot A
# plt.plot(np.array(A.boundary.coords.xy).T[:,0], np.array(A.boundary.coords.xy).T[:,1], color=(0.855,0.314,0.196,1.0))
# plt.gca().add_patch(Circle((a_x, a_y), a_r, zorder=0, lw=2, edgecolor=(0.855,0.314,0.196,1.0), color=(0.855,0.314,0.196,.3)))
# plt.text(0,2.3,"Spanish", ha='center', color=(.36,.36,.36))
# plot B
plt.plot(np.array(B.boundary.coords.xy).T[:,0], np.array(B.boundary.coords.xy).T[:,1], color=(0.855,0.314,0.196,1.0))
plt.gca().add_patch(Circle((b_x, b_y), b_r, zorder=0, lw=2, edgecolor=(0.855,0.314,0.196,1.0), color=(0.855,0.314,0.196,.3)))
plt.text(-1.6,-0.6,"Besu", ha='right', color=(.36,.36,.36))
# plot C
plt.plot(np.array(C.boundary.coords.xy).T[:,0], np.array(C.boundary.coords.xy).T[:,1], color=(0.855,0.314,0.196,1.0))
plt.gca().add_patch(Circle((c_x, c_y), c_r, zorder=0, lw=2, edgecolor=(0.855,0.314,0.196,1.0), color=(0.855,0.314,0.196,.3)))
plt.text(1.6,-0.6,"Teku", ha='left', color=(.36,.36,.36))
# Plot the population represented by 100 dots
rand_x_range = (-2,2)
rand_y_range = (-1.5,2.5)
scatter_kwargs = {'color': (.36,.36,.36),
's': 5}
# Plot just A
# points = random_points_within(A.difference(B).difference(C), just_a)
# plt.scatter(points[:,0],points[:,1], **scatter_kwargs)
# plot just B
points = random_points_within(B.difference(C), just_b)
plt.scatter(points[:,0],points[:,1], **scatter_kwargs)
# plot just C
points = random_points_within(C.difference(B), just_c)
plt.scatter(points[:,0],points[:,1], **scatter_kwargs)
# plot A ^ B
# points = random_points_within(A.intersection(B).difference(C), a_intersection_b)
# plt.scatter(points[:,0],points[:,1], **scatter_kwargs)
# plot A ^ C
# points = random_points_within(A.intersection(C).difference(B), a_intersection_c)
# plt.scatter(points[:,0],points[:,1], **scatter_kwargs)
# plot B ^ C
points = random_points_within(B.intersection(C), b_intersection_c)
plt.scatter(points[:,0],points[:,1], **scatter_kwargs)
# plot A ^ B ^ C
# points = random_points_within(A.intersection(B).intersection(C), a_intersection_c)
# plt.scatter(points[:,0],points[:,1], **scatter_kwargs)
# Fine tune the presentation of the graph
plt.gca().set_aspect('equal', 'datalim')
plt.gca().axis('off')
plt.xlim(-3.5,3.5)
plt.ylim(-1.5,2.5)
plt.gcf().set_size_inches(6,5)
# plt.title('A level subjects chosen', color=(.36,.36,.36))
# Save the output
plt.savefig('unrelaxed_Venn.png', dpi=600)
# + [markdown] id="lQ3ex0x4lcEr"
# # With bounded Lloyd relaxation
# + id="sH13tleJld5m"
def apply_bounded_lloyd_relaxation(points, boundary, iterations=5):
points_to_use = points.copy()
for i in range(iterations):
vor = Voronoi(np.vstack([points_to_use, boundary]))
        relevant_regions = [vor.regions[x] for x in vor.point_region[:len(points)]]
regions_coordinates = [np.vstack([vor.vertices[x] for x in region]) for region in relevant_regions]
region_centroids = np.array([Polygon(region).centroid.bounds[:2] for region in regions_coordinates])
points_to_use = region_centroids
return(points_to_use)
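# Lloyd relaxation repeatedly moves each point to the centroid of its Voronoi cell. In 1-D the cells are intervals bounded by the midpoints between neighbouring points (or the domain boundary), which allows a dependency-free sketch of the same fixed-point iteration (hypothetical values):

```python
def lloyd_1d(points, lo, hi, iterations=50):
    pts = sorted(points)
    for _ in range(iterations):
        # Each point's Voronoi cell is bounded by the midpoints to its
        # neighbours (or the domain edges); move it to the cell centre.
        edges = [lo] + [(a + b) / 2 for a, b in zip(pts, pts[1:])] + [hi]
        pts = [(edges[i] + edges[i + 1]) / 2 for i in range(len(pts))]
    return pts

# Clustered points spread out toward an even spacing on [0, 1].
print(lloyd_1d([0.1, 0.15, 0.9], 0.0, 1.0))
```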
# + colab={"base_uri": "https://localhost:8080/", "height": 410} id="-5VS_76nlhi6" outputId="003d975d-35d4-4f2f-e338-2746da328e5b"
# plot A
# plt.plot(np.array(A.boundary.coords.xy).T[:,0], np.array(A.boundary.coords.xy).T[:,1], color=(0.855,0.314,0.196,1.0))
# plt.gca().add_patch(Circle((a_x, a_y), a_r, zorder=0, lw=2, edgecolor=(0.855,0.314,0.196,1.0), color=(0.855,0.314,0.196,.3)))
# plt.text(0,2.3,"Spanish", ha='center', color=(.36,.36,.36))
# plot B
plt.plot(np.array(B.boundary.coords.xy).T[:,0], np.array(B.boundary.coords.xy).T[:,1], color=(0.855,0.314,0.196,1.0))
plt.gca().add_patch(Circle((b_x, b_y), b_r, zorder=0, lw=2, edgecolor=(0.855,0.314,0.196,1.0), color=(0.855,0.314,0.196,.3)))
plt.text(-1.6,-0.6,"Besu", ha='right', color=(.36,.36,.36))
# plot C
plt.plot(np.array(C.boundary.coords.xy).T[:,0], np.array(C.boundary.coords.xy).T[:,1], color=(0.855,0.314,0.196,1.0))
plt.gca().add_patch(Circle((c_x, c_y), c_r, zorder=0, lw=2, edgecolor=(0.855,0.314,0.196,1.0), color=(0.855,0.314,0.196,0.3)))
plt.text(1.6,-0.6,"Teku", ha='left', color=(.36,.36,.36))
# Plot the population
rand_x_range = (-2,2)
rand_y_range = (-1.5,2.5)
scatter_kwargs = {'color': (.36,.36,.36),
's': 5}
# Plot just A
# points = random_points_within(A.difference(B).difference(C), just_a)
# boundary = A.difference(B).difference(C).boundary
# boundary_coordinates = np.array(boundary.coords.xy).T
# relaxed_points = apply_bounded_lloyd_relaxation(points, boundary_coordinates, iterations=100)
# plt.scatter(relaxed_points[:,0], relaxed_points[:,1], **scatter_kwargs)
# plot just B
points = random_points_within(B.difference(C), just_b)
boundary = B.difference(C).boundary
boundary_coordinates = np.array(boundary.coords.xy).T
relaxed_points = apply_bounded_lloyd_relaxation(points, boundary_coordinates, iterations=190)
plt.scatter(relaxed_points[:,0], relaxed_points[:,1], **scatter_kwargs)
# plot just C
points = random_points_within(C.difference(B), just_c)
boundary = C.difference(B).boundary
boundary_coordinates = np.array(boundary.coords.xy).T
relaxed_points = apply_bounded_lloyd_relaxation(points, boundary_coordinates, iterations=190)
plt.scatter(relaxed_points[:,0], relaxed_points[:,1], **scatter_kwargs)
# plot A ^ B
# points = random_points_within(A.intersection(B).difference(C), a_intersection_b)
# boundary = A.intersection(B).difference(C).boundary
# boundary_coordinates = np.array(boundary.coords.xy).T
# relaxed_points = apply_bounded_lloyd_relaxation(points, boundary_coordinates, iterations=100)
# plt.scatter(relaxed_points[:,0], relaxed_points[:,1], **scatter_kwargs)
# plot A ^ C
# points = random_points_within(A.intersection(C).difference(B), a_intersection_c)
# boundary = A.intersection(C).difference(B).boundary
# boundary_coordinates = np.array(boundary.coords.xy).T
# relaxed_points = apply_bounded_lloyd_relaxation(points, boundary_coordinates, iterations=190)
# plt.scatter(relaxed_points[:,0], relaxed_points[:,1], **scatter_kwargs)
# plot B ^ C
points = random_points_within(B.intersection(C), b_intersection_c)
boundary = B.intersection(C).boundary
boundary_coordinates = np.array(boundary.coords.xy).T
relaxed_points = apply_bounded_lloyd_relaxation(points, boundary_coordinates, iterations=190)
plt.scatter(relaxed_points[:,0], relaxed_points[:,1], **scatter_kwargs)
# plot A ^ B ^ C
# points = random_points_within(A.intersection(B).intersection(C), a_intersection_b_intersection_c)
# boundary = A.intersection(B).intersection(C).boundary
# boundary_coordinates = np.array(boundary.coords.xy).T
# relaxed_points = apply_bounded_lloyd_relaxation(points, boundary_coordinates, iterations=100)
# plt.scatter(relaxed_points[:,0], relaxed_points[:,1], **scatter_kwargs)
# Fine tune the presentation of the graph
plt.gca().set_aspect('equal', 'datalim')
plt.gca().axis('off')
plt.xlim(-3.5,3.5)
plt.ylim(-1.5,2.5)
plt.gcf().set_size_inches(6,5)
# plt.title('A level subjects chosen', color=(.36,.36,.36))
# Save the output
# plt.savefig('Venn.png', dpi=600)
plt.savefig("Venn.svg")
| python_notebooks/src/populated_venn_diagram.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import warnings
warnings.filterwarnings('ignore')
# +
EPOCHS = 200
LR = 3e-4
TOLERENCE = 1e-1
BATCH_SIZE_TWO = 32
import pandas as pd
import numpy as np
import random
import torch
import torch.nn.functional as F
import torch.nn as nn
import re
import string
from torchinfo import summary
import torch.optim as optim
from torchtext.legacy import data
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence
from sklearn.model_selection import train_test_split
# -
'''loading the pretrained embedding weights'''
weights=torch.load('CBOW_NEWS.pth')
pre_trained = nn.Embedding.from_pretrained(weights)
pre_trained.weight.requires_grad=False
# +
def collate_batch(batch):
label_list, text_list, length_list = [], [], []
for (_text,_label, _len) in batch:
label_list.append(_label)
length_list.append(_len)
tensor = torch.tensor(_text, dtype=torch.long)
text_list.append(tensor)
text_list = pad_sequence(text_list, batch_first=True)
label_list = torch.tensor(label_list, dtype=torch.float)
length_list = torch.tensor(length_list)
return text_list,label_list, length_list
class VectorizeData(Dataset):
def __init__(self, file):
self.data = pd.read_pickle(file)
def __len__(self):
return self.data.shape[0]
def __getitem__(self, idx):
X = self.data.vector[idx]
lens = self.data.lengths[idx]
y = self.data.label[idx]
return X,y,lens
training = VectorizeData('variable_level_zero.csv')
dt_load = DataLoader(training, batch_size=BATCH_SIZE_TWO, shuffle=False, collate_fn=collate_batch)
# +
# Partial implementation of Cezanne Camacho's 1-D CNN text-classification model,
# as found at https://cezannec.github.io/CNN_Text_Classification/
def binary_accuracy(preds, y):
#round predictions to the closest integer
rounded_preds = torch.round(preds)
correct = (rounded_preds == y).float()
acc = correct.sum() / len(correct)
return acc
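# binary_accuracy thresholds the sigmoid outputs by rounding to the nearest integer and averages the matches; a tensor-free sketch of the same computation (hypothetical predictions):

```python
def binary_accuracy_list(preds, labels):
    # Round each sigmoid output to 0 or 1, then take the fraction correct.
    correct = sum(1 for p, y in zip(preds, labels) if round(p) == y)
    return correct / len(labels)

print(binary_accuracy_list([0.9, 0.2, 0.6, 0.4], [1, 0, 0, 0]))  # 0.75
```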
def create_emb_layer(pre_trained):
num_embeddings = pre_trained.num_embeddings
embedding_dim = pre_trained.embedding_dim
emb_layer = nn.Embedding.from_pretrained(pre_trained.weight.data, freeze=True)
return emb_layer, embedding_dim
class C_DNN(nn.Module):
def __init__(self, pre_trained,num_labels):
super(C_DNN, self).__init__()
self.n_class = num_labels
self.embedding, self.embedding_dim = create_emb_layer(pre_trained)
self.conv1D = nn.Conv2d(1, 100, kernel_size=(3,16), padding=(1,0))
self.label = nn.Linear(100, self.n_class)
self.act = nn.Sigmoid()
def forward(self, x):
embeds = self.embedding(x)
embeds = embeds.unsqueeze(1)
conv1d = self.conv1D(embeds)
relu = F.relu(conv1d).squeeze(3)
maxpool = F.max_pool1d(input=relu, kernel_size=relu.size(2)).squeeze(2)
fc = self.label(maxpool)
sig = self.act(fc)
return sig.squeeze(1)
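# The Conv2d above, with kernel (3, 16) and padding (1, 0), preserves the sequence length while collapsing the embedding dimension (assumed to be 16 here) to 1. The standard convolution output-size formula makes this explicit:

```python
def conv_out(size, kernel, padding=0, stride=1):
    # Standard convolution output-size formula.
    return (size + 2 * padding - kernel) // stride + 1

seq_len, emb_dim = 40, 16  # hypothetical input of shape (batch, 1, seq_len, emb_dim)
print(conv_out(seq_len, 3, padding=1))   # 40 -> sequence length preserved
print(conv_out(emb_dim, 16, padding=0))  # 1  -> embedding dim collapsed
```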
# +
model = C_DNN(pre_trained=pre_trained, num_labels=1)
optimizer = optim.Adam(model.parameters(), lr=LR)
criterion = nn.BCELoss()
def train(dataloader, model, epoch):
total_epoch_loss = 0
total_epoch_acc = 0
steps = 0
model.train()
for idx, batch in enumerate(dataloader):
text,label,lengths = batch
optimizer.zero_grad()
prediction = model(text)
loss = criterion(prediction, label)
acc = binary_accuracy(prediction, label)
        # backpropagate the loss and compute the gradients
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1)
#update the weights
optimizer.step()
steps += 1
if steps % 50 == 0:
            print(f'Epoch: {epoch}, Idx: {idx+1}, Training Loss: {loss.item():.4f}, Training Accuracy: {acc.item()*100:.2f}%')
total_epoch_loss = loss.item()
if total_epoch_loss <= TOLERENCE:
return True
end_training = False
for epoch in range(1, EPOCHS + 1):
end_training=train(dt_load, model, epoch)
if end_training:
filename = "models/model_"+str(3)+'.pth'
torch.save(model.state_dict(), filename)
break
if not end_training:
filename = "models/model_"+str(3)+'.pth'
torch.save(model.state_dict(), filename)
| C-DNN_Model.ipynb |
# ##### Copyright 2021 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# # integer_programming
# <table align="left">
# <td>
# <a href="https://colab.research.google.com/github/google/or-tools/blob/master/examples/notebook/examples/integer_programming.ipynb"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/colab_32px.png"/>Run in Google Colab</a>
# </td>
# <td>
# <a href="https://github.com/google/or-tools/blob/master/examples/python/integer_programming.py"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/github_32px.png"/>View source on GitHub</a>
# </td>
# </table>
# First, you must install the [ortools](https://pypi.org/project/ortools/) package in this colab.
# !pip install ortools
# +
# #!/usr/bin/env python3
# Copyright 2010-2021 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Integer programming examples that show how to use the APIs."""
from ortools.linear_solver import pywraplp
from ortools.init import pywrapinit
def Announce(solver, api_type):
print('---- Integer programming example with ' + solver + ' (' + api_type +
') -----')
def RunIntegerExampleNaturalLanguageAPI(optimization_problem_type):
"""Example of simple integer program with natural language API."""
solver = pywraplp.Solver.CreateSolver(optimization_problem_type)
if not solver:
return
Announce(optimization_problem_type, 'natural language API')
infinity = solver.infinity()
# x1 and x2 are integer non-negative variables.
x1 = solver.IntVar(0.0, infinity, 'x1')
x2 = solver.IntVar(0.0, infinity, 'x2')
solver.Minimize(x1 + 2 * x2)
solver.Add(3 * x1 + 2 * x2 >= 17)
SolveAndPrint(solver, [x1, x2])
def RunIntegerExampleCppStyleAPI(optimization_problem_type):
"""Example of simple integer program with the C++ style API."""
solver = pywraplp.Solver.CreateSolver(optimization_problem_type)
if not solver:
return
Announce(optimization_problem_type, 'C++ style API')
infinity = solver.infinity()
# x1 and x2 are integer non-negative variables.
x1 = solver.IntVar(0.0, infinity, 'x1')
x2 = solver.IntVar(0.0, infinity, 'x2')
# Minimize x1 + 2 * x2.
objective = solver.Objective()
objective.SetCoefficient(x1, 1)
objective.SetCoefficient(x2, 2)
# 2 * x2 + 3 * x1 >= 17.
ct = solver.Constraint(17, infinity)
ct.SetCoefficient(x1, 3)
ct.SetCoefficient(x2, 2)
SolveAndPrint(solver, [x1, x2])
def SolveAndPrint(solver, variable_list):
"""Solve the problem and print the solution."""
print('Number of variables = %d' % solver.NumVariables())
print('Number of constraints = %d' % solver.NumConstraints())
result_status = solver.Solve()
# The problem has an optimal solution.
assert result_status == pywraplp.Solver.OPTIMAL
    # The solution looks legit (when using solvers other than
    # GLOP_LINEAR_PROGRAMMING, verifying the solution is highly recommended!).
assert solver.VerifySolution(1e-7, True)
print('Problem solved in %f milliseconds' % solver.wall_time())
# The objective value of the solution.
print('Optimal objective value = %f' % solver.Objective().Value())
# The value of each variable in the solution.
for variable in variable_list:
print('%s = %f' % (variable.name(), variable.solution_value()))
print('Advanced usage:')
print('Problem solved in %d branch-and-bound nodes' % solver.nodes())
def RunAllIntegerExampleNaturalLanguageAPI():
RunIntegerExampleNaturalLanguageAPI('GLPK')
RunIntegerExampleNaturalLanguageAPI('CBC')
RunIntegerExampleNaturalLanguageAPI('SCIP')
RunIntegerExampleNaturalLanguageAPI('SAT')
RunIntegerExampleNaturalLanguageAPI('Gurobi')
def RunAllIntegerExampleCppStyleAPI():
RunIntegerExampleCppStyleAPI('GLPK')
RunIntegerExampleCppStyleAPI('CBC')
RunIntegerExampleCppStyleAPI('SCIP')
RunIntegerExampleCppStyleAPI('SAT')
RunIntegerExampleCppStyleAPI('Gurobi')
RunAllIntegerExampleNaturalLanguageAPI()
RunAllIntegerExampleCppStyleAPI()
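# Since this model is tiny (minimize x1 + 2*x2 subject to 3*x1 + 2*x2 >= 17 over non-negative integers), the solver's answer can be sanity-checked by brute-force enumeration over a small range:

```python
def brute_force():
    best = None
    for x1 in range(20):
        for x2 in range(20):
            if 3 * x1 + 2 * x2 >= 17:  # feasibility
                obj = x1 + 2 * x2
                if best is None or obj < best[0]:
                    best = (obj, x1, x2)
    return best

print(brute_force())  # (6, 6, 0): optimal objective 6 at x1 = 6, x2 = 0
```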
| examples/notebook/examples/integer_programming.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/tekmologi8/Data-Analytics/blob/main/Pandas_iloc()_and_loc().ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="9zJmufxgbZ39"
# **loc()** : loc() is a label-based data selection method, which means that we pass the name of the row or column we want to select. Unlike iloc(), this method includes the last element of the range passed to it, and it can accept boolean data. Many operations can be performed using the loc() method, for example:
# + colab={"base_uri": "https://localhost:8080/", "height": 331} id="QdgOl3F8YSdh" outputId="b5f30273-c342-487b-e5be-57e0dc78ace0"
# importing the module
import pandas as pd
# creating a sample dataframe
data = pd.DataFrame({'Brand' : ['Mercedes', 'Hyundai', 'Tata',
'Kia', 'Mazda', 'Hyundai',
'Renault', 'ProBox', 'Mercedes'],
'Year' : [2012, 2014, 2011, 2015, 2012,
2016, 2014, 2018, 2019],
'Kms/Driven' : [50000, 30000, 60000,
25000, 10000, 46000,
31000, 15000, 12000],
'City' : ['Kampala', 'Nairobi', 'Mombasa',
'Dodoma', 'Mumias', 'Dodoma',
'Mombasa','Chemelil', 'Garissa'],
'Mileage' : [28, 27, 25, 26, 28,
29, 24, 21, 24]})
# displaying the DataFrame
display(data)
# + colab={"base_uri": "https://localhost:8080/", "height": 81} id="4QniEmJnZM2n" outputId="5564130a-1b5b-48f4-dc5b-46f76325a436"
# Selecting data according to some conditions :
# selecting cars with brand 'Mercedes' and Mileage > 25
display(data.loc[(data.Brand == 'Mercedes') & (data.Mileage > 25)])
# + colab={"base_uri": "https://localhost:8080/", "height": 174} id="R0oB0rFUZ3jo" outputId="0eeb25f7-438c-4040-82ec-47cd7d9f7f51"
# Selecting a range of rows from the DataFrame :
# selecting range of rows from 2 to 5
display(data.loc[2 : 5])
# + colab={"base_uri": "https://localhost:8080/", "height": 331} id="TBjoR6VQb_5-" outputId="6b40b9cf-1fe9-4262-b6fa-73b66e136803"
#Updating the value of any column
# updating values of Mileage if Year < 2015
data.loc[(data.Year < 2015), ['Mileage']] = 22
display(data)
# + [markdown] id="_uXgPNGVd9Ej"
# iloc() : iloc() is an index-based selection method, which means that we pass integer indices to select specific rows/columns. Unlike loc(), this method does not include the last element of the range passed to it, and it does not accept boolean data. Operations performed using iloc() include:
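# The key difference, that loc's label slices include the endpoint while iloc's integer slices exclude it like ordinary Python slicing, can be illustrated without pandas:

```python
rows = ['r0', 'r1', 'r2', 'r3', 'r4', 'r5']

# iloc-style: ordinary Python slicing, endpoint excluded.
print(rows[2:5])      # ['r2', 'r3', 'r4']

# loc-style slices are endpoint-inclusive, so loc[2:5] would also
# return row 5; emulated here by extending the stop index by one.
print(rows[2:5 + 1])  # ['r2', 'r3', 'r4', 'r5']
```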
# + colab={"base_uri": "https://localhost:8080/", "height": 174} id="PChg0a-Ad7dU" outputId="2dac1fb6-5087-476c-d11a-bdeb8c6b53cb"
#Selecting rows using integer indices:
# selecting 0th, 2th, 4th, and 7th index rows
display(data.iloc[[0, 2, 4, 7]])
# + colab={"base_uri": "https://localhost:8080/", "height": 174} id="soFReEstesjK" outputId="c83390bd-b813-4a76-cb17-041dc94627d3"
#Selecting a range of columns and rows simultaneously:
# selecting rows from 1 to 4 and columns from 2 to 4
display(data.iloc[1 : 5, 2 : 5])
| Pandas_iloc()_and_loc().ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
url = r"C:\plasma_data.csv"  # raw string avoids backslash-escape issues in the Windows path
df_plasma=pd.read_csv(url)
headers=['name','Age','Weight','Blood_type','Blood_A','Blood_B','Blood_O','Eligibility','Intentions','PRERT']
df_plasma.columns=headers
df_plasma.head()
# # Descriptive Analysis
df_plasma.dtypes
df_plasma.describe(include="all")
df_plasma = df_plasma.dropna(axis=0)  # dropna returns a new dataframe, so reassign
# # Exploratory Analysis
import matplotlib.pyplot as plt
# +
# Histogram
plt.hist(df_plasma["Age"], color="green")
plt.xlabel("Age")
plt.ylabel("Count")
# -
#Correlation
from scipy import stats
# DROPPING VARIABLE "Blood_type"
df_plasma = df_plasma.drop('Blood_type', axis=1)  # reassign: drop does not modify in place
# Correlation of "Eligibility" with different variables
df_plasma.corr()["Eligibility"].sort_values()
# # ANOVA
anova_age=stats.f_oneway(df_plasma['Eligibility'],df_plasma['Age'])
anova_weight=stats.f_oneway(df_plasma['Eligibility'],df_plasma['Weight'])
anova_intent=stats.f_oneway(df_plasma['Eligibility'],df_plasma['Intentions'])
anova_A=stats.f_oneway(df_plasma['Eligibility'],df_plasma['Blood_A'])
anova_B=stats.f_oneway(df_plasma['Eligibility'],df_plasma['Blood_B'])
anova_O=stats.f_oneway(df_plasma['Eligibility'],df_plasma['Blood_O'])
anova_prert=stats.f_oneway(df_plasma['Eligibility'],df_plasma['PRERT'])
print('anova_age:',anova_age,
'anova_weight:',anova_weight,
'anova_intent:',anova_intent,
'anova_A:',anova_A,
'anova_B:',anova_B,
'anova_O:',anova_O,
'anova_prert:',anova_prert)
# OF THE ABOVE VARIABLES, ONLY "INTENTIONS" AND "PRERT" ARE STATISTICALLY INSIGNIFICANT BECAUSE OF A LOW F-STATISTIC AND A HIGH P-VALUE, SO NEITHER WILL BE CONSIDERED FOR MODEL DEVELOPMENT
#
# # REGRESSION
from sklearn.linear_model import LinearRegression
lm=LinearRegression()
z=df_plasma[['Age','Weight','Blood_A','Blood_B','Blood_O']]
y=df_plasma['Eligibility']
#TRAIN THE MODEL
lm.fit(z,y)
yhat=lm.predict(z)
#INTERCEPT AND COEFFICIENTS
lm.intercept_,lm.coef_
# # Regression Plots
import seaborn as sns
sns.regplot(x=df_plasma['Weight'],y=y,data=df_plasma)
axl=sns.distplot(y,hist=False,color='r',label="Actual value")
sns.distplot(yhat,hist=False,ax=axl,color='b',label='Fitted')
# IN ACCORDANCE WITH OUR ANALYSIS SO FAR, A LINEAR MODEL IS NOT THE BEST FIT FOR OUR DATASET; WE NEED TO TEST OTHER REGRESSION MODELS SUCH AS POLYNOMIAL AND RIDGE REGRESSION, AND SOME OF THE VARIABLES MIGHT NEED A LOG TRANSFORMATION.
| Plasma .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Importing Libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import scipy.io
import re
from math import *
from sklearn import svm
# +
import nltk
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
# Initializing the PorterStemmer
ps = PorterStemmer()
# Downloading the punkt model
#nltk.download('punkt')
# -
sns.set_style('whitegrid')
# %matplotlib inline
# # Functions
# +
def readFile(fileText):
try:
# Read The text file
file = open(fileText, 'r')
fileContent = file.read()
# Closing stream after reading it
file.close()
        # Returning file content
return { "status": True, "content": fileContent, "msg": '' }
except FileNotFoundError as e:
# File can't be found
print(e)
# Returning empty string
return { "status": False, "content": " ", "msg": e }
def getVocabList():
# Reading VocabList
file = readFile('vocab.txt')
if(file["status"]):
# Getting content of the file
fileContent = file["content"]
# Replacing Numbers with ' '
        numberPattern = r"(\d+)"  # raw string avoids invalid-escape warnings
fileContent = re.sub(numberPattern, ' ', fileContent)
# Remove any non alphanumeric characters
nonWordPattern = '[^a-zA-Z0-9]'
fileContent = re.sub( nonWordPattern, ' ', fileContent)
# Replace multiple spaces with single space
spacePattern = "[ ]+"
fileContent = re.sub( spacePattern ,' ', fileContent)
# Tokenize words
try:
# Tokenize all of the words
words = word_tokenize(fileContent)
return words
        # An error occurred
        except:
            print("Some error occurred during tokenization")
return ['']
else:
# reading file has some problems
print("We have some problems in Reading File")
print(file["msg"])
def processEmail(fileName):
# Read The text file
file = readFile(fileName)
if(file["status"]):
# Getting content of the file
fileContent = file["content"]
# Convert string to lowercase
fileContent = fileContent.lower()
# Strip HTML
htmlPattern = "<[^>]*>"
fileContent = re.sub(htmlPattern,' ', fileContent)
# Normalize URLs
        urlPattern = r"(http|ftp|https)://([\w_-]+(?:(?:\.[\w_-]+)+))([\w.,@?^=%&:/~+#-]*[\w@?^=%&/~+#-])?"
fileContent = re.sub(urlPattern,'httpaddr', fileContent)
# Normalize Numbers
        numberPattern = r"(\d+)"
fileContent = re.sub(numberPattern, 'number', fileContent)
# Normalize Email Address
emailPattern = r'[\w\.-]+@[\w\.-]+'
fileContent = re.sub(emailPattern, 'emailaddr', fileContent)
# Normalize Dollars
dollarPattern = '[$]+'
fileContent = re.sub(dollarPattern, 'dollar', fileContent)
# Remove any non alphanumeric characters
nonWordPattern = '[^a-zA-Z0-9]'
fileContent = re.sub( nonWordPattern, ' ', fileContent)
# Replace multiple spaces with single space
spacePattern = "[ ]+"
fileContent = re.sub( spacePattern ,' ', fileContent)
# Words Stemming
try:
# Tokenize all of the words
words = word_tokenize(fileContent)
# Word Stemming
words = [ps.stem(x) for x in words]
        except:
            print("Some error occurred during stemming")
        # Initializing word_indices
        word_indices = []
        # Build the vocabulary list (vocab was previously referenced as an
        # undefined global)
        vocab = getVocabList()
        for w in words:
            # Constructing word_indices
            try:
                idx = vocab.index(w)
                word_indices.append(idx)
            except ValueError:
                # Word doesn't exist in the vocabulary
                continue
return word_indices
else:
# reading file has some problems
print("We have some problems in Reading File")
print(file["msg"])
def emailFeatures(word_indices):
# Total number of words in the dictionary
n = 1900
# creating feature vector
matrix = np.zeros((n,1))
# Mapping word_indices to feature vector
matrix[word_indices] = 1
return matrix
def findBestModel(X,y, Xval, yval):
# Initializing the Possible values for both C and Sigma
pValues = np.array([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30]);
# Creating matrix for holding the error of each model
error = np.zeros((len(pValues) ** 2,1))
# Computing model error for each permutation of the sigma and C
for i in range(len(pValues)):
for j in range(len(pValues)):
# Initializing The Model
model = svm.SVC(C=pValues[i] ,kernel= 'rbf' ,gamma= 2 * ( pValues[j] ** 2 ))
# Fitting Data to The Model
model.fit(X,y)
# Computing error of the Model on the Cross Validation Dataset
error[ i * len(pValues) + j ] = 1 - model.score(Xval, yval)
# Getting the minimum value index in error matrix
idx = np.argmin(error)
# Finding C, sigma for model with minimum error
i = np.floor(idx / len(pValues))
j = idx - i * len(pValues)
C = pValues[int(i)]
sigma = pValues[int(j)]
return { "C": C,
"sigma": sigma }
# -
# # Spam Classifier
# ## Load Data
# +
mat = scipy.io.loadmat('spamTrain.mat')
X = mat["X"][0:3400]
y = mat["y"].T[0][0:3400]
Xval = mat["X"][3400:4000]
yval = mat["y"].T[0][3400:4000]
# -
# ## Train The SVM
findBestModel(X,y,Xval,yval)
# +
# Initializing The Model
model = svm.SVC(C=10 ,kernel= 'rbf' ,gamma= 2 * ( 0.3 ** 2 ))
# Fitting Data to The Model
model.fit(X,y)
# -
model.score(Xval,yval)
# ## Find Best Model With Sklearn
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in modern scikit-learn
param_grid = { 'C' : [ 0.1, 0.4, 0.8, 2, 5, 10, 20, 40, 100, 200, 400, 1000], 'gamma' : [ 1, 0.1, 0.01, 0.001, 0.0001,]}
grid = GridSearchCV(svm.SVC(), param_grid, verbose= 3)
grid.fit(X,y)
model = svm.SVC(C=5, gamma=0.01, kernel='rbf')
model.fit(X,y)
model.score(Xval,yval)
| Coursera ML Course - AndrewNG/week 7/ex6/Spam Classifier ( Python ).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import json
from matplotlib.pyplot import imshow
import IPython.display
from PIL import Image
# ML Testing Steps
# Original Python Notebook with latest model and test images
# Rune execution within a notebook
# Runefile review etc
# git lfs for the data
# Runefile + TF Lite + Data/
# !cat Runefile.yml
# +
# !cargo rune graph Runefile.yml | dot -Tpng > style_transfer.png
IPython.display.Image('style_transfer.png')
# -
IPython.display.Image('content.jpg')
IPython.display.Image('style.jpg')
# !cargo r --package rune-cli --release -- build Runefile.yml
# RES = !cargo r --package rune-cli --release -- run ./style_transfer.rune --image /home/michael/Pictures/hotg.png --image style.jpg
# +
log_message = next(line for line in RES if 'Serial: ' in line)
*_, raw = log_message.split('Serial: ')
output = json.loads(raw)
arr = np.asarray(output['elements']) * 255
arr = arr.astype('uint8')
arr = np.reshape(arr, (384, 384, 3))
img = Image.fromarray(arr, 'RGB')
imshow(img);
# -
| examples/style_transfer/Style Predict.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Mike-Xie/DS-Unit-4-Sprint-3-Deep-Learning/blob/master/Facial_Recognition_Spring_Break_Workshop.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="qtSg39PFqgdm" colab_type="code" colab={}
# https://github.com/ageitgey/face_recognition
# !git clone https://github.com/davisking/dlib.git
# + id="NTGlrEaXsEm6" colab_type="code" colab={}
import os
os.chdir('dlib')  # change into the cloned repo (mkdir would fail: the clone already created it)
# + id="tK9d2BIss_ya" colab_type="code" colab={}
os.mkdir('build')
# + id="x8ltqclMtRzr" colab_type="code" colab={}
os.chdir('build')
# + id="heNM_eNgtTKY" colab_type="code" colab={}
# !cmake ..
# + id="DX2zXtgUtWOH" colab_type="code" colab={}
# !cmake --build .
# + id="gVceOKrBtcis" colab_type="code" colab={}
os.chdir('..')
# + id="2t1T-1w8qsjg" colab_type="code" colab={}
# !python setup.py install
# + id="fSO3WaoXrLUU" colab_type="code" colab={}
import dlib
# + id="3gS0pMcNtmp5" colab_type="code" colab={}
# !pip install face_recognition
# + id="uhzsJljluN-j" colab_type="code" colab={}
import face_recognition
known_image = face_recognition.load_image_file("biden.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")
biden_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]
results = face_recognition.compare_faces([biden_encoding], unknown_encoding)
| Facial_Recognition_Spring_Break_Workshop.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Melbourne households
# Collect the number of households and the population in each postcode
import numpy as np
import pandas as pd
# +
### Load population
# -
# Place of residence
poa_residence = pd.read_csv("../data/raw/2016 Census GCP Postal Areas for VIC/2016Census_G03_VIC_POA.csv", low_memory=False)
poa_residence.shape
poa_residence.head()
poa_residence.POA_CODE_2016.nunique()
poa_residence.columns
poa_residence = poa_residence[["POA_CODE_2016", "Total_Total"]]
### Family composition
poa_family = pd.read_csv("../data/raw/2016 Census GCP Postal Areas for VIC/2016Census_G25_VIC_POA.csv", low_memory=False)
poa_family.shape
poa_family.head()
poa_family.columns
poa_family = poa_family[["POA_CODE_2016", "Total_F"]]
poa_family = poa_family.merge(poa_residence, how="left", left_on="POA_CODE_2016", right_on="POA_CODE_2016")
poa_family.shape
poa_family.head(10)
poa_family.rename(columns={"POA_CODE_2016":"POA_CODE16","Total_F":"Total Households", "Total_Total": "Population"}, inplace=True)
# Remove POA in the name of POA_CODE
poa_family.POA_CODE16 = poa_family.POA_CODE16.apply(lambda x: x[-4:])
poa_family["Total Households"].describe()
poa_family[poa_family["Total Households"] == 0].shape
poa_family["Population"].describe()
poa_family[poa_family["Population"] == 0].shape
poa_family.to_csv("../data/processed/poa_households.csv", index=False)
| notebooks/04_Melbourne_households.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Attention Cues
# :label:`sec_attention-cues`
#
# Thank you for your attention to this book: attention is a scarce resource.
# At this moment you are reading this book (and ignoring the others),
# so your attention is being paid with an opportunity cost (much like money).
# To ensure that the attention you invest right now is worthwhile,
# we have put our full effort (and our full attention) into producing a good book.
#
# Ever since economics began studying the allocation of scarce resources, we have been
# in the era of the "attention economy", in which human attention is treated as an
# exchangeable, limited, valuable, and scarce commodity.
# Many business models have been developed to exploit this:
# on music or video streaming services, we either spend attention on ads or pay money to hide them;
# to grow in the world of online games, we either spend attention in battles, which helps
# attract new players, or pay money to become powerful instantly.
# In short, attention is not free.
#
# Attention is scarce, while distracting information in the environment is anything but.
# For instance, our visual system receives roughly $10^8$ bits of information per second,
# which far exceeds what the brain can fully process.
# Fortunately, our ancestors learned from experience (also known as data) that
# "not all sensory inputs are created equal".
# Throughout human history, this ability to direct attention to only a small fraction of
# the information of interest has enabled our brains to allocate resources more wisely to
# survive, grow, and socialize, for example to detect predators and to find food and mates.
#
# ## Attention Cues in Biology
#
# How is attention applied in the visual world?
# We start from the *two-component* framework that is widespread today:
# its origin dates back to William James in the 1890s,
# who is considered the "father of American psychology" :cite:`James.2007`.
# In this framework, subjects selectively direct the focus of attention
# based on *nonvolitional cues* and *volitional cues*.
#
# Nonvolitional cues are based on the saliency and conspicuity of objects in the environment.
# Imagine there are five objects in front of you:
# a newspaper, a research paper, a cup of coffee, a notebook, and a book,
# as in :numref:`fig_eye-coffee`.
# All the paper products are printed in black and white, but the coffee cup is red.
# In other words, the coffee cup is salient and conspicuous in this visual environment,
# involuntarily drawing your attention.
# So you bring the sharpest region of your vision onto the coffee,
# as shown in :numref:`fig_eye-coffee`.
#
# 
# :width:`400px`
# :label:`fig_eye-coffee`
#
# After drinking the coffee, you become caffeinated and want to read a book.
# So you turn your head, refocus your eyes, and look at the book,
# as depicted in :numref:`fig_eye-book`.
# Unlike the saliency-driven selection in :numref:`fig_eye-coffee`,
# choosing the book here is under cognitive and volitional control,
# so attention deployed with the aid of volitional cues is more deliberate.
# Driven by the subject's own will, the resulting selection is also more powerful.
#
# 
# :width:`400px`
# :label:`fig_eye-book`
#
# ## Queries, Keys, and Values
#
# Volitional and nonvolitional attention cues explain how humans deploy attention.
# Let us now see how these two kinds of cues can be used to design a framework
# for attention mechanisms in neural networks.
#
# First, consider the relatively simple case in which only nonvolitional cues are available.
# To bias selection toward certain sensory inputs,
# we can simply use a parameterized fully connected layer,
# or even a nonparameterized max or average pooling layer.
#
# Therefore, "whether volitional cues are included" is what distinguishes attention
# mechanisms from fully connected layers or pooling layers.
# In the context of attention mechanisms, we refer to volitional cues as *queries*.
# Given any query, an attention mechanism biases selection toward *sensory inputs*
# (e.g., intermediate feature representations) via *attention pooling*.
# In attention mechanisms, these sensory inputs are called *values*.
# More colloquially, every value is paired with a *key*,
# which can be thought of as the nonvolitional cue of that sensory input.
# As shown in :numref:`fig_qkv`, we can design attention pooling
# so that a given query (volitional cue) is matched against the keys (nonvolitional cues),
# which guides selection toward the best-matched values (sensory inputs).
#
# 
# :label:`fig_qkv`
#
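The query-key-value matching described above can be sketched numerically. The following is our own toy illustration (a Gaussian-kernel attention pooling over four key-value pairs; it is not code from this book):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

keys = np.array([0.0, 1.0, 2.0, 3.0])        # nonvolitional cues
values = np.array([10.0, 20.0, 30.0, 40.0])  # sensory inputs, paired with keys
query = 2.0                                  # volitional cue

# Attention weights: higher when the query is closer to a key
weights = softmax(-(query - keys) ** 2)
# Attention pooling: weighted average of the values
output = weights @ values
print(round(float(output), 2))
```

The weights sum to one, the key closest to the query receives the largest weight, and the pooled output is pulled toward the corresponding value.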
# Given the dominance of the framework in :numref:`fig_qkv` described above,
# models under this framework will be the center of this chapter.
# However, there are many alternative designs for attention mechanisms.
# For instance, one can design a non-differentiable attention model
# that is trained with reinforcement learning methods :cite:`Mnih.Heess.Graves.ea.2014`.
#
# ## Visualization of Attention
#
# An average pooling layer can be viewed as a weighted average of its inputs,
# where all the weights are equal.
# In practice, attention pooling computes a weighted average of the values,
# where the weights are computed between the given query and the different keys.
#
# + origin_pos=3 tab=["tensorflow"]
import tensorflow as tf
from d2l import tensorflow as d2l
# + [markdown] origin_pos=4
# To visualize attention weights, we define the `show_heatmaps` function.
# Its input `matrices` has the shape
# (number of rows for display, number of columns for display, number of queries, number of keys).
#
# + origin_pos=5 tab=["tensorflow"]
#@save
def show_heatmaps(matrices, xlabel, ylabel, titles=None, figsize=(2.5, 2.5),
cmap='Reds'):
    """Show heatmaps of matrices."""
d2l.use_svg_display()
num_rows, num_cols = matrices.shape[0], matrices.shape[1]
fig, axes = d2l.plt.subplots(num_rows, num_cols, figsize=figsize,
sharex=True, sharey=True, squeeze=False)
for i, (row_axes, row_matrices) in enumerate(zip(axes, matrices)):
for j, (ax, matrix) in enumerate(zip(row_axes, row_matrices)):
pcm = ax.imshow(matrix.numpy(), cmap=cmap)
if i == num_rows - 1:
ax.set_xlabel(xlabel)
if j == 0:
ax.set_ylabel(ylabel)
if titles:
ax.set_title(titles[j])
fig.colorbar(pcm, ax=axes, shrink=0.6);
# + [markdown] origin_pos=6
# Below we demonstrate with a simple example.
# In this example, the attention weight is one only when the query and the key are the same; otherwise it is zero.
#
# + origin_pos=7 tab=["tensorflow"]
attention_weights = tf.reshape(tf.eye(10), (1, 1, 10, 10))
show_heatmaps(attention_weights, xlabel='Keys', ylabel='Queries')
# + [markdown] origin_pos=8
# In later chapters, we will often invoke the `show_heatmaps` function to display attention weights.
#
# ## Summary
#
# * Human attention is a limited, valuable, and scarce resource.
# * Subjects selectively direct attention using both nonvolitional and volitional cues. The former is based on saliency; the latter depends on conscious intent.
# * What distinguishes attention mechanisms from fully connected layers or pooling layers is the inclusion of volitional cues (queries).
# * Attention mechanisms bias selection toward values (sensory inputs) via attention pooling, which incorporates queries (volitional cues) and keys (nonvolitional cues). Keys and values are paired.
# * We can visualize the attention weights between queries and keys.
#
# ## Exercises
#
# 1. What can the volitional cue be when decoding a sequence token by token in machine translation? What are the nonvolitional cues and the sensory inputs?
# 1. Randomly generate a $10 \times 10$ matrix and use the `softmax` operation to ensure that each row is a valid probability distribution, then visualize the output attention weights.
#
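A possible sketch for the second exercise, using numpy in place of tensorflow (the variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.random((10, 10))  # random 10x10 matrix

# Row-wise softmax so that every row is a valid probability distribution
e = np.exp(m - m.max(axis=1, keepdims=True))
attention = e / e.sum(axis=1, keepdims=True)

# Each row now sums to 1 and all entries are positive
print(np.allclose(attention.sum(axis=1), 1.0))
```

Reshaping `attention` to (1, 1, 10, 10) would match the input shape expected by a heatmap helper like the one defined earlier.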
# + [markdown] origin_pos=11 tab=["tensorflow"]
# [Discussions](https://discuss.d2l.ai/t/5765)
#
| tensorflow/chapter_attention-mechanisms/attention-cues.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NBA Game Outcome Projections
# The goal of this project is to project the outcomes of NBA games. Having successfully wrangled the data, I will transform it into more meaningful variables using my domain knowledge, and then do some EDA to make sure the data is valid and normally distributed before moving on to modeling.
import pandas as pd
import seaborn as sns
import numpy as np
import warnings
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
df = pd.read_csv("clean.csv")
# make a 'game_id' column for index
df = df.rename(mapper={"Unnamed: 0":"game_id"},axis=1)
df = df.set_index(keys='game_id', drop=True)
# drop teams' first game of the season when they have no stats from which to predict
to_drop = df.loc[(df['away_win'] == 0) & (df['away_loss'] == 0)].index.values
to_drop2 = df.loc[(df['home_win'] == 0) & (df['home_loss'] == 0)].index.values
drop = np.concatenate((to_drop,to_drop2))
df = df.drop(index=drop)
# make individual stat columns more valuable/scaled
df_new = pd.DataFrame()
df_new['home_win'] = df.HOME_TEAM_WINS
df_new['h_fg2%'] = df.home_FG2M / df.home_FG2A
df_new['a_fg2%'] = df.away_FG2M / df.away_FG2A
df_new['h_fg3%'] = df.home_FG3M / df.home_FG3A
df_new['a_fg3%'] = df.away_FG3M / df.away_FG3A
df_new['h_2:3ratio'] = df.home_FG2A / df.home_FG3A
df_new['a_2:3ratio'] = df.away_FG2A / df.away_FG3A
df_new['h_fta/g'] = df.home_FTA / (df.home_win + df.home_loss)
df_new['a_fta/g'] = df.away_FTA / (df.away_win + df.away_loss)
df_new['h_ft%'] = df.home_FTM / df.home_FTA
df_new['a_ft%'] = df.away_FTM / df.away_FTA
df_new['h_ast/g'] = df.home_AST / (df.home_win + df.home_loss)
df_new['a_ast/g'] = df.away_AST / (df.away_win + df.away_loss)
df_new['h_to/g'] = df.home_TO / (df.home_win + df.home_loss)
df_new['a_to/g'] = df.away_TO / (df.away_win + df.away_loss)
df_new['h_oreb/g'] = df.home_OREB / (df.home_win + df.home_loss)
df_new['a_oreb/g'] = df.away_OREB / (df.away_win + df.away_loss)
df_new['h_dreb/g'] = df.home_DREB / (df.home_win + df.home_loss)
df_new['a_dreb/g'] = df.away_DREB / (df.away_win + df.away_loss)
df_new['h_blk/g'] = df.home_BLK / (df.home_win + df.home_loss)
df_new['a_blk/g'] = df.away_BLK / (df.away_win + df.away_loss)
df_new['h_stl/g'] = df.home_STL / (df.home_win + df.home_loss)
df_new['a_stl/g'] = df.away_STL / (df.away_win + df.away_loss)
df_new['h_pf/g'] = df.home_PF / (df.home_win + df.home_loss)
df_new['a_pf/g'] = df.away_PF / (df.away_win + df.away_loss)
df_new['h_bayeswin%'] = (1+df.home_win) / (2+ df.home_win + df.home_loss)
df_new['a_bayeswin%'] = (1+df.away_win) / (2+ df.away_win + df.away_loss)
df_new.describe().T
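The `h_bayeswin%`/`a_bayeswin%` columns above apply add-one (Laplace) smoothing so that early-season records are pulled toward .500; a minimal standalone sketch of the same formula:

```python
def bayes_win_pct(wins, losses):
    # Add-one (Laplace) smoothed winning percentage: (1 + W) / (2 + W + L)
    return (1 + wins) / (2 + wins + losses)

print(bayes_win_pct(0, 0))  # a team with no games yet is treated as 0.5
print(bayes_win_pct(3, 1))  # 4/6 ~ 0.667 instead of a raw 0.75
```

The smoothing shrinks extreme records built on few games, which is exactly the failure mode of a raw win percentage early in the season.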
# make a new df that takes the difference between each competing
# team's stats and extracts that as the potentially predictive
# variable
df_compare = pd.DataFrame()
df_compare['fg2%'] = df_new['h_fg2%'] - df_new['a_fg2%']
df_compare['fg3%'] = df_new['h_fg3%'] - df_new['a_fg3%']
df_compare['2:3ratio'] = df_new['h_2:3ratio'] - df_new['a_2:3ratio']
df_compare['fta/g'] = df_new['h_fta/g'] - df_new['a_fta/g']
df_compare['ft%'] = df_new['h_ft%'] - df_new['a_ft%']
df_compare['ast'] = df_new['h_ast/g'] - df_new['a_ast/g']
df_compare['to'] = df_new['h_to/g'] - df_new['a_to/g']
df_compare['oreb'] = df_new['h_oreb/g'] - df_new['a_oreb/g']
df_compare['dreb'] = df_new['h_dreb/g'] - df_new['a_dreb/g']
df_compare['blk'] = df_new['h_blk/g'] - df_new['a_blk/g']
df_compare['stl'] = df_new['h_stl/g'] - df_new['a_stl/g']
df_compare['pf'] = df_new['h_pf/g'] - df_new['a_pf/g']
df_compare['bayes_win%'] = df_new['h_bayeswin%'] - df_new['a_bayeswin%']
df_compare['home_win'] = df_new['home_win']
df_compare.describe().T
# before using this df for modeling, I want to check that the
# distributions of all of our variables are roughly
# normally distributed
_ = plt.figure(figsize=(15,10))
_ = sns.histplot(df_compare['fg2%'])
_ = plt.xlim(-0.2,0.2)
_ = plt.ylabel('Games')
_ = plt.title('Distribution of 2-Point Field Goal Percentage Differences (Home - Away)')
_ = plt.figure(figsize=(15,10))
_ = sns.histplot(df_compare['fg3%'])
_ = plt.xlim(-0.4,0.4)
_ = plt.ylabel('Games')
_ = plt.title('Distribution of 3-Point Field Goal Percentage Differences (Home - Away)')
_ = plt.figure(figsize=(15,10))
_ = sns.histplot(df_compare['2:3ratio'])
_ = plt.xlim(-12,12)
_ = plt.ylabel('Games')
_ = plt.title('Distribution of 2:3 Point Shot Attempts Ratio Differences (Home - Away)')
_ = plt.figure(figsize=(15,10))
_ = sns.histplot(df_compare['fta/g'])
_ = plt.ylabel('Games')
_ = plt.title('Distribution of Free Throw Attempts per Game Differences (Home - Away)')
_ = plt.figure(figsize=(15,10))
_ = sns.histplot(df_compare['ft%'])
_ = plt.ylabel('Games')
_ = plt.title('Distribution of Free Throw Percentage Differences (Home - Away)')
_ = plt.figure(figsize=(15,10))
_ = sns.histplot(df_compare['ast'])
_ = plt.ylabel('Games')
_ = plt.title('Distribution of Assists per Game Differences (Home - Away)')
_ = plt.figure(figsize=(15,10))
_ = sns.histplot(df_compare['to'])
_ = plt.xlim(-15,15)
_ = plt.ylabel('Games')
_ = plt.title('Distribution of Turnover per Game Differences (Home - Away)')
_ = plt.figure(figsize=(15,10))
_ = sns.histplot(df_compare['oreb'])
_ = plt.xlim(-15,15)
_ = plt.ylabel('Games')
_ = plt.title('Distribution of Offensive Rebounds per Game Differences (Home - Away)')
_ = plt.figure(figsize=(15,10))
_ = sns.histplot(df_compare['dreb'])
_ = plt.xlim(-20,20)
_ = plt.ylabel('Games')
_ = plt.title('Distribution of Defensive Rebounds per Game Differences (Home - Away)')
_ = plt.figure(figsize=(15,10))
_ = sns.histplot(df_compare['blk'])
_ = plt.ylabel('Games')
_ = plt.title('Distribution of Blocks per Game Differences (Home - Away)')
_ = plt.figure(figsize=(15,10))
_ = sns.histplot(df_compare['stl'])
_ = plt.ylabel('Games')
_ = plt.title('Distribution of Steals per Game Differences (Home - Away)')
_ = plt.figure(figsize=(15,10))
_ = sns.histplot(df_compare['pf'])
_ = plt.xlim(-18,18)
_ = plt.ylabel('Games')
_ = plt.title('Distribution of Personal Fouls per Game Differences (Home - Away)')
_ = plt.figure(figsize=(15,10))
_ = sns.histplot(df_compare['bayes_win%'])
_ = plt.ylabel('Games')
_ = plt.title('Distribution of Bayesian Win Percentage Differences (Home - Away)')
# All of our variables' distributions look very close to normally distributed. There are a few outliers, which likely represent very early season games when both teams have accumulated fewer stats, leading to more highly variable differences between the two teams. I will not take these out of the dataset, since they are valid datapoints that are rare enough that I don't think they will have an outsized effect on the dataset.
# heatmap, to check data relationships
_ = plt.figure(figsize=(15,10))
_ = sns.heatmap(df_compare.corr(),cmap="YlGnBu",annot=True)
_ = plt.title("Variable Correlations")
plt.savefig("heatmap.png")
# Since these are the differences between the home and away team's stats entering each individual game, the row/column we are interested in is the bottom/left-most one, checking for correlation with our binary classification response variable, home_win.
#
# So, the following stats have the highest correlation with the home team winning when the home team enters a game with a higher value than its opponent
# (in order of importance):
# 1. Winning %
# 2. 2-point FG%
# 3. Assists per Game
# 4. Defensive Rebounds per Game
# 5. 3-point FG%
# 6. Blocks per Game
#
# I will also do some PCA or Lasso to verify those results and disentangle any multicollinearity in future project steps. These results are super interesting to me though! The winning percentage being predictive is expected, but I would not have expected assists to be so predictive.
melt = pd.melt(df_compare, "home_win", var_name="measurement")
f, ax = plt.subplots()
_ = plt.title("Variables' Relationships to Home Wins")
_ = sns.stripplot(x="value", y="measurement", hue="home_win",
data=melt, alpha=.25, zorder=1)
_ = sns.pointplot(x="value", y="measurement", data=melt, hue='home_win',join=False)
sns.despine(bottom=True, left=True)
handles, labels = ax.get_legend_handles_labels()
_ = ax.legend(title="Home Win")
_ = plt.savefig("pointplot.png")
sns.set_theme(style="darkgrid")
_ = plt.figure(figsize=(30,16.875))
_ = plt.title("Variables' Relationships to Home Wins")
_ = sns.pointplot(x="value", y="measurement", data=melt, hue='home_win',join=False)
sns.despine(bottom=True, left=True)
_ = plt.savefig("pointplot.png")
# transform compare dataframe to standardize variable strengths
X = df_compare.drop(['home_win'], axis=1)
scaler = StandardScaler().fit(X)
X = pd.DataFrame(scaler.transform(X))
X['home_win'] = df_compare['home_win']
X.columns = df_compare.columns
# melt standardized variable relationships df to make new point plot reflective of relationship strengths
normal_melt = pd.melt(X, "home_win", var_name="measurement")
sns.set_theme(style="darkgrid")
_ = plt.figure(figsize=(15,10))
_ = plt.title("Normalized Variables' Relationships to Home Wins")
_ = sns.pointplot(x="value", y="measurement", data=normal_melt, hue='home_win',join=False)
sns.despine(bottom=True, left=True)
_ = plt.savefig("Normalized_Pointplot.png")
# export to csv
df_compare.to_csv("compare.csv",index=False)
| 3_EDA/.ipynb_checkpoints/NBA Projections EDA-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Agenda
# - Vocabulary
# - Richness
# - Filtering by length
# - Frequency distribution
# - Bigrams
# - Concordance
# - Similarity
# - Dispersion plot
# - Wordcloud 2.0
import nltk
from nltk.book import *
# # Vocabulary
# - How large is the vocabulary of <NAME>?
len(set(text1))
# how many distinct words are in the text (set(text1)); i.e., each word is
# counted only once
# - And that of Sense and Sensibility by <NAME>?
len(set(text2))
# +
from nltk.corpus import udhr
import nltk
nltk.download('punkt')
sample = (" ").join(udhr.words("German_Deutsch-Latin1"))
tokens = nltk.word_tokenize(sample)
menschenrechtserklaerungs_text = nltk.Text(tokens)
len(set(menschenrechtserklaerungs_text))
# -
# # Richness
# - Which text is the richest, i.e., has the largest vocabulary per word?
# - The maximum ratio is 1: 100 distinct words / 100 words in total = 1
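The ratio described above (the type-token ratio) can be captured in a tiny helper; a self-contained sketch with a made-up example sentence:

```python
def lexical_richness(tokens):
    # Distinct words divided by total words: 1.0 means every word is unique
    return len(set(tokens)) / float(len(tokens))

tokens = "the cat sat on the mat".split()
print(lexical_richness(tokens))  # 5 distinct words out of 6 total
```

Note that the ratio tends to fall as texts get longer, which is why comparing texts of very different lengths is not entirely fair.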
# <NAME>
len(set(text1)) / float(len(text1))
# <NAME>
len(set(text2)) / float(len(text2))
# Universal Declaration of Human Rights
len(set(menschenrechtserklaerungs_text)) / float(len(menschenrechtserklaerungs_text))
# Obama inauguration
# Note: len(set("2009-Obama.txt")) would measure the characters of the file
# name string, not the speech, so we load the text from the inaugural corpus.
from nltk.corpus import inaugural
# nltk.download('inaugural')
obama = inaugural.words("2009-Obama.txt")
len(set(obama)) / float(len(obama))
# Bush Jr. inauguration
bush = inaugural.words("2005-Bush.txt")
len(set(bush)) / float(len(bush))
# Kennedy inauguration
kennedy = inaugural.words("1961-Kennedy.txt")
len(set(kennedy)) / float(len(kennedy))
# Apparently the Declaration of Human Rights is the richest, or the most complicated? (It is not a fair comparison :)
# # Filtering
# - by words of a certain length
# <NAME>
V = set(text1)
long_words = [w for w in V if len(w) > 15]
sorted(long_words)
# Universal Declaration of Human Rights
V = set(menschenrechtserklaerungs_text)
long_words = [w for w in V if len(w) > 15]
sorted(long_words)
# - by length and frequency
# Words in Moby Dick that are more than 12 letters long and occur more than 8 times
fdist1 = FreqDist(text1)
sorted(w for w in set(text1) if len(w) > 12 and fdist1[w] > 8)
# +
# Frequency distribution: how often a word occurs in the text
fdist1 = FreqDist(text1)
fdist1
# +
#
# -
# # Distribution
# - Which word occurs most frequently in Moby Dick?
from IPython.display import Image
Image("frequency.png")
fdist = FreqDist(text1)
fdist
# +
#fdist.plot(30, cumulative=True)
# -
# - Each additional word occurs less and less often (diminishing returns)
# - Which word occurs most frequently in my sentence?
sample = '''Ich war heute im wald spazieren und sah ein reh. Wobeich ich nicht sicher war ob es ein Reh war oder ein Geist.'''
tokens = nltk.word_tokenize(sample)
text = nltk.Text(tokens)
fdist = FreqDist(text)
fdist
psalm = '''Trittst im Morgenrot daher,
Seh'ich dich im Strahlenmeer,
Dich, du Hocherhabener, Herrlicher!
Wenn der Alpenfirn sich rötet,
Betet, freie Schweizer, betet!
Eure fromme Seele ahnt
Eure fromme Seele ahnt
Gott im hehren Vaterland,
Gott, den Herrn, im hehren Vaterland. Kommst im Abendglühn daher,
Find'ich dich im Sternenheer,
Dich, du Menschenfreundlicher, Liebender!
In des Himmels lichten Räumen
Kann ich froh und selig träumen!
Denn die fromme Seele ahnt
Denn die fromme Seele ahnt
Gott im hehren Vaterland,
Gott, den Herrn, im hehren Vaterland. Ziehst im Nebelflor daher,
Such'ich dich im Wolkenmeer,
Dich, du Unergründlicher, Ewiger!
Aus dem grauen Luftgebilde
Tritt die Sonne klar und milde,
Und die fromme Seele ahnt
Und die fromme Seele ahnt
Gott im hehren Vaterland,
Gott, den Herrn, im hehren Vaterland. Fährst im wilden Sturm daher,
Bist du selbst uns Hort und Wehr,
Du, allmächtig Waltender, Rettender!
In Gewitternacht und Grauen
Lasst uns kindlich ihm vertrauen!
Ja, die fromme Seele ahnt,
Ja, die fromme Seele ahnt,
Gott im hehren Vaterland,
Gott, den Herrn, im hehren Vaterland.'''
tokens = nltk.word_tokenize(psalm)
text = nltk.Text(tokens)
fdist = FreqDist(text)
fdist
# ### Exercise
# - Create your own text or copy one from the internet and count which words occur most frequently. Are the top 3 positions similar for you?
# # Bigrams
# - Which words frequently occur together?
nltk.download('stopwords')
text1.collocations()
menschenrechtserklaerungs_text.collocations()
# ### Advanced Trigrams
# - Bigrams with their frequency in the Declaration of Human Rights
# +
from nltk.collocations import *
bigram_measures = nltk.collocations.BigramAssocMeasures()
trigram_measures = nltk.collocations.TrigramAssocMeasures()
finder = BigramCollocationFinder.from_words(menschenrechtserklaerungs_text)
finder.apply_freq_filter(3) # reduce to bigrams that appeared at least 3 times
finder.nbest(bigram_measures.pmi, 10)
# -
# - Trigrams in the Declaration of Human Rights
# - more examples at http://www.nltk.org/howto/collocations.html
# +
finder = TrigramCollocationFinder.from_words(menschenrechtserklaerungs_text)
finder.apply_freq_filter(3) # reduce to trigrams that appeared at least 3 times
finder.nbest(trigram_measures.pmi, 10)
# -
# # Concordance
# - In which contexts does "monstrous" appear in the text by <NAME>?
text1.concordance("monstrous")
# - In which contexts does "Menschenrechte" (human rights) appear in the Declaration of Human Rights?
menschenrechtserklaerungs_text.concordance("Menschenrechte")
# ## Similarity: which words are used in the same context?
# - In <NAME>?
text1.similar("monstrous")
# - In <NAME>?
text2.similar("monstrous")
# - In the Declaration of Human Rights?
menschenrechtserklaerungs_text.similar("Menschenrechte")
# ## Dispersion plot
# - Which words co-occur, and where in the text?
# %matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(15,8))
text1.dispersion_plot(["monstrous", "whale", "ship", "wind", "water", "Ahab"])
plt.figure(figsize=(15,8))
menschenrechtserklaerungs_text.dispersion_plot(["Menschenrechte", "Arbeit", "Freiheit", "Mensch"])
# ## Exercise:
# - Try a dispersion plot for a text of your choice.
# %matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(15,8))
text1.dispersion_plot(["whalers", "Moby", "ship", "dead", "water", "Ahab"])
# Load Obama's 2009 inaugural address from the corpus (tokenizing the file
# name string "2009-Obama.txt" itself would be a bug)
from nltk.corpus import inaugural
text3 = inaugural.raw("2009-Obama.txt")
tokens = nltk.word_tokenize(text3)
text = nltk.Text(tokens)
fdist = FreqDist(text)
fdist
inaugural.fileids()
| Vertiefungstage3_4/13 Text Teil 1/1.2 Descriptives.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Just-in-time compilation (JIT)
# ====
#
# For programmer productivity, it often makes sense to code the majority of your application in a high-level language such as Python and only optimize code bottlenecks identified by profiling. One way to speed up these bottlenecks is to compile the code to machine executables, often via an intermediate C or C-like stage. There are two common approaches to compiling Python code - using a Just-In-Time (JIT) compiler and using Cython for Ahead of Time (AOT) compilation.
#
# This notebook mostly illustrates the JIT approach.
# **References**
#
# - [Numba](http://numba.pydata.org)
# - [The need for speed without bothering too much: An introduction to numba](http://nbviewer.jupyter.org/github/akittas/presentations/blob/master/pythess/numba/numba.ipynb?utm_source=newsletter_mailer&utm_medium=email&utm_campaign=weekly)
#
# Tips for speeding up [`numba`](https://numba.pydata.org/numba-doc/latest/user/performance-tips.html)
# %matplotlib inline
import matplotlib.pyplot as plt
# **Utility function for timing functions**
#
# We write helper functions for timing as an alternative to `timeit`.
import time
from numpy.testing import assert_almost_equal
def timer(f, *args, **kwargs):
start = time.time()
ans = f(*args, **kwargs)
return ans, time.time() - start
def report(fs, *args, **kwargs):
    # Print each function's speedup relative to the first function in fs
    ans, t = timer(fs[0], *args, **kwargs)
    print('%s: %.1f' % (fs[0].__name__, 1.0))  # baseline
    for f in fs[1:]:
        ans_, t_ = timer(f, *args, **kwargs)
        print('%s: %.1f' % (f.__name__, t/t_))
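# As a quick sanity check, the `timer` helper can be exercised on a toy function (the function and its 0.02 s sleep are made up for illustration):

```python
import time

def timer(f, *args, **kwargs):
    # Return the function's result together with its wall-clock runtime
    start = time.time()
    ans = f(*args, **kwargs)
    return ans, time.time() - start

def double_slowly(n):
    time.sleep(0.02)  # simulate some work
    return 2 * n

ans, t = timer(double_slowly, 21)
print(ans)  # 42
print('%.3f s' % t)
```

# The same pattern drives `report`, which times a list of functions and prints each one's speedup relative to the first.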
# Using `numexpr`
# ----
#
# One of the simplest approaches is to use [`numexpr`](https://github.com/pydata/numexpr), which takes a `numpy` expression written as a string and compiles a more efficient version of it. If a single simple expression is taking too long, this is a good choice due to its simplicity. However, it is quite limited.
import numpy as np
a = np.random.random(int(1e6))
b = np.random.random(int(1e6))
c = np.random.random(int(1e6))
# %timeit -r3 -n3 b**2 - 4*a*c
import numexpr as ne
# %timeit -r3 -n3 ne.evaluate('b**2 - 4*a*c')
# Using `numba`
# ----
#
# When it works, the JIT `numba` can speed up Python code tremendously with minimal effort.
#
# [Documentation for `numba`](http://numba.pydata.org/numba-doc/0.12.2/index.html)
# ### Example 1
# #### Plain Python version
def matrix_multiply(A, B):
m, n = A.shape
n, p = B.shape
C = np.zeros((m, p))
for i in range(m):
for j in range(p):
for k in range(n):
C[i,j] += A[i,k] * B[k, j]
return C
A = np.random.random((30, 50))
B = np.random.random((50, 40))
# #### Numba jit version
import numba
from numba import jit
@jit
def matrix_multiply_numba(A, B):
m, n = A.shape
n, p = B.shape
C = np.zeros((m, p))
for i in range(m):
for j in range(p):
for k in range(n):
C[i,j] += A[i,k] * B[k, j]
return C
# We can remove the cost of indexing the output matrix in the inner loop by accumulating into a local scalar
@jit
def matrix_multiply_numba2(A, B):
m, n = A.shape
n, p = B.shape
C = np.zeros((m, p))
for i in range(m):
for j in range(p):
d = 0.0
for k in range(n):
d += A[i,k] * B[k, j]
C[i,j] = d
return C
# %timeit -r3 -n3 matrix_multiply(A, B)
# %timeit -r3 -n3 matrix_multiply_numba(A, B)
# %timeit -r3 -n3 matrix_multiply_numba2(A, B)
# #### Numpy version
def matrix_multiply_numpy(A, B):
return A.dot(B)
# #### Check that outputs are the same
assert_almost_equal(matrix_multiply(A, B), matrix_multiply_numba(A, B))
assert_almost_equal(matrix_multiply(A, B), matrix_multiply_numpy(A, B))
# %timeit -r3 -n3 matrix_multiply_numba(A, B)
report([matrix_multiply, matrix_multiply_numba, matrix_multiply_numba2, matrix_multiply_numpy], A, B)
# ### Pre-compilation by giving specific signature
@jit('double[:,:](double[:,:], double[:,:])')
def matrix_multiply_numba_1(A, B):
m, n = A.shape
n, p = B.shape
C = np.zeros((m, p))
for i in range(m):
for j in range(p):
d = 0.0
for k in range(n):
d += A[i,k] * B[k, j]
C[i,j] = d
return C
# %timeit -r3 -n3 matrix_multiply_numba2(A, B)
# %timeit -r3 -n3 matrix_multiply_numba_1(A, B)
# ### Example 2: Using nopython
# #### Vectorized Python version
def mc_pi(n):
x = np.random.uniform(-1, 1, (n,2))
return 4*np.sum((x**2).sum(1) < 1)/n
n = int(1e6)
mc_pi(n)
# %timeit mc_pi(n)
# #### Numba on vectorized version
@jit
def mc_pi_numba(n):
x = np.random.uniform(-1, 1, (n,2))
return 4*np.sum((x**2).sum(1) < 1)/n
# %timeit mc_pi_numba(n)
# #### Using nopython
#
# Using nopython, either with the `@njit` decorator or with `@jit(nopython=True)`, tells `numba` not to use any Python objects in the compiled code, only native types. If `numba` cannot do this, it will raise an error. It is usually worth running in this mode, so you are aware of the bottlenecks in your code.
@jit(nopython=True)
def mc_pi_numba_njit(n):
x = np.random.uniform(-1, 1, (n,2))
return 4*np.sum((x**2).sum(1) < 1)/n
# %timeit mc_pi_numba_njit(n)
# #### Numba on unrolled version
@jit(nopython=True)
def mc_pi_numba_unrolled(n):
s = 0
for i in range(n):
x = np.random.uniform(-1, 1)
y = np.random.uniform(-1, 1)
if (x*x + y*y) < 1:
s += 1
return 4*s/n
mc_pi_numba_unrolled(n)
# %timeit -r3 -n3 mc_pi_numba_unrolled(n)
# ### Using cache=True
#
# This stores the compiled function in a file and avoids re-compilation on re-running a Python program.
@jit(nopython=True, cache=True)
def mc_pi_numba_unrolled_cache(n):
s = 0
for i in range(n):
x = np.random.uniform(-1, 1)
y = np.random.uniform(-1, 1)
if (x*x + y*y) < 1:
s += 1
return 4*s/n
# %timeit -r3 -n3 mc_pi_numba_unrolled_cache(n)
# ### Simple parallel loops with `numba`
from numba import njit, prange
@njit()
def sum_rows_range(A):
s = 0
for i in range(A.shape[0]):
s += np.sum(np.exp(np.log(np.sqrt(A[i]**2.0))))
return s
@njit(parallel=True)
def sum_rows_prange(A):
s = 0
for i in prange(A.shape[0]):
s += np.sum(np.exp(np.log(np.sqrt(A[i]**2.0))))
return s
A = np.random.randint(0, 10, (800, 100000))
A.shape
# Run once so that compile time is excluded from the benchmarks
sum_rows_range(A), sum_rows_prange(A)
# +
# %%time
sum_rows_range(A)
# +
# %%time
sum_rows_prange(A)
# -
# Using numba vectorize and guvectorize
# ----
#
# Sometimes it is convenient to use `numba` to convert functions to vectorized functions for use in `numpy`. See [documentation](http://numba.pydata.org/numba-doc/dev/user/vectorize.html) for details.
from numba import int32, int64, float32, float64
# ### Using `vectorize`
@numba.vectorize()
def f(x, y):
return np.sqrt(x**2 + y**2)
xs = np.random.random(10)
ys = np.random.random(10)
np.array([np.sqrt(x**2 + y**2) for (x, y) in zip(xs, ys)])
f(xs, ys)
# ### Adding function signatures
@numba.vectorize([float64(float64, float64),
float32(float32, float32),
float64(int64, int64),
float32(int32, int32)])
def f_sig(x, y):
return np.sqrt(x**2 + y**2)
f_sig(xs, ys)
# ### Using `guvectorize`
# **Create our own version of inner1d**
#
# Suppose we have two matrices, each with `m` rows. We may want to calculate a "row-wise" inner product, that is, generate a scalar for each pair of row vectors. We cannot use `@vectorize` because the elements are not scalars.
#
# The *layout* `(n),(n)->()` says that the function to be vectorized takes two `n`-element one-dimensional arrays `(n)` and returns a scalar `()`. The type *signature* is a list that matches the order of the *layout*.
@numba.guvectorize([(float64[:], float64[:], float64[:])], '(n),(n)->()')
def nb_inner1d(u, v, res):
res[0] = 0
for i in range(len(u)):
res[0] += u[i]*v[i]
xs = np.random.random((3,4))
nb_inner1d(xs, xs)
# **Check**
from numpy.core.umath_tests import inner1d
inner1d(xs,xs)
# #### Alternative to deprecated `inner1d` using Einstein summation notation
#
# For more on how to use Einstein notation, see the help documentation and [here](https://rockt.github.io/2018/04/30/einsum)
np.einsum('ij,ij->i', xs, xs)
# %timeit -r3 -n3 nb_inner1d(xs, xs)
# %timeit -r3 -n3 inner1d(xs, xs)
# **Create our own version of matrix_multiply**
@numba.guvectorize([(int64[:,:], int64[:,:], int64[:,:])],
'(m,n),(n,p)->(m,p)')
def nb_matrix_multiply(u, v, res):
m, n = u.shape
n, p = v.shape
for i in range(m):
for j in range(p):
res[i,j] = 0
for k in range(n):
res[i,j] += u[i,k] * v[k,j]
xs = np.random.randint(0, 10, (5, 2, 3))
ys = np.random.randint(0, 10, (5, 3, 2))
nb_matrix_multiply(xs, ys)
# **Check**
from numpy.core.umath_tests import matrix_multiply
matrix_multiply(xs, ys)
# %timeit -r3 -n3 nb_matrix_multiply(xs, ys)
# %timeit -r3 -n3 matrix_multiply(xs, ys)
# ## Parallelization with vectorize and guvectorize
#
# If you have an NVIDIA graphics card and CUDA drivers installed, you can also use `target='cuda'`.
@numba.vectorize([float64(float64, float64),
float32(float32, float32),
float64(int64, int64),
float32(int32, int32)],
target='parallel')
def f_parallel(x, y):
return np.sqrt(x**2 + y**2)
xs = np.random.random(int(1e8))
ys = np.random.random(int(1e8))
# %timeit -r3 -n3 f(xs, ys)
# %timeit -r3 -n3 f_parallel(xs, ys)
# ### Mandelbrot example with `numba`
# **Pure Python**
# color function for point at (x, y)
def mandel(x, y, max_iters):
c = complex(x, y)
z = 0.0j
for i in range(max_iters):
z = z*z + c
if z.real*z.real + z.imag*z.imag >= 4:
return i
return max_iters
def create_fractal(xmin, xmax, ymin, ymax, image, iters):
height, width = image.shape
pixel_size_x = (xmax - xmin)/width
pixel_size_y = (ymax - ymin)/height
for x in range(width):
real = xmin + x*pixel_size_x
for y in range(height):
imag = ymin + y*pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
# +
gimage = np.zeros((1024, 1536), dtype=np.uint8)
xmin, xmax, ymin, ymax = np.array([-2.0, 1.0, -1.0, 1.0]).astype('float32')
iters = 50
start = time.time()
create_fractal(xmin, xmax, ymin, ymax, gimage, iters)
dt = time.time() - start
print("Mandelbrot created on CPU in %f s" % dt)
plt.grid(False)
plt.imshow(gimage, cmap='jet')
pass
# -
# **Numba**
from numba import uint32, float32
# **The jit decorator can also be called as a regular function**
mandel_numba = jit(uint32(float32, float32, uint32))(mandel)
@jit
def create_fractal_numba(xmin, xmax, ymin, ymax, image, iters):
height, width = image.shape
pixel_size_x = (xmax - xmin)/width
pixel_size_y = (ymax - ymin)/height
for x in range(width):
real = xmin + x*pixel_size_x
for y in range(height):
imag = ymin + y*pixel_size_y
color = mandel_numba(real, imag, iters)
image[y, x] = color
# +
gimage = np.zeros((1024, 1536), dtype=np.uint8)
xmin, xmax, ymin, ymax = np.array([-2.0, 1.0, -1.0, 1.0]).astype('float32')
iters = 50
start = time.time()
create_fractal_numba(xmin, xmax, ymin, ymax, gimage, iters)
dt = time.time() - start
print("Mandelbrot created with Numba in %f s" % dt)
plt.grid(False)
plt.imshow(gimage, cmap='jet')
pass
# -
# #### Using `numba` with `ipyparallel`
# Using `numba.jit` is straightforward. See [example](https://github.com/barbagroup/numba_tutorial_scipy2016/blob/master/notebooks/10.optional.Numba.and.ipyparallel.ipynb)
| notebooks/S08B_Numba.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="qmM1BsxBaLQU"
import pandas_datareader as pdr
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import math
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
# + [markdown] id="4xZx7wwsi70R"
# We will train this LSTM network using Tesla's stock market data
# + id="KJcOIrtdaR6L"
# NOTE: `key` must hold your Tiingo API key; it is not defined in this notebook
df=pdr.get_data_tiingo('TSLA',api_key=key)
# + id="JQQ_P9o_ez2L"
df.to_csv('Tesla.csv')
# + id="3QE4XyhmfClD"
data=pd.read_csv('Tesla.csv')
# + colab={"base_uri": "https://localhost:8080/", "height": 417} id="6fFATXpFfS0C" outputId="a41aebb3-0d36-4d08-b401-e16712030016"
data
# + id="iNQF0-p0fYuY"
# We will consider only the closing price of the stock
data1=data.reset_index()["close"]
# + colab={"base_uri": "https://localhost:8080/"} id="ua0su2PPfpkn" outputId="f4c52443-a97c-44ec-c2ba-800314af2a4c"
data1.shape
# + id="Kb1ufcB9gMD9"
# Scaling the data to the range (0, 1) with MinMaxScaler, as LSTMs are sensitive to the scale of their inputs
scaler=MinMaxScaler(feature_range=(0,1))
data1=scaler.fit_transform(np.array(data1).reshape(-1,1))
# + colab={"base_uri": "https://localhost:8080/"} id="rdKn71lcGms6" outputId="99ceb0ec-2d97-480d-b0a0-ee9296c0a801"
# As you can see, the data has been converted to an array
data1
# + id="CkEwf5gtGvxk"
#We will make a 75:25 train test split
training_size=int(len(data1)*0.75)
test_size=len(data1)-training_size
train_data,test_data=data1[0:training_size,:],data1[training_size:len(data1),:1]
# + colab={"base_uri": "https://localhost:8080/"} id="78JPVHkgHTep" outputId="15560e67-00f3-4607-acb1-6c8d167f2752"
train_data.shape,test_data.shape
# + id="ROEHayGwHqMF"
# Defining the function to create the training and test sets
def create_dataset(dataset, time_step):
data_X, data_Y = [], []
for i in range(len(dataset)-time_step-1):
a = dataset[i:(i+time_step), 0]
data_X.append(a)
data_Y.append(dataset[i + time_step, 0])
return np.array(data_X), np.array(data_Y)
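# On a made-up toy array the windowing is easy to see; note that the `-1` in the range leaves the final value unused:

```python
import numpy as np

def create_dataset(dataset, time_step):
    data_X, data_Y = [], []
    for i in range(len(dataset) - time_step - 1):
        data_X.append(dataset[i:(i + time_step), 0])  # window of `time_step` values
        data_Y.append(dataset[i + time_step, 0])      # the value right after the window
    return np.array(data_X), np.array(data_Y)

toy = np.arange(10).reshape(-1, 1)   # the values 0..9 as a column vector
X, Y = create_dataset(toy, time_step=3)
print(X.shape, Y.shape)  # (6, 3) (6,)
print(X[0], Y[0])        # [0 1 2] 3
```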
# + id="Am6pnKTEIM1X"
time_step = 100
X_train, Y_train = create_dataset(train_data, time_step)
X_test, Y_test = create_dataset(test_data, time_step)
# + id="1giTyrZVI_RB"
# An LSTM expects 3-D input of shape (samples, time steps, features)
X_train =X_train.reshape(X_train.shape[0],X_train.shape[1] , 1)
X_test = X_test.reshape(X_test.shape[0],X_test.shape[1] , 1)
# + id="B_3xGoRQJLw9"
#Creating a model with 3 LSTM layers
model=Sequential()
model.add(LSTM(50,return_sequences=True,input_shape=(100,1))) #Layer with 50 activation units
model.add(LSTM(50,return_sequences=True)) #Layer with 50 activation units
model.add(LSTM(65)) #Layer with 65 activation units
model.add(Dense(1))
#Specifying the type of loss and the optimizing Algorithm
model.compile(loss='mean_squared_error',optimizer='adam')
# + colab={"base_uri": "https://localhost:8080/"} id="MYKDi_wiJgEb" outputId="82b3de29-cf4d-4877-c240-faa9efa17b39"
#Summary of the model
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="o8crBJitJ6OZ" outputId="8580698f-75e9-49c5-8d79-db0931fbaddf"
model.fit(X_train,Y_train,validation_data=(X_test,Y_test),epochs=100,batch_size=10,verbose=1)
# + id="gnXhoDgBKG14"
train_predict=model.predict(X_train)
test_predict=model.predict(X_test)
# + id="Kz-IULg_Qlb1"
# Scaling the predicted data back to the original range
train_predict=scaler.inverse_transform(train_predict)
test_predict=scaler.inverse_transform(test_predict)
# + colab={"base_uri": "https://localhost:8080/"} id="Ikg6WoA6Qv73" outputId="eef0a0ae-1ba6-4f60-b8b1-d4f6ad8a210a"
# Getting the error in our predictions
print(math.sqrt(mean_squared_error(Y_train,train_predict)))
print(math.sqrt(mean_squared_error(Y_test,test_predict)))
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="YL-LY5yZRE_I" outputId="969d4595-f472-4073-c979-f2ffbcdb560b"
### Plotting
# Green: test predictions, blue: original data, orange: train predictions.
look_back=100
trainPredictPlot = np.empty_like(data1)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(train_predict)+look_back, :] = train_predict
# shift test predictions for plotting
testPredictPlot = np.empty_like(data1)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(train_predict)+(look_back*2)+1:len(data1)-1, :] = test_predict
# plot the original stock graph and the train and test predictions
plt.plot(scaler.inverse_transform(data1))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="DfLiSiCcSWBF" outputId="4bc26836-ad7e-4013-c16e-e655806c2779"
X_input=test_data[215:].reshape(1,-1)
X_input.shape
# + id="L-7TmYkiSk1i"
temp_input=list(X_input)
temp_input=temp_input[0].tolist()
# + colab={"base_uri": "https://localhost:8080/"} id="Y0gQgtgOSw5s" outputId="57d03a1d-180d-44f1-8e86-9d29bb97caf5"
# demonstrate prediction for next 30 days
output=[] #List to store output of price over next 30 days
n_steps=100
i=0
while(i<30):
#After the first prediction
if(len(temp_input)>100) :
X_input=np.array(temp_input[1:])
print(f"{i} day input {X_input}")
X_input=X_input.reshape(1,-1)
X_input = X_input.reshape((1, n_steps, 1))
yhat = model.predict(X_input, verbose=0)
print(f"{i} day output {yhat}")
temp_input.extend(yhat[0].tolist())
temp_input=temp_input[1:]
output.extend(yhat.tolist())
i=i+1
else:
X_input = X_input.reshape((1, n_steps,1))
yhat = model.predict(X_input, verbose=0)
#First Prediction
print(yhat[0])
temp_input.extend(yhat[0].tolist())
print(len(temp_input))
output.extend(yhat.tolist())
i=i+1
# + colab={"base_uri": "https://localhost:8080/"} id="MoxwV3cSUN-4" outputId="40d0b4bf-d747-4936-f11b-4b0edccf4863"
#prediction for next 30 days in the form of a list
output
# + id="lJiKZWD4Ucf0"
day_new=np.arange(1,101)
day_pred=np.arange(101,131)
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="nLNPB35sUwpI" outputId="f417648f-23d4-404a-985d-815da66020a5"
plt.plot(day_new,scaler.inverse_transform(data1[1157:]))
plt.plot(day_pred,scaler.inverse_transform(output))
# + id="6xe7YH6-VTCd" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="bc0eb561-202e-4d89-f77a-55e860b13054"
df1=data1.tolist()
df1.extend(output)
plt.plot(df1[1200:])
# + id="AN40xEDPXFnZ"
| Tesla.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Manipulation and Plotting with `pandas`
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# 
# ## Learning Goals
#
# - Load .csv files into `pandas` DataFrames
# - Describe and manipulate data in Series and DataFrames
# - Visualize data using DataFrame methods and `matplotlib`
# ## What is Pandas?
#
# Pandas, as [the Anaconda docs](https://docs.anaconda.com/anaconda/packages/py3.7_osx-64/) tell us, offers us "High-performance, easy-to-use data structures and data analysis tools." It's something like "Excel for Python", but it's quite a bit more powerful.
# Let's read in the heart dataset.
#
# Pandas has many methods for reading different types of files. Note that here we have a .csv file.
#
# Read about this dataset [here](https://www.kaggle.com/ronitf/heart-disease-uci).
heart_df = pd.read_csv('heart.csv')
# The output of the `.read_csv()` function is a pandas *DataFrame*, which has a familiar tabular structure of rows and columns.
type(heart_df)
heart_df
# ## DataFrames and Series
#
# Two main types of pandas objects are the DataFrame and the Series, the latter being in effect a single column of the former:
age_series = heart_df['age']
type(age_series)
# Notice how we can isolate a column of our DataFrame simply by using square brackets together with the name of the column.
# Both Series and DataFrames have an *index* as well:
heart_df.index
age_series.index
# Pandas is built on top of NumPy, and we can always access the NumPy array underlying a DataFrame using `.values`.
heart_df.values
# ## Basic DataFrame Attributes and Methods
# ### `.head()`
heart_df.head()
# ### `.tail()`
heart_df.tail()
# ### `.info()`
heart_df.info()
# ### `.describe()`
heart_df.describe()
# ### `.dtypes`
heart_df.dtypes
# ### `.shape`
heart_df.shape
# ### Exploratory Plots
# Let's make ourselves a histogram of ages:
sns.set_style('darkgrid')
sns.distplot(a=heart_df['age']);
# And while we're at it let's do a scatter plot of maximum heart rate vs. age:
sns.scatterplot(x=heart_df['age'], y=heart_df['thalach']);
# ## Adding to a DataFrame
#
#
# ### Adding Rows
#
# Here are two rows that our engineer accidentally left out of the .csv file, expressed as a Python dictionary:
extra_rows = {'age': [40, 30], 'sex': [1, 0], 'cp': [0, 0], 'trestbps': [120, 130],
'chol': [240, 200],
'fbs': [0, 0], 'restecg': [1, 0], 'thalach': [120, 122], 'exang': [0, 1],
'oldpeak': [0.1, 1.0], 'slope': [1, 1], 'ca': [0, 1], 'thal': [2, 3],
'target': [0, 0]}
extra_rows
# How can we add this to the bottom of our dataset?
# +
# Let's first turn this into a DataFrame.
# We can use the .from_dict() method.
missing = pd.DataFrame(extra_rows)
missing
# +
# Now we just need to concatenate the two DataFrames together.
# Note the `ignore_index` parameter! We'll set that to True.
heart_augmented = pd.concat([heart_df, missing],
ignore_index=True)
# +
# Let's check the end to make sure we were successful!
heart_augmented.tail()
# -
# ### Adding Columns
#
# Adding a column is very easy in `pandas`. Let's add a new column to our dataset called "test", and set all of its values to 0.
heart_augmented['test'] = 0
heart_augmented.head()
# I can also add columns whose values are functions of existing columns.
#
# Suppose I want to add the cholesterol column ("chol") to the resting systolic blood pressure column ("trestbps"):
heart_augmented['chol+trestbps'] = heart_augmented['chol'] + heart_augmented['trestbps']
heart_augmented.head()
# ## Filtering
# We can use filtering techniques to see only certain rows of our data. If we wanted to see only the rows for patients 70 years of age or older, we can simply type:
heart_augmented[heart_augmented['age'] >= 70]
# Use '&' for "and" and '|' for "or"; wrap each condition in parentheses.
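# A minimal sketch with a made-up toy frame (each condition must be wrapped in parentheses before combining):

```python
import pandas as pd

# Toy stand-in for the heart data; the values are made up
toy = pd.DataFrame({'age':  [45, 72, 80, 60],
                    'chol': [200, 280, 190, 300]})

# Rows that satisfy BOTH conditions
both = toy[(toy['age'] >= 70) & (toy['chol'] > 250)]
print(both)  # only the age-72, chol-280 row
```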
# ### Exercise
#
# Display the patients who are 70 or over as well as the patients whose trestbps score is greater than 170.
# <details>
# <summary>Answer</summary>
# <code>heart_augmented[(heart_augmented['age'] >= 70) | (heart_augmented['trestbps'] > 170)]</code>
# </details>
# ### Exploratory Plot
#
# Using the subframe we just made, let's make a scatter plot of their cholesterol levels vs. age and color by sex:
# +
at_risk = heart_augmented[(heart_augmented['age'] >= 70) | (heart_augmented['trestbps'] > 170)]  # the answer from the exercise above
sns.scatterplot(data=at_risk, x='age', y='chol', hue='sex');
# -
# ### `.loc` and `.iloc`
# We can use `.loc` to get, say, the first ten values of the age and resting blood pressure ("trestbps") columns:
heart_augmented.loc
heart_augmented.loc[:9, ['age', 'trestbps']]
# `.iloc` is used for selecting locations in the DataFrame **by number**:
heart_augmented.iloc
heart_augmented.iloc[3, 0]
# ### Exercise
#
# How would we get the same slice as just above by using `.iloc` instead of `.loc`?
# <details>
# <summary>Answer</summary>
# <code>heart_augmented.iloc[:10, [0, 3]]</code>
# </details>
# ## Statistics
#
# ### `.mean()`
heart_augmented.mean()
# Be careful! Some of these are not straightforwardly interpretable. What does an average "sex" of 0.682 mean?
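# For a 0/1-coded column the mean does have a reading, though: it is the proportion of 1s. A toy illustration with made-up values:

```python
import pandas as pd

sex = pd.Series([1, 0, 1, 1, 0])  # made-up 0/1 coding
print(sex.mean())  # 0.6, i.e. 60% of the rows are coded 1
```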
# ### `.min()`
heart_augmented.min()
# ### `.max()`
heart_augmented.max()
# ## Series Methods
#
# ### `.value_counts()`
#
# How many different values does slope have? What about sex? And target?
heart_augmented['slope'].value_counts()
# ### `.sort_values()`
heart_augmented['age'].sort_values()
# ## `pandas`-Native Plotting
#
# The `.plot()` and `.hist()` methods available for DataFrames use a wrapper around `matplotlib`:
heart_augmented.plot(x='age', y='trestbps', kind='scatter');
heart_augmented.hist(column='chol');
# ## Exercises
# 1. Make a bar plot of "age" vs. "slope" for the `heart_augmented` DataFrame.
# <details>
# <summary>Answer</summary>
# <code>sns.barplot(data=heart_augmented, x='slope', y='age');</code>
# </details>
# 2. Make a histogram of ages for **just the men** in `heart_augmented` (heart_augmented['sex']=1).
# <details>
# <summary>Answer</summary>
# <code>men = heart_augmented[heart_augmented['sex'] == 1]
# sns.distplot(a=men['age']);</code>
# </details>
# 3. Make separate scatter plots of cholesterol vs. resting systolic blood pressure for the target=0 and the target=1 groups. Put both plots on the same figure and give each an appropriate title.
# <details>
# <summary>Answer</summary>
# <code>target0 = heart_augmented[heart_augmented['target'] == 0]
# target1 = heart_augmented[heart_augmented['target'] == 1]
# fig, ax = plt.subplots(1, 2, figsize=(10, 5))
# sns.scatterplot(data=target0, x='trestbps', y='chol', ax=ax[0])
# sns.scatterplot(data=target1, x='trestbps', y='chol', ax=ax[1])
# ax[0].set_title('Cholesterol Vs. Resting Blood Pressure in Women')
# ax[1].set_title('Cholesterol Vs. Resting Blood Pressure in Men');</code>
# </details>
# ## Let's find a .csv file online and experiment with it.
#
# I'm going to head to [dataportals.org](https://dataportals.org) to find a .csv file.
| Phase_1/ds-pandas_dataframes-main/data_manipulation_plotting_pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
#importing required libraries
import numpy as np
import pandas as pd
from keras.preprocessing.image import ImageDataGenerator,load_img
from keras.utils import to_categorical
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import random
import os
#defining the image properties
Image_Width=128
Image_Height=128
Image_Size=(Image_Width,Image_Height)
Image_Channels=3
| Projects/Cat and Dog classifier using deep learning/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="71vD2LrsVU0u"
# Import Libraries/ Read Data from GitHub
#
#
# + id="IVirOKYfUt7Z" executionInfo={"status": "ok", "timestamp": 1620232167054, "user_tz": 240, "elapsed": 1435, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}}
import pandas as pd
from mlxtend.frequent_patterns import apriori
from mlxtend.frequent_patterns import association_rules
# upload NYC Crime data for 2020
url = 'https://raw.githubusercontent.com/duketran1996/NYC-Crime/main/clean-dataset/nypd_arrest_data_clean_2020.csv'
df = pd.read_csv(url)
# + id="IiDXa0l-VgvV" executionInfo={"status": "ok", "timestamp": 1620232169012, "user_tz": 240, "elapsed": 572, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}}
#upload NYC Census Data
url1 = 'https://raw.githubusercontent.com/duketran1996/NYC-Crime/main/association_rule/nyc_population_census_2019.csv'
df_pop = pd.read_csv(url1)
# + id="mItT7nxV5BZe" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1620201707491, "user_tz": 240, "elapsed": 469, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}} outputId="c37f82d8-72eb-4bf6-e508-51a127f6faa4"
df.columns
# df.shape
# + [markdown] id="bbuDuUTBjKRE"
# Group Datasets by Borough and Race
# + id="_9UeZOVJx7OZ" executionInfo={"status": "ok", "timestamp": 1620232172301, "user_tz": 240, "elapsed": 368, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}}
df_crime_race_dist = df.groupby(['ARREST_BORO','PERP_RACE'])['ARREST_KEY'].count()
df_crime_race_dist = df_crime_race_dist.to_frame()
# + id="c79ciVvQWNn_" executionInfo={"status": "ok", "timestamp": 1620232174276, "user_tz": 240, "elapsed": 327, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}}
df_pop_race_dist = df_pop.groupby(['BOROUGH','RACE'])['POPULATION'].sum()
df_pop_race_dist = df_pop_race_dist.to_frame()
# + [markdown] id="qo5Cyi14jZ0_"
# Join Datasets of Crime and Population to find normalised rate of crime by race in every borough
# + id="XDE7m6TVWT13" executionInfo={"status": "ok", "timestamp": 1620232176038, "user_tz": 240, "elapsed": 388, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}}
df_joined = pd.concat([df_crime_race_dist, df_pop_race_dist], axis=1, join="inner")
# + [markdown] id="8W0LzGechnfz"
# ### **Normalise Crime Rate of Race by Population for each Borough**
#
# ---
#
#
#
# ---
#
#
# + id="jgxXfvAlf9M-" executionInfo={"status": "ok", "timestamp": 1620232177845, "user_tz": 240, "elapsed": 352, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-<KEY>/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}}
normalise_race_of_crime = ((df_joined['ARREST_KEY']/df_joined['POPULATION'])*100)
# + id="0fOkfrr-hE2M" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1620232180175, "user_tz": 240, "elapsed": 328, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}} outputId="40ee40dd-2a55-4dd7-9767-a1ced8367ef1"
normalise_race_of_crime
# + [markdown] id="hZlyxfSth28q"
# **Observation:**
# The normalised data shows that Black individuals have a much higher arrest rate in every borough.
#
# But in my opinion this does not tell us much on its own, as the data can be skewed.
#
# For example, 100 Black individuals could be arrested for 11923 crimes committed in Queens, while 3445 White individuals could be arrested for 3445 crimes there.
# + [markdown] id="b-flZnEGsr2w"
# ### **ASSOCIATION RULES**
#
# ---
#
#
#
# ---
#
#
#
# Based on suggestions from: https://pbpython.com/market-basket-analysis.html
# + [markdown] id="GMwyxlCRiHj_"
# Function to one-hot encode the occurrence count of each offense.
# + id="fl_ymbMSZzys"
def encode_units(x):
    # Any positive count becomes 1; zero stays 0
    return 1 if x >= 1 else 0
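# On a made-up toy basket the encoding looks like this:

```python
import pandas as pd

def encode_units(x):
    # Any positive count becomes 1; zero stays 0
    return 1 if x >= 1 else 0

# Toy basket: offense counts per day (made-up numbers)
counts = pd.DataFrame({'THEFT': [0, 3, 1], 'ASSAULT': [2, 0, 0]})
encoded = counts.applymap(encode_units)
print(encoded)
```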
# + [markdown] id="NGuTchhjipN9"
# Finding association of offenses likely to occur together in **Manhattan** on a given day.
# + id="keBU8EUqswZf" colab={"base_uri": "https://localhost:8080/", "height": 540} executionInfo={"status": "ok", "timestamp": 1620185018811, "user_tz": 240, "elapsed": 533, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}} outputId="b79c2a41-e040-4a69-9f51-e3f6d89de7f9"
basket_man = (df[df['ARREST_BORO'] =="Manhattan"]
.groupby(['ARREST_DATE', 'OFNS_DESC'])['ARREST_KEY'].count().unstack().reset_index().fillna(0).set_index('ARREST_DATE'))
basket_man
# + [markdown] id="hZaHHD6_jh4D"
# There are a lot of zeros in the data, but we also need to make sure any positive value is converted to a 1 and anything else is set to 0. This step completes the one-hot encoding of the data.
# + id="MvkQ6eXTuk-o" colab={"base_uri": "https://localhost:8080/", "height": 540} executionInfo={"status": "ok", "timestamp": 1620184997093, "user_tz": 240, "elapsed": 303, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}} outputId="8edbb4db-c3b6-47d4-cfa4-a8ea9258dd9d"
basket_sets_man = basket_man.applymap(encode_units)
basket_sets_man
# + id="rGVuu6zJu0yu"
frequent_itemsets_man = apriori(basket_sets_man, min_support=0.4, use_colnames=True)
# + id="U5AuePpou7JG" colab={"base_uri": "https://localhost:8080/", "height": 247} executionInfo={"status": "ok", "timestamp": 1620184948423, "user_tz": 240, "elapsed": 58774, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}} outputId="2075633b-838b-4a8f-ee26-9335ca929091"
rules_man = association_rules(frequent_itemsets_man, metric="lift", min_threshold=1)
rules_man.head()
# + [markdown] id="T_drNGFPjnVy"
# ### The same process is repeated for each Borough
#
# ---
#
#
# + [markdown] id="mlMv75Xx-o8Y"
# Finding association of offenses likely to occur together in **Bronx** on a given day.
# + id="eqzxEYBl4_0h"
basket_brx = (df[df['ARREST_BORO'] =="Bronx"]
.groupby(['ARREST_DATE', 'OFNS_DESC'])['ARREST_KEY'].count().unstack().reset_index().fillna(0).set_index('ARREST_DATE'))
# + id="p7Y-C76s_rq-"
basket_sets_brx = basket_brx.applymap(encode_units)
# + id="nwTWJ7VO_wec"
frequent_itemsets_brx = apriori(basket_sets_brx, min_support=0.4, use_colnames=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 247} id="lE7I6jlBj9x7" executionInfo={"status": "ok", "timestamp": 1620185415357, "user_tz": 240, "elapsed": 38338, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}} outputId="52c93eab-3edd-42e6-ab96-336c9aef7ef6"
rules_brx = association_rules(frequent_itemsets_brx, metric="lift", min_threshold=1)
rules_brx.head()
# + [markdown] id="GYxgbd1pkxKj"
# Finding association of offenses likely to occur together in **Queens** on a given day.
# + id="FG9hC_1mAEib"
basket_qns = (df[df['ARREST_BORO'] =="Queens"]
.groupby(['ARREST_DATE', 'OFNS_DESC'])['ARREST_KEY'].count().unstack().reset_index().fillna(0).set_index('ARREST_DATE'))
# + id="XIfFxAEjECa4"
basket_sets_qns = basket_qns.applymap(encode_units)
frequent_itemsets_qns = apriori(basket_sets_qns, min_support=0.4, use_colnames=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 247} id="YazZvfwnk9Ap" executionInfo={"status": "ok", "timestamp": 1620185522626, "user_tz": 240, "elapsed": 56035, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}} outputId="385dbe00-57d1-4e97-e1d6-1355fe18cd14"
rules_qns = association_rules(frequent_itemsets_qns, metric="lift", min_threshold=1)
rules_qns.head()
# + [markdown] id="de0GsGb9lT-a"
# Finding association of offenses likely to occur together in **Brooklyn** on a given day.
# + id="2s4DscmsUsLi"
basket_brk = (df[df['ARREST_BORO'] =="Brooklyn"]
.groupby(['ARREST_DATE', 'OFNS_DESC'])['ARREST_KEY'].count().unstack().reset_index().fillna(0).set_index('ARREST_DATE'))
# + id="ZvKctw4IUyeb"
basket_sets_brk = basket_brk.applymap(encode_units)
frequent_itemsets_brk = apriori(basket_sets_brk, min_support=0.5, use_colnames=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 247} id="ugfwCD1Na3dV" executionInfo={"status": "ok", "timestamp": 1620185770036, "user_tz": 240, "elapsed": 61061, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}} outputId="c634ff17-4af4-4b61-ce0b-ddace1ee194e"
rules_brk = association_rules(frequent_itemsets_brk, metric="lift", min_threshold=1)
rules_brk.head()
# + [markdown] id="XVK9r27AlmtT"
# Finding association of offenses likely to occur together in **Staten Island** on a given day.
# + id="ufgzWWxkdtmj"
basket_si = (df[df['ARREST_BORO'] =="Staten Island"]
.groupby(['ARREST_DATE', 'OFNS_DESC'])['ARREST_KEY'].count().unstack().reset_index().fillna(0).set_index('ARREST_DATE'))
basket_sets_si = basket_si.applymap(encode_units)
frequent_itemsets_si = apriori(basket_sets_si, min_support=0.5, use_colnames=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 247} id="dEgm8LHcd0Xy" executionInfo={"status": "ok", "timestamp": 1620185863012, "user_tz": 240, "elapsed": 327, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CqYLRoBa-WA/AAAAAAAAAAI/AAAAAAAAABI/bZS9sts68Lw/s64/photo.jpg", "userId": "14595891489077571335"}} outputId="56fd4524-99e7-4f25-fc6c-324863d12a00"
rules_si = association_rules(frequent_itemsets_si, metric="lift", min_threshold=1)
rules_si.head()
| association_rule/BigData_NYCCrime.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Predict wind speed based on other weather variables
# This notebook will use time lags to train a machine learning model for predicting wind speed.
#
# First, we select a random station. The data is kept at daily resolution. Then, we generate a lagged feature matrix.
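The lagged feature matrix built later with `DataFrame.shift` can be pictured with a small stdlib-only sketch (a hypothetical helper, not part of the notebook):

```python
def lagged_rows(series, lags=3):
    """Return rows [x_t, x_{t-1}, ..., x_{t-lags}], dropping the first `lags` points
    (which have no full history), just as the notebook drops its first 3 rows."""
    return [[series[t - k] for k in range(lags + 1)]
            for t in range(lags, len(series))]
```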
import pandas as pd
import numpy as np
from numpy.random import randint
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import time
import glob
from mpl_toolkits.basemap import Basemap
# +
data_dir = '/datasets/NOAA_SST/'
#load(data_dir + "noaa_gsod/…/file")
t0 = time.time()
data = pd.read_pickle(data_dir+'noaa_gsod/Combined_noaa_gsod') # load weather data
stations = pd.read_pickle(data_dir+'noaa_gsod.stations') # load station data
# # USE ONLY 2008-2018 # #
data = data.loc[data.index >= pd.Timestamp(2008, 1, 1)]
data = data.drop(columns=['yr','year','da','mo']) # don't need these anymore
print(time.time()-t0)
# -
stations.head()
data.head()
# +
# # SELECT RANDOM STATION # #
np.random.seed(3)
rs = np.unique(data['stn'].values) # find unique stations with data
rand_stat = rs[randint(len(rs))] # pick a random station
# # ideally we should check < len(np.unique(data.index)), but many are shorter
while (len(data.loc[data['stn'] == rand_stat]) < 3650): # if not enough data at this station
    rand_stat = rs[randint(len(rs))] # draw a new station and check again
select_station = stations.loc[stations['usaf'] == rand_stat] # get location, etc, for random station
# -
features = data.loc[data['stn'] == rand_stat] # pick weather at random station
features = features.drop(columns='stn')
features = features.sort_index()
select_station.head() # see where it is
features.head()
# ### Time-shift the data
# +
features = features.drop(columns='mxpsd') # Drop maximum wind speed that day
columns = features.columns # weather variables
for co in columns:
# one day lag
features[co + '_lag1'] = features[co].shift(periods=1)
# two days lag
features[co + '_lag2'] = features[co].shift(periods=2)
# three days lag
features[co + '_lag3'] = features[co].shift(periods=3)
features = features.iloc[3:]
print(str(len(features)) + ' samples, ' + str(len(features.columns)) + ' features.')
features.head()
# -
# View station locations
fig = plt.figure(figsize=(18.5, 10.5))
m = Basemap(projection='cyl',llcrnrlat=-90,urcrnrlat=90,\
llcrnrlon=-180,urcrnrlon=180, resolution='l')
m.drawmapboundary(fill_color='xkcd:lightblue')
m.fillcontinents(color='xkcd:green',lake_color='xkcd:lightblue')
m.drawmeridians(np.arange(-180.,180.,30.),labels=[True,False,False,True])
m.drawparallels(np.arange(-90.,90,30.),labels=[False,True,True,False])
lon = select_station['lon'].tolist()
lat = select_station['lat'].tolist()
#xpt,ypt = m(lon,lat)
m.plot(lon, lat,'r+')
#m.plot(179.75, -19.133, 'ro')
plt.show()
# ### Create train/val/test
# +
ylabel = features['wdsp'] # use today's wind speed as the label
features = features.drop(columns='wdsp') # don't put it in training data!!
# Use 20% test split (80% training + validation)
ntrain = int(len(features)*0.8)
x_test = features.iloc[ntrain:,:]
y_test = ylabel[ntrain:]
indices = np.arange(ntrain)
# Split remaining 80% into training-validation sets (of original data)
x_train, x_val, y_train, y_val = train_test_split(features.iloc[0:ntrain,:], ylabel[0:ntrain], \
indices, test_size=0.2, random_state=1)
# Scale features. Fit scaler on training only.
scaler = MinMaxScaler() #scale features between 0 and 1
x_train = scaler.fit_transform(x_train)
x_val = scaler.transform(x_val)
x_test = scaler.transform(x_test)
# -
# ### Predict with Random Forest
# +
# # Create, train, and predict random forest here # #
# -
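A minimal sketch of what the placeholder cell above asks for, shown on synthetic stand-in arrays (in the notebook itself, `x_train`, `y_train`, and `x_test` come from the train/val/test cell, and the plotting cell expects the names `clf` and `y`):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
x_train, y_train = rng.random((80, 12)), rng.random(80)  # stand-ins for the real splits
x_test = rng.random((20, 12))

clf = RandomForestRegressor(n_estimators=100, random_state=1)
clf.fit(x_train, y_train)          # train on the (scaled) training features
y = clf.predict(x_test)            # predicted wind speed for the held-out period
```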
# ### Plot the random forest
# +
# plot predictions
plt.figure(figsize=(15,7))
plt.subplot(1,2,1)
plt.plot(features.iloc[ntrain:].index,y_test) # plot actual wind speed
plt.plot(features.iloc[ntrain:].index, y) # plot predicted wind speed
# # PLOT TRAINING DATA HERE # #
# # INCREASE X TICK SPACING, UPDATE LEGEND # #
plt.xticks(features.index[::30], rotation = 45) # X-Ticks are spaced once every 30 days.
myFmt = mdates.DateFormatter('%d-%b-%y') # This shows day-month-year. Switch to month-year or annually
plt.legend(('Wind speed','Random Forest Prediction'), fontsize=12, loc=1) # Add entries for training predictions and truth
plt.gca().xaxis.set_major_formatter(myFmt)
plt.ylabel('Wind speed', fontsize=12)
#plt.show()
# # Plot the feature importances # #
nfeatures = 10
fi = clf.feature_importances_ # get feature importances
fi_sort = np.argsort(fi)[::-1] # sort importances most to least
plt.subplot(1,2,2)
plt.bar(range(nfeatures), fi[fi_sort[0:nfeatures]], width=1, \
        tick_label=features.columns.values[fi_sort[0:nfeatures]]) # plot feature importances
plt.ylabel('Feature Importance (avg across trees)', fontsize=12)
plt.xticks(rotation = 45)
plt.show()
# -
# ### Feature importance is the weighted impurity of a branch adjusted by its children nodes and normalized by the impurities of all branches. The Random Forest feature importances are averaged over all regression trees.
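A toy illustration of the normalization described above (hypothetical numbers, not from the model): per-feature impurity reductions are divided by their total, so the reported importances sum to 1.

```python
raw = [0.2, 0.5, 0.3]                      # hypothetical impurity reductions for three features
total = sum(raw)
importances = [v / total for v in raw]     # normalized feature importances, as sklearn reports
```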
| JupyterNotebookHW/timeseries_prediction_Wind.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Find the markdown blocks that say interaction required! The notebook should take care of the rest!
# # Import libs
# +
import sys
import os
sys.path.append('..')
from eflow.foundation import DataPipeline,DataFrameTypes
from eflow.model_analysis import ClassificationAnalysis
from eflow.utils.modeling_utils import optimize_model_grid
from eflow.utils.eflow_utils import get_type_holder_from_pipeline, remove_unconnected_pipeline_segments
from eflow.utils.pandas_utils import data_types_table
from eflow.auto_modeler import AutoCluster
from eflow.data_pipeline_segments import DataEncoder
import pandas as pd
import numpy as np
import scikitplot as skplt
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
import copy
import pickle
import time
import math
import multiprocessing as mp
from functools import partial
from scipy import stats
from IPython.display import clear_output
# +
# # Additional add ons
# # !pip install pandasgui
# # !pip install pivottablejs
# clear_output()
# -
# %matplotlib notebook
# %matplotlib inline
# ## Declare Project Variables
# ### Interaction required
# +
dataset_path = "Datasets/titanic_train.csv"
# -----
dataset_name = "Titanic Data"
pipeline_name = "Titanic Pipeline"
# -----
# -----
notebook_mode = True
# -
# ## Clean out segment space
remove_unconnected_pipeline_segments()
# # Import dataset
df = pd.read_csv(dataset_path)
shape_df = pd.DataFrame.from_dict({'Rows': [df.shape[0]],
'Columns': [df.shape[1]]})
display(shape_df)
display(df.head(30))
data_types_table(df)
# # Loading and init df_features
# +
# Option: 1
# df_features = get_type_holder_from_pipeline(pipeline_name)
# -
# Option: 2
df_features = DataFrameTypes()
df_features.init_on_json_file(os.getcwd() + f"/eflow Data/{dataset_name}/df_features.json")
df_features.display_features(display_dataframes=True,
notebook_mode=notebook_mode)
# # Any extra processing before eflow DataPipeline
# # Setup pipeline structure
# ### Interaction Required
main_pipe = DataPipeline(pipeline_name,
df,
df_features)
main_pipe.perform_pipeline(df,
df_features)
df
qualitative_features = list(df_features.get_dummy_encoded_features().keys())
# # Generate clustering models with automodeler (and find any other models in the directory structure)
auto_cluster = AutoCluster(df,
dataset_name=dataset_name,
dataset_sub_dir="Auto Clustering",
overwrite_full_path=None,
notebook_mode=True,
pca_perc=.8)
# # Inspect Hierarchical models
# +
# auto_cluster.visualize_hierarchical_clustering()
# +
# auto_cluster.create_elbow_models(sequences=5,
# max_k_value=10,
# display_visuals=True)
# -
auto_cluster.visualize_hierarchical_clustering()
# ## Remove scaled data to save space (not needed, but it helps)
1/0  # intentional error to stop "Run All" execution here
scaled_data = auto_cluster.get_scaled_data()
# +
from pyclustering.cluster.center_initializer import kmeans_plusplus_initializer
from pyclustering.cluster.kmeans import kmeans
from pyclustering.cluster.silhouette import silhouette
# Prepare initial centers
centers = kmeans_plusplus_initializer(scaled_data, 4).initialize()
# Perform cluster analysis
kmeans_instance = kmeans(scaled_data, centers)
kmeans_instance.process()
clusters = kmeans_instance.get_clusters()
# Calculate Silhouette score
score = silhouette(scaled_data, clusters).process().get_score()
| testing/.ipynb_checkpoints/3.) Clustering of data-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Measuring the Z boson mass
#
#
# Let's look at a sample of $Z$ boson candidates recorded by CMS in 2011 and published at CERN opendata portal. It comes from DoubleMuon dataset with the following selection applied:
#
# - Both muons are "global" muons
# - invariant mass sits in range: 60 GeV < $ M_{\mu\mu}$ < 120 GeV
# - |$\eta$| < 2.1 for both muons
# - $p_{t}$ > 20 GeV
#
# The following columns are present in the CSV file:
#
# - `Run`, `Event` are the run and event numbers, respectively
# - `pt` is the transverse momentum $p_{t}$ of the muon
# - `eta` is the pseudorapidity of the muon: $\eta$
# - `phi` is the $\phi$ angle of the muon direction
# - `Q` is the charge of the muon
# - `dxy` is the impact parameter in the transverse plane: $d_{xy}$ - how distant is the track from the collision point
# - `iso` is the track isolation: $I_{track}$ - how many other tracks are there around a given track
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from skopt import gp_minimize
# %matplotlib inline
# ## Read dataset
df = pd.read_csv('./Zmumu.csv')
# Let's calculate the invariant mass $M$ of the two muons using the formula
#
# $M = \sqrt{2p_{t}^{1}p_{t}^{2}(\cosh(\eta_{1}-\eta_{2}) - \cos(\phi_{1}-\phi_{2}))}$
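As a sanity check of this formula (a hypothetical stdlib helper; the notebook computes the same thing vectorized with pandas right below): for two back-to-back muons ($\phi_1 - \phi_2 = \pi$) with equal $p_t$ and $\eta_1 = \eta_2$, the mass reduces to $2p_t$.

```python
import math

def dimuon_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of a muon pair in the massless approximation."""
    return math.sqrt(2 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))
```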
df['M'] = np.sqrt(2 * df['pt1'] * df['pt2'] * (np.cosh(df['eta1'] - df['eta2']) - np.cos(df['phi1'] - df['phi2'])))
df.head(2)
# # Model
#
# The distribution of the Z boson mass has the form of a normal distribution; in addition there is noise, whose distribution has an exponential form. Thus, the resulting model is a superposition of two distributions - normal and exponential.
# Let's plot the distribution of Z boson mass
def plot_mass(mass, bins_count=100):
y, x = np.histogram(mass, bins=bins_count, density=False)
err = np.sqrt(y)
fig = plt.figure(figsize=(15,7))
plt.title('Z mass', fontsize=20)
plt.xlabel("$m_{\mu\mu}$ [GeV]", fontsize=20)
plt.ylabel("Number of events", fontsize=20)
plt.errorbar(x[:-1], y, yerr=err, fmt='o', color='red', ecolor='grey', capthick=0.5, zorder=1, label="data")
return y, x
plot_mass(df.M);
# ## Exercise 1. clean up dataset a bit
# - demand that charge of muons should be opposite
# - $I_{track}$ < 3 and $d_{xy}$ < 0.2 cm
df.describe()
### YOUR CODE GOES HERE ###
df_sign = df[((df.Q1 > 0) & (df.Q2 < 0)) | ((df.Q1 < 0) & (df.Q2 > 0))]
df_isolation = df_sign[(df_sign.iso1 < 3) & (df_sign.iso2 < 3) & (df_sign.dxy1.abs() < 0.2) & (df_sign.dxy2.abs() < 0.2)]
df = df_isolation
plot_mass(df.M);
# ### Let's define parametrised model
# it should represent a mixture of 1) a Gaussian signal and 2) a background that, for simplicity, we consider to be flat over mass. This gives the following set of parameters:
#
# - m0 - center of the Gaussian
# - sigma - standard deviation of the Gaussian
# - ampl - height of the peak
# - bck - height of the background
#
# finding those parameters is called _fitting_ the model to the data. That will be the goal for the rest of the exercise. For simplicity's sake we'll stick with a good old binned fit.
def model_predict(params, X):
m0, sigma, ampl, bck = params
return bck + ampl / (sigma * np.sqrt(2 * np.pi)) * np.exp((-1) * (X - m0)**2 / (2 * sigma**2))
def model_loss(params, X, y):
# y, x = np.histogram(mass, bins=bins_count, density=False)
# residuals = model_predict(params, (x[1:] + x[:-1])/2) - y
residuals = y - model_predict(params, X)
return np.sum(residuals**2) / len(residuals)
def plot_mass_with_model(params, mass, bins_count=100):
y, X = plot_mass(mass, bins_count=bins_count)
X = (X[1:] + X[:-1]) / 2
error = model_loss(params, X, y)
plt.plot(X, model_predict(params, X), color='blue', linewidth=3.0, zorder=2, label="fit, loss=%.2f" % error)
plt.legend(fontsize='x-large')
# ## Here you can fit model parameters by hand
plot_mass_with_model((75, 5, 2300, 20), df.M)
# ## ... but you can do it automatically of course
# Setting up a scikit optimizer
# +
from tqdm import tqdm
from skopt import Optimizer
search_space = [(90.0, 91.0),     # m0 range
                (1.0, 2.0),       # sigma range (floats so skopt treats the dimension as continuous)
                (3250.0, 3500.0), # amplitude range
                (0.0, 50.0)       # bck range
               ]
y, X = np.histogram(df.M, bins=120, density=False)
X = (X[1:] + X[:-1]) / 2
opt = Optimizer(search_space, base_estimator="GP", acq_func="EI", acq_optimizer="lbfgs")
# -
# Running it for a while. You can re-run this cell several times
# +
from skopt.utils import create_result
for i in tqdm(range(50)):
next_x = opt.ask()
f_val = model_loss(next_x, X, y)
opt.tell(next_x, f_val)
res = create_result(Xi=opt.Xi, yi=opt.yi, space=opt.space,
rng=opt.rng, models=opt.models)
# -
# ## A bit of search history
import skopt.plots
skopt.plots.plot_convergence(res)
print(list(zip(["m0", "sigma", "ampl", "bck"], res.x)))
# Let's see how well the prediction fits the data
plot_mass_with_model(res.x, df.M, bins_count=120)
# token expires every 30 min
COURSERA_TOKEN = "Bay<PASSWORD>"### YOUR TOKEN HERE
COURSERA_EMAIL = "<EMAIL>"### YOUR EMAIL HERE
# ## Grader part, do not change, please
import grading
grader = grading.Grader(assignment_key="<KEY>",
all_parts=["VI3xu", "VuE8x", "KzmMV", "TwZBF"])
# +
ans_part1 = round(res.x[0])
grader.set_answer("VI3xu", ans_part1)
ans_part2 = round(res.x[1], 2)
grader.set_answer("VuE8x", ans_part2)
ans_part3 = round(res.x[3])
grader.set_answer("KzmMV", ans_part3)
ans_part4 = round(res.x[2])
grader.set_answer("TwZBF", ans_part4)
grader.submit(COURSERA_EMAIL, COURSERA_TOKEN)
# -
| Hadron-Collider/Z-bozon mass estimator/index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''3.8.5'': pyenv)'
# language: python
# name: python3
# ---
# # __MLflow Dashboard__
# ## __Import mandatory tools and libraries__
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error
from math import sqrt
import statsmodels.api as sm
# %matplotlib inline
import datetime
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.stattools import adfuller, kpss
from scipy import signal
from statsmodels.regression.rolling import RollingOLS
import warnings
warnings.filterwarnings('ignore')
# Importing required libraries
import sys
# adding to the path variables the one folder higher (locally, not changing system variables)
sys.path.append("..")
import mlflow
# -
# setting the MLflow connection and experiment
# NOTE: TRACKING_URI and EXPERIMENT_NAME must be defined before this cell,
# e.g. imported from a project config module
mlflow.set_tracking_uri(TRACKING_URI)
mlflow.set_experiment(EXPERIMENT_NAME)
mlflow.start_run()
run = mlflow.active_run()
| notebooks/mlflow_Katrin.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="8sSVlekj594x" executionInfo={"status": "ok", "timestamp": 1632341335638, "user_tz": 240, "elapsed": 20956, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg_QuRP-FvpZwye5zw3rmJmceg28bQqANBEfLr_13E=s64", "userId": "09054757205289220354"}} outputId="13fda361-322f-46e9-af38-35a63f366de5"
from google.colab import drive
drive.mount('/content/drive')
# #%cd ./drive/MyDrive/American_University/2021_Spring/CSC-676-001 Computer Vision/GitHub/Project/Pluralistic-Inpainting
# #!pwd
# + colab={"base_uri": "https://localhost:8080/"} id="U12ji4Bj66Cm" executionInfo={"status": "ok", "timestamp": 1631464357062, "user_tz": 240, "elapsed": 263, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg_QuRP-FvpZwye5zw3rmJmceg28bQqANBEfLr_13E=s64", "userId": "09054757205289220354"}} outputId="c3e8e161-3007-4e59-bc6a-768bdfa8a2b3"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="BiE8cKhG7nnm" executionInfo={"status": "ok", "timestamp": 1632341336231, "user_tz": 240, "elapsed": 602, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg_QuRP-FvpZwye5zw3rmJmceg28bQqANBEfLr_13E=s64", "userId": "09054757205289220354"}} outputId="5cc5b634-ff00-4a97-eb97-6eab8b26954c"
# %cd ./drive/MyDrive/American_University/2021_Fall/DATA-793-001_Data Science Practicum/Datasets
# !pwd
# + colab={"base_uri": "https://localhost:8080/"} id="mBAAcaNP7rR2" executionInfo={"status": "ok", "timestamp": 1631466158120, "user_tz": 240, "elapsed": 1624425, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg_QuRP-FvpZwye5zw3rmJmceg28bQqANBEfLr_13E=s64", "userId": "09054757205289220354"}} outputId="9e31959a-acbc-4498-9c53-46d581b808f9"
# ! python FaceForensics.py ./ -d all
# + colab={"base_uri": "https://localhost:8080/"} id="4HTeKJcaAk4G" executionInfo={"status": "ok", "timestamp": 1631215314493, "user_tz": 240, "elapsed": 405, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg_QuRP-FvpZwye5zw3rmJmceg28bQqANBEfLr_13E=s64", "userId": "09054757205289220354"}} outputId="6fe7bda4-504a-4ae6-bdb6-94d4ca7bcc8e"
# ! python FaceForensics.py -h
# + colab={"base_uri": "https://localhost:8080/", "height": 272} id="r0Z9I2bvoIj7" executionInfo={"status": "error", "timestamp": 1631464398783, "user_tz": 240, "elapsed": 623, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg_QuRP-FvpZwye5zw3rmJmceg28bQqANBEfLr_13E=s64", "userId": "09054757205289220354"}} outputId="e83d5db5-57ab-43f5-f572-72fa3ee13747"
# #!/usr/bin/env python
""" Downloads FaceForensics++ and Deep Fake Detection public data release
Example usage:
see -h or https://github.com/ondyari/FaceForensics
"""
# -*- coding: utf-8 -*-
import argparse
import os
import urllib
import urllib.request
import tempfile
import time
import sys
import json
import random
from tqdm import tqdm
from os.path import join
# URLs and filenames
FILELIST_URL = 'misc/filelist.json'
DEEPFEAKES_DETECTION_URL = 'misc/deepfake_detection_filenames.json'
DEEPFAKES_MODEL_NAMES = ['decoder_A.h5', 'decoder_B.h5', 'encoder.h5',]
# Parameters
DATASETS = {
'original_youtube_videos': 'misc/downloaded_youtube_videos.zip',
'original_youtube_videos_info': 'misc/downloaded_youtube_videos_info.zip',
'original': 'original_sequences/youtube',
'DeepFakeDetection_original': 'original_sequences/actors',
'Deepfakes': 'manipulated_sequences/Deepfakes',
'DeepFakeDetection': 'manipulated_sequences/DeepFakeDetection',
'Face2Face': 'manipulated_sequences/Face2Face',
'FaceShifter': 'manipulated_sequences/FaceShifter',
'FaceSwap': 'manipulated_sequences/FaceSwap',
'NeuralTextures': 'manipulated_sequences/NeuralTextures'
}
ALL_DATASETS = ['original', 'DeepFakeDetection_original', 'Deepfakes',
'DeepFakeDetection', 'Face2Face', 'FaceShifter', 'FaceSwap',
'NeuralTextures']
COMPRESSION = ['raw', 'c23', 'c40']
TYPE = ['videos', 'masks', 'models']
SERVERS = ['EU', 'EU2', 'CA']
def parse_args():
parser = argparse.ArgumentParser(
description='Downloads FaceForensics v2 public data release.',
formatter_class=argparse.ArgumentDefaultsHelpFormatter
)
parser.add_argument('output_path', type=str, help='Output directory.')
parser.add_argument('-d', '--dataset', type=str, default='all',
help='Which dataset to download, either pristine or '
'manipulated data or the downloaded youtube '
'videos.',
choices=list(DATASETS.keys()) + ['all']
)
parser.add_argument('-c', '--compression', type=str, default='raw',
help='Which compression degree. All videos '
'have been generated with h264 with a varying '
'codec. Raw (c0) videos are lossless compressed.',
choices=COMPRESSION
)
parser.add_argument('-t', '--type', type=str, default='videos',
help='Which file type, i.e. videos, masks, for our '
'manipulation methods, models, for Deepfakes.',
choices=TYPE
)
parser.add_argument('-n', '--num_videos', type=int, default=None,
help='Select a number of videos number to '
"download if you don't want to download the full"
' dataset.')
parser.add_argument('--server', type=str, default='EU',
help='Server to download the data from. If you '
'encounter a slow download speed, consider '
'changing the server.',
choices=SERVERS
)
args = parser.parse_args()
# URLs
server = args.server
if server == 'EU':
server_url = 'http://canis.vc.in.tum.de:8100/'
elif server == 'EU2':
server_url = 'http://kaldir.vc.in.tum.de/faceforensics/'
elif server == 'CA':
server_url = 'http://falas.cmpt.sfu.ca:8100/'
else:
raise Exception('Wrong server name. Choices: {}'.format(str(SERVERS)))
args.tos_url = server_url + 'webpage/FaceForensics_TOS.pdf'
args.base_url = server_url + 'v3/'
args.deepfakes_model_url = server_url + 'v3/manipulated_sequences/' + \
'Deepfakes/models/'
return args
def download_files(filenames, base_url, output_path, report_progress=True):
os.makedirs(output_path, exist_ok=True)
if report_progress:
filenames = tqdm(filenames)
for filename in filenames:
download_file(base_url + filename, join(output_path, filename))
def reporthook(count, block_size, total_size):
global start_time
if count == 0:
start_time = time.time()
return
duration = time.time() - start_time
progress_size = int(count * block_size)
speed = int(progress_size / (1024 * duration))
percent = int(count * block_size * 100 / total_size)
sys.stdout.write("\rProgress: %d%%, %d MB, %d KB/s, %d seconds passed" %
(percent, progress_size / (1024 * 1024), speed, duration))
sys.stdout.flush()
def download_file(url, out_file, report_progress=False):
out_dir = os.path.dirname(out_file)
if not os.path.isfile(out_file):
fh, out_file_tmp = tempfile.mkstemp(dir=out_dir)
f = os.fdopen(fh, 'w')
f.close()
if report_progress:
urllib.request.urlretrieve(url, out_file_tmp,
reporthook=reporthook)
else:
urllib.request.urlretrieve(url, out_file_tmp)
os.rename(out_file_tmp, out_file)
else:
tqdm.write('WARNING: skipping download of existing file ' + out_file)
def main(args):
# TOS
print('By pressing any key to continue you confirm that you have agreed '\
'to the FaceForensics terms of use as described at:')
print(args.tos_url)
print('***')
print('Press any key to continue, or CTRL-C to exit.')
_ = input('')
# Extract arguments
c_datasets = [args.dataset] if args.dataset != 'all' else ALL_DATASETS
c_type = args.type
c_compression = args.compression
num_videos = args.num_videos
output_path = args.output_path
os.makedirs(output_path, exist_ok=True)
# Check for special dataset cases
for dataset in c_datasets:
dataset_path = DATASETS[dataset]
# Special cases
if 'original_youtube_videos' in dataset:
# Here we download the original youtube videos zip file
print('Downloading original youtube videos.')
if not 'info' in dataset_path:
print('Please be patient, this may take a while (~40gb)')
suffix = ''
else:
suffix = 'info'
download_file(args.base_url + '/' + dataset_path,
out_file=join(output_path,
'downloaded_videos{}.zip'.format(
suffix)),
report_progress=True)
return
# Else: regular datasets
print('Downloading {} of dataset "{}"'.format(
c_type, dataset_path
))
    # Get filelists and video lengths list from server
if 'DeepFakeDetection' in dataset_path or 'actors' in dataset_path:
filepaths = json.loads(urllib.request.urlopen(args.base_url + '/' +
DEEPFEAKES_DETECTION_URL).read().decode("utf-8"))
if 'actors' in dataset_path:
filelist = filepaths['actors']
else:
filelist = filepaths['DeepFakesDetection']
elif 'original' in dataset_path:
# Load filelist from server
file_pairs = json.loads(urllib.request.urlopen(args.base_url + '/' +
FILELIST_URL).read().decode("utf-8"))
filelist = []
for pair in file_pairs:
filelist += pair
else:
# Load filelist from server
file_pairs = json.loads(urllib.request.urlopen(args.base_url + '/' +
FILELIST_URL).read().decode("utf-8"))
# Get filelist
filelist = []
for pair in file_pairs:
filelist.append('_'.join(pair))
if c_type != 'models':
filelist.append('_'.join(pair[::-1]))
# Maybe limit number of videos for download
if num_videos is not None and num_videos > 0:
print('Downloading the first {} videos'.format(num_videos))
filelist = filelist[:num_videos]
# Server and local paths
dataset_videos_url = args.base_url + '{}/{}/{}/'.format(
dataset_path, c_compression, c_type)
dataset_mask_url = args.base_url + '{}/{}/videos/'.format(
dataset_path, 'masks', c_type)
if c_type == 'videos':
dataset_output_path = join(output_path, dataset_path, c_compression,
c_type)
print('Output path: {}'.format(dataset_output_path))
filelist = [filename + '.mp4' for filename in filelist]
download_files(filelist, dataset_videos_url, dataset_output_path)
elif c_type == 'masks':
dataset_output_path = join(output_path, dataset_path, c_type,
'videos')
print('Output path: {}'.format(dataset_output_path))
if 'original' in dataset:
if args.dataset != 'all':
print('Only videos available for original data. Aborting.')
return
else:
print('Only videos available for original data. '
'Skipping original.\n')
continue
if 'FaceShifter' in dataset:
print('Masks not available for FaceShifter. Aborting.')
return
filelist = [filename + '.mp4' for filename in filelist]
download_files(filelist, dataset_mask_url, dataset_output_path)
# Else: models for deepfakes
else:
if dataset != 'Deepfakes' and c_type == 'models':
print('Models only available for Deepfakes. Aborting')
return
dataset_output_path = join(output_path, dataset_path, c_type)
print('Output path: {}'.format(dataset_output_path))
# Get Deepfakes models
for folder in tqdm(filelist):
folder_filelist = DEEPFAKES_MODEL_NAMES
# Folder paths
folder_base_url = args.deepfakes_model_url + folder + '/'
folder_dataset_output_path = join(dataset_output_path,
folder)
download_files(folder_filelist, folder_base_url,
folder_dataset_output_path,
report_progress=False) # already done
if __name__ == "__main__":
args = parse_args()
main(args)
| code/00.FaceForensics_download_script.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Hitting and Cold Weather in Baseball
#
# **A project by <NAME> (<EMAIL>) on the effects of temperature on major league batters**
#
# **Spring 2016 Semester**
#
#
# ## Introduction
#
# The Ideal Gas Law (PV = nRT) tells us that as the temperature of a gas in a rigid container rises, the pressure of the gas steadily increases as well, due to a rise in the average speed of its molecules. In essence, the amount of energy contained within the system rises, as heat is nothing more than thermal (kinetic) energy. While the Ideal Gas Law holds for gases, a similar increase in molecular vibrations, and therefore energy, is seen in solid objects as well: when the temperature rises, the amount of energy contained within a solid increases. The purpose of this project is to examine the effects of temperature on the game of baseball, with specific regard to the hitting aspect of the game.
#
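# As a quick numeric illustration of that linear temperature-pressure relationship (the values below are arbitrary, chosen only for the example):

```python
# Ideal gas law: P = nRT / V. For fixed n and V, pressure scales
# linearly with absolute temperature.
R = 8.314  # gas constant, J / (mol K)

def pressure(n_mol, temp_k, volume_m3):
    """Pressure (in pascals) of an ideal gas."""
    return n_mol * R * temp_k / volume_m3

# Warming one mole in a rigid 1 m^3 container from 10 C to 35 C.
p_cold = pressure(1.0, 283.15, 1.0)
p_warm = pressure(1.0, 308.15, 1.0)
print(p_warm / p_cold)  # about 1.09, i.e. roughly 9% more pressure
```

# The ratio depends only on the two absolute temperatures, which is the intuition carried over to baseballs here: a warmer object simply holds more thermal energy.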
# ## Hitting in Baseball
#
# The art of hitting an MLB fastball combines an incredible amount of luck, lightning-fast reflexes, and skill. Hitters often have less than half a second to decide whether or not to swing at a ball. However, when sharp contact is made with a fastball screaming towards the plate at over 90 miles per hour, the sheer velocity and energy the ball carries helps it fly off the bat at an even faster speed. The higher the pitch velocity, the more energy a ball contains, and the faster its "exit velocity" (the speed of the ball when it is hit). This project examines whether or not the extra energy provided by the ball's temperature plays a significant role in MLB hitters' abilities to hit the ball harder. By analyzing the rates of extra base hits (doubles, triples, and home runs, which generally require a ball to be hit much harder and further than a single) at different temperature ranges, I hope to discover a significant correlation between temperature and hitting rates.
#
# ### Packages Used
#
#
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
# %matplotlib inline
# **Pandas:** I imported pandas for use in reading my many .csv files and because the pandas module contains dataframes, which are much easier to use for data analysis than lists or dictionaries.
#
# **matplotlib.pyplot:** matplotlib.pyplot was used to create graphs and scatterplots of the data, and because the creation of figure and axis objects with matplotlib allows for easier manipulation of the physical aspects of a plot.
#
# **statsmodels.formula.api** was imported for the linear regression models at the end of this project.
# ## Data Inputting
#
# The data for this project was collected from baseball-reference.com's [Play Index](http://www.baseball-reference.com/play-index/), which allows users to sort and search for baseball games based on a multitude of criteria, including team, player, and weather conditions (temperature, wind speed/direction, and precipitation). Unfortunately, the Play Index only allows registered users to access and export a list of 300 games at a time. As a result, I had to download 33 separate CSV files from the website to gather all 9-inning MLB games from the 2013 - 2015 seasons. The total number of games used in this data set was 8805. Because the filenames were all **'C:/Users/Nathan/Desktop/BaseBall/data/play-index_game_finder.cgi_ajax_result_table.csv'** followed by a number in parentheses, I was able to use a for loop to combine all the data into one large dataframe.
#
# **An online version of these files is available at [this link](https://github.com/njd304/Data-Bootcamp)**
#
# +
#import data from CSVs
total = pd.read_csv('C:/Users/Nathan/Desktop/BaseBall/data/play-index_game_finder.cgi_ajax_result_table.csv')
for i in range(1,32):
file = 'C:/Users/Nathan/Desktop/Baseball/data/play-index_game_finder.cgi_ajax_result_table (' + str(i) +').csv'
data = pd.read_csv(file)
    total = pd.concat([total, data])  # DataFrame.append was removed in pandas 2.0
total.head(30)
# -
# ## Data Cleansing
#
# Because of the nature of the baseball-reference.com Play Index, there were some repeated games in the CSV files, and after every 25 games the headers would reappear. In order to clean the data, I removed each row where the 'Temp' value was 'Temp', because those were the repeated header rows. I removed the unnecessary columns by iterating through the column values with a for loop and deleting the ones not in the `important` list. Finally, I removed the duplicate entries from the dataframe using the df.drop_duplicates() method.
#Clean data to remove duplicates, unwanted stats
important = ['Date','H', '2B', '3B', 'HR', 'Temp']
for i in total:
if i in important:
continue
del total[i]
#remove headers
total = total[total.Temp != 'Temp']
#remove duplicates
total = total.drop_duplicates()
#remove date -> cannot remove before because there are items that are identical except for date
del total['Date']
# remove date from important list
important.remove('Date')
total.head(5)
#change dtypes to int
total[['Temp', 'HR', '2B', '3B', 'H']] = total[['Temp', 'HR', '2B', '3B', 'H']].astype(int)
total.dtypes
# +
#calculate extra-base-hits (XBH) (doubles, triples, home runs) for each game
#by creating a new column in the dataframe
total['XBH'] = total['2B'] + total['3B'] + total['HR']
#append XBH to important list
important.append('XBH')
# -
#separate data into new dataframes based on temperature ranges
#below 50
minus50 = total[total.Temp <= 50]
#50-60
t50 = total[total.Temp <= 60]
t50 = t50[t50.Temp > 50]
#60-70
t60 = total[total.Temp <= 70]
t60 = t60[t60.Temp > 60]
#70-80
t70 = total[total.Temp <= 80]
t70 = t70[t70.Temp > 70]
#80-90
t80 = total[total.Temp <= 90]
t80 = t80[t80.Temp > 80]
#90-100
t90 = total[total.Temp <= 100]
t90 = t90[t90.Temp > 90]
#over 100
over100= total[total.Temp > 100]
minus50.head(5)
#New dataframe organized by temperature
rangelist = [minus50, t50, t60, t70, t80, t90, over100]
data_by_temp = pd.DataFrame()
data_by_temp['ranges']=['<50', "50's", "60's","70's","80's","90's",">100"]
#calculate per-game averages by temperature range
for i in important:
data_by_temp[i+'/Game'] = [sum(x[i])/len(x) for x in rangelist]
#set index to temperature ranges
data_by_temp = data_by_temp.set_index('ranges')
data_by_temp.head(10)
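# The manual slicing above works, but the same per-game averages can be computed more compactly with `pd.cut` and `groupby`. A sketch on toy data (the column names mirror the dataframe built above; the numbers are invented):

```python
import pandas as pd

# Toy stand-in for the `total` dataframe built above (numbers invented).
toy = pd.DataFrame({
    'Temp': [45, 55, 65, 75, 85, 95, 105],
    'XBH':  [2, 3, 2, 4, 3, 5, 4],
})

# Bin the temperatures once instead of building one dataframe per range.
# pd.cut's default right-closed intervals match the <=50, >50 & <=60, ... logic.
bins = [-float('inf'), 50, 60, 70, 80, 90, 100, float('inf')]
labels = ['<50', "50's", "60's", "70's", "80's", "90's", '>100']
toy['range'] = pd.cut(toy['Temp'], bins=bins, labels=labels)

# Per-game averages per temperature range.
per_game = toy.groupby('range', observed=True)['XBH'].mean()
print(per_game)
```

# This scales to any number of statistics with a single `groupby(...).mean()` call.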
# ## Data Plots
#
# I made a couple of bar graphs to compare average extra base hits per game by temperature range, and to compare home runs per game as well, because home runs are the furthest-hit balls and in theory should see the largest temperature impact if there is in fact a measurable impact on the baseballs. I then made a couple of scatterplots to compare the complete data results and look for some sort of trendline. Unfortunately, because of the limited number of possible results, the scatterplots did not come out as I had hoped.
#plots
fig, ax=plt.subplots()
data_by_temp['XBH/Game'].plot(ax=ax,kind='bar',color='blue', figsize=(10,6))
ax.set_title("Extra Base Hits Per Game by Temp Range", fontsize=18)
ax.set_ylim(2,3.6)
ax.set_ylabel("XBH/Game")
ax.set_xlabel("Temperature")
plt.xticks(rotation='horizontal')
#plots
fig, ax=plt.subplots()
data_by_temp['HR/Game'].plot(ax=ax,kind='bar',color='red', figsize=(10,6))
ax.set_title("Home Runs Per Game by Temp Range", fontsize=18)
ax.set_ylim(0,1.2)
ax.set_ylabel("HR/Game")
ax.set_xlabel("Temperature")
plt.xticks(rotation='horizontal')
#scatterplot
x = data_by_temp.index
fig, ax = plt.subplots()
ax.scatter(total['Temp'],total['XBH'])
ax.set_title("Temp vs Total Extra Base Hits", fontsize = 18)
ax.set_ylabel("XBH/Game")
ax.set_xlabel("Temperature")
plt.xticks(rotation='horizontal')
ax.set_ylim(-1,14)
#scatterplot
x = data_by_temp.index
fig, ax = plt.subplots()
ax.scatter(total['Temp'],total['HR'])
ax.set_title("Temp vs Total Home Runs", fontsize = 18)
ax.set_ylabel("HR/Game")
ax.set_xlabel("Temperature")
plt.xticks(rotation='horizontal')
ax.set_ylim(-1,10)
# ## Statistical Analysis
#
# I ran a linear regression of total extra base hits against temperature for the master data set to see if there was a correlation. The r-squared values are very small, partly because the number of realistically possible home runs per game is limited and the sample size is so large (see the scatterplots above). The regressions for extra base hits and temperature, as well as home runs and temperature, both show a minuscule correlation between temperature and hits. Because the slope values are so small (a 100 degree increase in temperature correlates to roughly a 1 extra-base-hit increase and a 0.7 home run increase), there is essentially no practical correlation. After all, a 100 degree increase spans basically the entire range of this project.
regression = smf.ols(formula="XBH ~ Temp", data=total).fit()
regression.params
regression.summary()
regression2 = smf.ols(formula="HR ~ Temp", data=total).fit()
regression2.params
regression2.summary()
# ## Conclusion
#
# Ultimately, the results of this project were mixed and more negative than positive. Though the bar graph on average extra base hits per game showed a steady increase as temperature increased, the same was not true for average home runs per game. Furthermore, the regression analysis showed only a tiny relationship between the variables. Although the results were statistically significant, this was due more to the huge sample size than to the existence of a meaningful correlation. Ultimately, the data collected failed to suggest that temperature has a large impact on the ability of MLB hitters to hit a baseball with power. I had been hoping to discover a more impactful effect of temperature on the ability of hitters to hit the ball far. A possible expansion upon this experiment would be a team-by-team (or stadium-by-stadium) breakdown of how each team performed under different temperature conditions. For example, it is likely that the Tampa Bay Rays or the Los Angeles Angels, from the southern part of the US and unaccustomed to playing in colder temperatures, may have been more affected than a team like the Boston Red Sox, who regularly play in colder games, especially in the spring and fall months.
# ## Sources
#
# Data from this project was collected from the baseball-reference.com Play Index, which can be found at http://www.baseball-reference.com/play-index/. In order to unlock the full potential of the Play Index, a paid membership is required.
| UG_S16/Ding-Baseball+Weather.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.0 64-bit
# language: python
# name: python3
# ---
# +
number = ['No.', 1, 2, 3, 4, 5, 6]
company = ['Company', 'Microsoft', 'Amazon',
'Paypal', 'Apple', 'Fastly', 'Square']
cap = ['Cap', 'Mega', 'Mega', 'Large', 'Large', 'Mid', 'Mid']
qty = ['Qty', '100', '5', '80', '100', '30', '30']
bought_price = ['Bought Price', '188', '1700', '100', '60', '40', '40']
market_price = ['Market Price', '207', '3003', '188', '110', '76', '178']
# space = 15
spaces = list(map(lambda x: len(str(x)),next(zip(number, company, cap, qty,
bought_price, market_price))))
spaces = [i+5 for i in spaces]
for a, b, c, d, e, f in zip(number, company, cap, qty, bought_price, market_price):
print(
f"{a:<{spaces[0]}} {b:<{spaces[1]}} {c:<{spaces[2]}} {d:<{spaces[3]}} {e:<{spaces[4]}} {f:<{spaces[5]}}")
# -
import numpy as np
m = np.nan
m is np.nan
np.isnan(m)
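# One related pitfall worth noting: NaN never compares equal to anything, including itself, so equality tests are the wrong tool for detecting it.

```python
import numpy as np

m = np.nan
print(m == np.nan)   # False: NaN is not equal to anything, even itself
print(m != m)        # True: the standard self-inequality test for NaN
print(np.isnan(m))   # True: the explicit, readable check
```

# This is why `np.isnan` (or `pd.isna` for pandas data) should be preferred over `==` comparisons.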
| format.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Functions, modules, packages and libraries
# There are many different types of programmers.
# Take, for example, the area of self-driving cars.
# We can think of a variety of people who write different types of programs in that industry.
#
# There are programmers who work in the low-levels of the car technologies, such as ABS brakes that prevent skidding.
# ABS brakes require small, special-purpose chips that read the outputs of sensors related to the brakes in your car and make decisions based on them.
# Somebody must write the code to run on those chips.
# In fact, there won't be just one person but a team of people, a team of specialised programmers.
#
# A different team will be responsible for automating the systems that drive the car.
# They will write the higher-level software that makes softer decisions like whether a stray dog is likely to run in front of the car or not.
# The software will decide what angle to turn the steering wheel or how hard to push the accelerator.
# This team will rely at least partly on the teams that program the low-level devices - their software will be based on a range of data coming from the lower levels.
# They'll have a different skillset and a different focus.
# Because of this, it's essential that the interface between the teams is well managed.
# That is, there must be clear lines of responsibility and communication between the teams.
#
# There are likely to be many other teams too.
# The automation team's work will be based on research.
# In companies, researchers tend to be a different group of people to the software developers.
# Programming for research tends to be bespoke and rough and ready.
# Research goals are typically scientific, statistical results.
# Researchers are typically not too concerned with writing reusable, user-friendly software.
# When a researcher writes code, they are usually the only person to ever run it and they essentially only run it once.
#
# Yet another team might take data from the car and combine it with data from all the other cars to analyse it.
# They'll be looking for anomalies that might be dangerous or performance metrics to see can they improve the car.
#
# All these teams must work together in an environment where they have different aims, constraints and issues.
# Over many decades the computing industry has elicited some basic principles for code writing.
# These make programming a little easier, a bit more efficient, and a lot safer for everyone involved.
# In what follows we'll touch on a couple of these principles.
# ## Reusability
#
# The basic principle guiding most programming work is reusability.
# There are numerous buzzwords and phrases related to reusability, such as Don't Repeat Yourself (DRY).
# The idea is that you avoid re-writing the same or similar code in different parts of your program.
# Rather, you write the code once, give it a name, and then use it by name from then on.
#
# For example, let's say you have many occasions to calculate the factorial of a positive integer. That functionality isn't built directly into the Python language, so you must write a few statements to do it.
# Calculate 10 factorial.
factorial10 = 1
for i in range(1, 11):
factorial10 = factorial10 * i
print(factorial10)
# This code calculates only the factorial of 10. If you want to calculate the factorial of 11, then you need to write code that is highly similar.
# Calculate 11 factorial.
factorial11 = 1
for i in range(1, 12):
factorial11 = factorial11 * i
print(factorial11)
# ## Functions
#
# Programmers hate this kind of duplication for many reasons.
# One is that if you find a bug in your factorial-calculating code then you must change the code everywhere it's written.
# The same applies if you find a better, more efficient way to calculate the factorial of a number.
# To avoid this, we write a function with clearly defined inputs and output.
def factorial(n):
"""Return the factorial of n."""
ans = 1
for i in range(1, n + 1):
ans = ans * i
return ans
# Now you can call the code using its name and use it to calculate any factorial.
print(factorial(10))
print(factorial(11))
# Let's say you use this factorial function lots of times in your code, and then realise you could make the function more efficient.
# For instance, our function multiplies `ans` by 1 in the first iteration of the `for` loop, which has no effect.
# We can change `range(1, n + 1)` to `range(2, n + 1)`, which will give the same result with fewer iterations.
# Since we've written it as a function, we can just change the code in one place.
# It will automatically filter through to all the places where we have called the function.
def factorial(n):
"""Return the factorial of n."""
ans = 1
for i in range(2, n + 1):
ans = ans * i
return ans
# The following code, which is the same code from before, gives the same result but it now works more efficiently.
print(factorial(10))
print(factorial(11))
# ## Over-abstraction
#
# When you get the hang of functions, it's natural to start turning everything into a function.
# There is a temptation to start adding bells and whistles too.
# It turns out that you can take it too far.
#
# Let's write a simple (and largely unnecessary) function to square a number.
# Note that Python has a power operator built-in: `10**2` gives `100`.
# We won't use that here - imagine it doesn't exist for now.
def square(x):
"""Return the square of x."""
return x * x
print(square(11))
# Now, let's say you also need a function to cube a number.
def cube(x):
"""Return the cube of x."""
return x * x * x
print(cube(11))
# Here's an idea: let's write a power function that not only squares and cubes but can raise any number to any (positive integer) power.
def power(x, y):
"""Return x to the power of y."""
ans = x
for i in range(y - 1):
ans = ans * x
return ans
print(power(11, 2))
print(power(11, 3))
# In some ways this is better.
# We now have one function instead of two and we are looking super-DRY since we have removed some duplication of code.
# However, there are trade-offs to consider.
#
# First, the `power` function is a little more complex to use than each of the `square` and `cube` functions.
# We might be more likely to get confused when using it.
# Say I've had too much coffee (quite likely) and I incorrectly use the function as `power(2, 10)` when what I meant to write was `power(10, 2)` to get `100`.
# That wouldn't have happened if I was using `square` instead, as it only takes one argument.
# Of course, you can write `square` and `cube` in terms of `power` if you want.
# +
def square(x):
"""Returns the square of x."""
return power(x, 2)
def cube(x):
"""Returns the cube of x."""
return power(x, 3)
print(square(10))
print(cube(10))
# -
# Another trade-off is the efficiency of the code.
# Loops are typically costly operations in programming - they take a little while to get going and complete.
# The original `square` and `cube` functions don't use loops.
# The `power` function does, and therefore the second versions of `square` and `cube` that are based on it do too.
# You should consider how many times you will likely call the functions.
# If you use them once every so often, then the (possible) slight inefficiency won't matter.
# On the other hand, if you're calling them 1,000 times a second it might.
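# One way to decide is to measure. A quick sketch using the standard library's `timeit` module (absolute timings vary by machine, so treat the printed numbers as indicative only):

```python
import timeit

def square_direct(x):
    """Square by a single multiplication."""
    return x * x

def power(x, y):
    """Raise x to the positive integer power y with a loop."""
    ans = x
    for i in range(y - 1):
        ans = ans * x
    return ans

def square_via_power(x):
    """Square by calling the more general, loop-based power."""
    return power(x, 2)

# Time each version over many calls.
t_direct = timeit.timeit(lambda: square_direct(11), number=100_000)
t_general = timeit.timeit(lambda: square_via_power(11), number=100_000)
print(t_direct, t_general)
```

# Both versions return the same answers; the question timing answers is whether the extra function call and loop are cheap enough for your use case.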
# The keyword here is abstraction.
# When we wrote our `factorial` function, we abstracted the idea of multiplying a number by all the numbers less than it.
# The factorial function is a high-level concept, an abstraction.
# Likewise, when we wrote our `power` function we abstracted the idea of multiplying a number by itself several times.
# We added another layer of abstraction when we re-wrote the square and cube functions to use the `power` function.
#
# We've seen a couple of downsides of using abstractions - the complexity and the possible inefficiency.
# Unfortunately, there's no one-size-fits-all rule as to when you should and shouldn't abstract.
# It might help to avoid considering whether abstraction is good or bad.
# Rather, think of it as tool that can be used when it helps.
# This is where programming becomes a bit of an art.
# ## Modules and packages
#
# Another benefit of writing our code in functions is that we can share them with our collaborators.
# Modern programming is fundamentally based on this idea.
# There are not too many people who know how to program everything from the ground up.
# Rather, people specialise in one aspect of programming and share their work with others.
#
# Functions enable this by hiding the details under their hood.
# To use a function, it's enough to know what it does, what inputs it expects and what output it gives.
# How it does it can often be left to someone else.
# This is sometimes called the *black box* view of functions.
#
# This is a useful concept and modern programming is largely based on it.
# A typical program will be built from lots of functions, often written by lots of different people.
# We can write a bunch of useful functions in a single file and pass the file around to our friends so that they can use our functions.
# Python calls these kinds of files modules.
#
# Modules are normal Python scripts that can be run through Python just like any other script.
# The difference is in their intended use.
# Modules are scripts that don't really do anything by themselves - they're meant for use in other programs.
# Remember all the programmers and teams involved in self-driving cars?
# Modules allow them to each write individual parts of the final code that can then be joined up into one final automated driving program.
# Modules themselves can be organised in packages, which are essentially folders containing modules.
# Modules and packages turn out to be useful for lots of reasons.
# When working collaboratively, it's convenient that different programmers can work on small parts of the code organised in separate files.
# This helps to avoid problems (sometimes called conflicts) where two programmers edit the same part of the same file at the same time and the computer doesn't know which version of the code to keep.
#
# Modules also provide an easy solution to re-use the same code in different programs.
# There are usually some parts of programs that we write that can be re-used in other programs, while there are some parts that are unlikely to be re-used.
# If we separate the re-usable parts into a module, we can include just those parts in both programs.
#
# To find out more about modules, including how to write your own, you can consult [part 6 of the Python tutorial](https://docs.python.org/3.5/tutorial/modules.html).
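# As a concrete sketch of writing and using your own module, the code below generates a tiny module file and then imports it. (The module name `mathsutils` is invented for this example; normally you would simply create the file by hand in your project folder.)

```python
import pathlib
import sys
import tempfile

# Write a tiny module to disk; normally you would create this file by hand.
module_source = '''
def factorial(n):
    """Return the factorial of n."""
    ans = 1
    for i in range(2, n + 1):
        ans = ans * i
    return ans
'''
folder = tempfile.mkdtemp()
pathlib.Path(folder, 'mathsutils.py').write_text(module_source)

# Make the folder importable, then use the module like any other.
sys.path.insert(0, folder)
import mathsutils

print(mathsutils.factorial(5))  # 120
```

# A module that lives in the same folder as your script can be imported directly, without any `sys.path` manipulation.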
# ## Libraries
#
# Over time the programming community at large have realised that there are vast swathes of re-usable functionality.
# This has led to the creation of libraries of packages, modules and functions that are freely available for incorporation into your own programs.
#
# One important library is (nearly) always installed alongside Python itself, and it is called the standard library.
# It contains functions that are commonly used in programs, but not often enough to be included directly as part of Python itself.
# Generally, they try to keep Python lean, without all the extra functions unless they're needed.
#
# To use modules from the standard library you must first tell Python that you plan to use them.
# You do this using the `import` keyword.
# This incurs a slight cost, but you get the extra functionality.
# It turns out there's a function to calculate factorials in the standard library.
# +
import math
print(math.factorial(10))
# -
# Note the use of the `math` name in front of `factorial` function.
# This tells Python that the function is in the `math` module, rather than the current file we are writing in.
# We must have imported `math` somewhere previously in the current file, otherwise Python will give us an error to tell us it doesn't know what `math` is.
# When Python was installed on our computer it installed the math module and configured itself to know where it is.
# It can be in a different location depending on your own system, but we don't have to worry about it because the Python installer took care of it.
#
# (I'm glossing over quite a few technicalities here, but they're not important for this discussion. The math module is, in fact, a special module that isn't even written in Python, but that idea isn't relevant here. The only reason I mention it is that if you go looking for math.py on your system you won't find it. A module you can go looking for on your own system is `os`. It helps you access underlying Windows/MacOS/Linux functionality on your system. You can view the file online [here](https://svn.python.org/projects/python/trunk/Lib/os.py), just to convince you that it's a bunch of Python code that someone else wrote.)
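# As a quick taste of `os`, here are a few of its commonly used functions:

```python
import os

# Current working directory and (up to) the first five entries in it.
print(os.getcwd())
print(os.listdir('.')[:5])

# Build a path in a platform-independent way.
print(os.path.join('folder', 'subfolder', 'file.txt'))
```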
# ## Other useful modules
#
# It turns out that while it's important to know the Python fundamentals, most programmers rarely write code from scratch. Rather, they use other people's code as their building blocks. Aside from the modules provided in the standard library, there are many useful ones that are provided online. They come pre-installed with some distributions of Python, such as Anaconda. If they aren't pre-installed, you can use programs like `pip` to install them. See [here](https://packaging.python.org/tutorials/installing-packages/) for information about pip.
# ### numpy
#
# Numpy provides functions for dealing with numerical data efficiently in Python. While Python does already provide good mathematical functionality out of the box, numpy is highly efficient at things like multiplying matrices and dealing with huge arrays of data.
# +
import numpy
# Create a matrix.
A = numpy.array([[5,2,9],[3,1,2],[8,8,3]])
print(A)
# +
# Access the first row of A.
print(A[0])
# Access the first column of A.
print(A[:,0])
# Access the second element of the third row of A.
print(A[2][1])
# Square A.
print(numpy.matmul(A,A))
# +
# Create list of ten random values between 0 (inclusive) and 1 (exclusive).
r = numpy.random.rand(10)
print(r)
# Create list of ten random normal values with mean 5 and standard deviation 0.1.
r = numpy.random.normal(5, 0.1, 10)
print(r)
# -
# Numpy is usually used as the basis for other modules, like matplotlib.pyplot which plots data for us.
# ### matplotlib.pyplot
# `matplotlib` is the most popular plotting (graphing) package for Python.
# Here we see an example of using it to plot the curve $ y = x^2 $.
# +
import matplotlib.pyplot as p
# Create a numpy array containing the numbers from 0 to 99 inclusive.
x = numpy.array(range(100))
# Create another numpy array from x, by squaring each element in turn.
y = x**2
# Plot x versus y.
p.plot(x, y)
# -
# There are a couple of things to notice here.
# First, there's a dot in `matplotlib.pyplot`.
# The dot means that `matplotlib` is a package, and `pyplot` is a module within that package.
# It's not important for us to dwell on this.
# In use, we treat it much the same as if `matplotlib.pyplot` was the module.
#
# Secondly, after the `import matplotlib.pyplot` we add `as p`.
# This is a handy way of avoiding having to type `matplotlib.pyplot` every time you want to use a function in it.
# It lets us, for example, type `p.plot()` instead of `matplotlib.pyplot.plot()`.
# It basically gives us a nickname for the module.
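# The alias is just a second name bound to the very same module object. By convention the community uses `np` for numpy (and `plt` for matplotlib.pyplot):

```python
import numpy
import numpy as np

# `np` and `numpy` refer to the very same module object.
print(np is numpy)  # True
print(np.array([1, 2, 3]).sum())  # 6
```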
# ## Further reading
#
# As you learn more about Python, and begin to apply it to real-world problems, you will find yourself relying on modules and libraries written by other people.
# It's often best not to try to write much code from scratch yourself, as packages like `numpy` and `matplotlib` have been written by many people with a good deal of programming and mathematical expertise.
# They've been built up over several years, sometimes decades, and are usually heavily informed by research in these areas.
# Rather, for a given programming task, you should try to use packages like these as your building blocks.
# In future you might consider contributing to their development, as they are open source.
# For now, it's easy to get started with them, as there is a huge amount of beginner's literature available online, such as the following.
# 1. [Pyplot tutorial (https://matplotlib.org/users/pyplot_tutorial.html)](https://matplotlib.org/users/pyplot_tutorial.html)
# 2. [Numpy Quickstart tutorial (https://docs.scipy.org/doc/numpy-dev/user/quickstart.html)](https://docs.scipy.org/doc/numpy-dev/user/quickstart.html)
# 3. [Python numpy tutorial (http://cs231n.github.io/python-numpy-tutorial/)](http://cs231n.github.io/python-numpy-tutorial/)
# 4. [Scipy lecture notes (http://www.scipy-lectures.org/intro/matplotlib/matplotlib.html)](http://www.scipy-lectures.org/intro/matplotlib/matplotlib.html)
| functions-modules.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import nafigator
import lxml.etree as ET
# +
xml_filename = "..//data//example.naf"
# xsl_filename = "..//data//naf.xslt"
xsl_filename = "..//data//xml2rdf3.xsl"
output = "..//data//example.rdf"
dom = ET.parse(xml_filename)
xslt = ET.parse(xsl_filename)
# -
transform = ET.XSLT(xslt)
newdom = transform(dom)
newdom.write(output,
encoding='utf-8',
pretty_print=True,
xml_declaration=True)
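# The transform above depends on files on disk. As a self-contained illustration of the same parse-stylesheet-transform pattern, here is a toy stylesheet (invented for this example) applied to an in-memory document:

```python
import lxml.etree as ET

# A toy input document and a stylesheet that renames its elements.
xml_doc = ET.fromstring('<words><w>hello</w><w>world</w></words>')
xslt_doc = ET.fromstring('''
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/words">
    <tokens>
      <xsl:for-each select="w">
        <token><xsl:value-of select="."/></token>
      </xsl:for-each>
    </tokens>
  </xsl:template>
</xsl:stylesheet>
''')

# Compile the stylesheet once, then apply it to the document.
transform = ET.XSLT(xslt_doc)
result = transform(xml_doc)
print(ET.tostring(result, pretty_print=True).decode())
```

# The real conversion below follows exactly this shape, only with the NAF document and the xml2rdf stylesheet loaded from files.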
import rdflib
# +
from rdflib import Graph
g = Graph()
g.parse(output)
# +
output_ttl = "..//data//example.ttl"
g.serialize(output_ttl, format='turtle', base=None, encoding="utf-8")
# -
| notebooks/Convert .naf (xml) to .rdf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="qAZbsGcIFimR"
# # How to understand and manipulate the periodogram of an oscillating star
# + [markdown] colab_type="text" id="Hzt4B1ZNFprf"
# ## Learning Goals
#
# By the end of this tutorial you will:
#
# - Understand the key features of periodograms of oscillating stars.
# - Understand how these features change depending on the type of star being studied.
# - Be able to manipulate the periodogram to focus in on areas you're interested in.
# - Be able to smooth a periodogram.
# - Be able to remove features such as the convective background in solar-like oscillators.
# + [markdown] colab_type="text" id="X5FusNRmGSwu"
# ## Introduction
#
# The brightnesses of stars can oscillate — that is, vary over time — for many different reasons. For example, in the companion tutorials we explored light curves that oscillated due to an eclipsing binary pair transiting in front of one another, and we looked at a star that showed variability due to star spots on its surface rotating in and out of view.
#
# In this tutorial, we will focus on *intrinsic* oscillators: stars that exhibit variability due to processes inside the stars. For example, one of these internal processes is the presence of standing waves trapped in the interior. When the light curve of a star is transformed into the frequency domain, such waves can be observed as distinct peaks in the frequency spectrum of the star. The branch of astronomy that focuses on studying these signals is called [*asteroseismology*](https://en.wikipedia.org/wiki/Asteroseismology).
#
# Asteroseismology is an important tool because it allows intrinsic properties of a star, such as its mass and radius, to be estimated from the light curve alone. The only requirement is that the quality of the light curve — its duration, sampling, and precision — must be good enough to provide a high-resolution view of the star in the frequency domain. *Kepler* data is particularly well-suited for this purpose.
#
# In this tutorial, we will explore two types of intrinsic oscillators that are commonly studied by asteroseismologists:
# 1. **$\delta$ Scuti stars**: a class of oscillating stars typically 1.5 to 2.5 times as massive as the Sun, which oscillate due to fluctuations in the opacity of the outer layers of the star.
# 2. **Solar-Like Oscillators**: a class that includes all stars that oscillate in the same manner as the Sun, namely due to turbulent motion in the convective outer layers of their atmospheres. This includes both main sequence and red giant stars.
#
#
#
# + [markdown] colab_type="text" id="xIVeKA87FwgX"
# ## Imports
# This tutorial only requires **[Lightkurve](https://docs.lightkurve.org)**, which in turn uses **[Matplotlib](https://matplotlib.org/)** for plotting.
# + cellView="both" colab={} colab_type="code" id="Bb6VnXNWFyl4"
import lightkurve as lk
# %matplotlib inline
# + [markdown] colab_type="text" id="pRDRo8S3Sa_Y"
# ---
# + [markdown] colab_type="text" id="fOoYYUBIGg4x"
# ## 1. Exploring the Frequency Spectrum of a $\delta$ Scuti Oscillator
# + [markdown] colab_type="text" id="adwNO24IKMSy"
# [$\delta$ Scuti stars](https://en.wikipedia.org/wiki/Delta_Scuti_variable) are stars roughly 1.5 to 2.5 times as massive as the Sun, and oscillate due to fluctuations in the opacity of the outer layers of the star ([known as the Kappa mechanism](https://en.wikipedia.org/wiki/Kappa%E2%80%93mechanism)), alternately appearing brighter and fainter.
#
# An example star of this type is HD 42608, which was recently observed by the *TESS* space telescope. We can search for these data using Lightkurve:
# + cellView="both" colab={"base_uri": "https://localhost:8080/", "height": 78} colab_type="code" executionInfo={"elapsed": 1668, "status": "ok", "timestamp": 1600488241133, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="L3g1lr0SKNgb" outputId="841dce66-4e19-428e-8a82-841a2bb909f0"
lk.search_lightcurve('HD 42608', mission='TESS')
# + [markdown] colab_type="text" id="DtrKUCiiMOSP"
# Success! A light curve for the object appears to be available in the data archive. Let's go ahead and download the data and convert it straight to a [`periodogram`](https://docs.lightkurve.org/api/lightkurve.periodogram.Periodogram.html) using the [`to_periodogram()`](https://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html#lightkurve.lightcurve.KeplerLightCurve.to_periodogram) function.
# + cellView="both" colab={"base_uri": "https://localhost:8080/", "height": 404} colab_type="code" executionInfo={"elapsed": 46268, "status": "ok", "timestamp": 1600488171049, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="BvnxPAgQMIDm" outputId="23ff2f53-018b-4740-9f56-d9842c348e12"
lc = lk.search_lightcurve('HD 42608', mission='TESS', author='SPOC', sector=6).download()
pg = lc.normalize().to_periodogram()
pg.plot();
# + [markdown] colab_type="text" id="e2uG8e-OOkCn"
# We can see that there is a strong power excess around 50 cycles per day. These peaks indicate stellar oscillations.
#
# To study these peaks in more detail, we can zoom in by recreating the periodogram using the [`minimum_frequency`](https://docs.lightkurve.org/api/lightkurve.periodogram.LombScarglePeriodogram.html#lightkurve.periodogram.LombScarglePeriodogram.from_lightcurve) and [`maximum_frequency`](https://docs.lightkurve.org/api/lightkurve.periodogram.LombScarglePeriodogram.html#lightkurve.periodogram.LombScarglePeriodogram.from_lightcurve) keywords:
# + cellView="both" colab={"base_uri": "https://localhost:8080/", "height": 403} colab_type="code" executionInfo={"elapsed": 46924, "status": "ok", "timestamp": 1600488171725, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="QwDbpGCUO3y2" outputId="16f2d2d5-0aaa-403e-9dc7-7de18b63c76d"
pg = lc.normalize().to_periodogram(minimum_frequency=35,
maximum_frequency=60)
pg.plot();
# + [markdown] colab_type="text" id="P7-1v-C1PwaF"
# This is much clearer!
#
# Stars of this type are known to display multiple types of oscillation, including:
# - **Radial Oscillations**: caused by the star shrinking and expanding radially. Also called a "breathing mode."
# - **Dipole Oscillations**: caused by the star's hemispheres shrinking and expanding alternately.
#
# Both types of oscillations are on display in the figure above. Identifying exactly what type of oscillation a given peak represents is challenging. Fortunately, this star (HD 42608) is part of a set of stars for which the oscillations have been analyzed in detail in a research paper by [Bedding et al. (2020)](https://arxiv.org/pdf/2005.06157.pdf), so you can consult that paper to learn more about the details.
#
# Note that the modes of oscillation are very "sharp" in the figure above. This is because $\delta$ Scuti oscillations are *coherent*, which is a term astronomers in the field use for signals that have long lifetimes and are not heavily damped. Because of this, their exact oscillation frequencies can be observed in a fairly straightforward way. This sets $\delta$ Scuti stars apart from solar-like oscillators, which are damped. Let's look at an example of such a star next.
# + [markdown] colab_type="text" id="Yht1JopOMh4w"
# ## 2. Exploring the Frequency Spectrum of a Solar-Like Oscillator
#
# + [markdown] colab_type="text" id="ur-BwspSTU_k"
# Solar-like oscillators exhibit variability driven by a different mechanism than $\delta$ Scuti stars. They encompass the class of stars that [oscillate in the same manner as the Sun](https://en.wikipedia.org/wiki/Helioseismology). Because they have lower masses than $\delta$ Scuti stars, solar-like oscillators have convective outer envelopes. The turbulent motion of these envelopes excites standing waves inside the stars which cause brightness changes on the surface. Unlike $\delta$ Scuti stars however, these waves are not coherent. Instead, these waves are stochastic and damped, which means that the lifetimes and amplitudes of the waves are limited and variable.
#
# While the name might imply that only stars like the Sun are solar-like oscillators, this is not true. All stars with convective outer layers can exhibit solar-like oscillations, including red giant stars!
# + [markdown] colab_type="text" id="VRqwTgnjUPP9"
# Let's have a look at the Sun-like star KIC 10963065 ([also known as Rudy](https://arxiv.org/pdf/1612.00436.pdf)), observed with *Kepler*. Because solar-like oscillation amplitudes are low, we will need to combine multiple quarters of data to improve our signal-to-noise.
#
# We can list the available data sets as follows:
# + cellView="both" colab={"base_uri": "https://localhost:8080/", "height": 372} colab_type="code" executionInfo={"elapsed": 47495, "status": "ok", "timestamp": 1600488172311, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="Zsw3tSKEUlzb" outputId="6b1773b3-16a8-4fe2-d960-74652a2ff534"
search_result = lk.search_lightcurve('KIC 10963065', mission='Kepler')
search_result
# + [markdown] colab_type="text" id="6-1E6szeVLwt"
# To create and plot this periodogram, we will apply a few common practices in the field:
# - We will combine multiple quarters to improve the frequency resolution.
# - We will [`normalize`](https://docs.lightkurve.org/api/lightkurve.lightcurve.LightCurve.html#lightkurve.lightcurve.LightCurve.normalize) the light curve to parts per million (`ppm`).
# - We will use the `psd` normalization option when calling [`to_periodogram`](https://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html#lightkurve.lightcurve.KeplerLightCurve.to_periodogram), which sets the units of frequency to microhertz, and normalizes the power using the spacing between bins of frequency.
#
# We'll also plot the resulting figure in log-log space.
# + cellView="both" colab={"base_uri": "https://localhost:8080/", "height": 447} colab_type="code" executionInfo={"elapsed": 55366, "status": "ok", "timestamp": 1600488180193, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="yyuOcEJRU3t7" outputId="cd271b55-a15a-4c8d-f893-2aed959bb4e1"
lc = search_result[0:10].download_all().stitch()
pg = lc.normalize(unit='ppm').to_periodogram(normalization='psd')
pg.plot(scale='log');
# + [markdown] colab_type="text" id="ae6PIa9_VwnZ"
# This periodogram looks very different to that of the $\delta$ Scuti star above. There is a lot of power excess at low frequencies: this is what we call the *convective background*, which is additional noise contributed by the convective surface of the star constantly changing. We do not see any clear peaks like we did for the $\delta$ Scuti oscillator however.
#
# There is a good reason for this: this main sequence star oscillates at frequencies too high to be seen on this periodogram, lying above the periodogram's [Nyquist frequency](https://en.wikipedia.org/wiki/Nyquist_frequency).
# + [markdown] colab_type="text" id="tv7Xf7VwWLJF"
# The Nyquist frequency is a property of a time series that describes the maximum frequency that can be reliably determined in a periodogram. It stems from the assumption that you need a minimum of two observations per oscillation period to observe a pattern (one observation on the "up," and one on the "down" oscillation). It is defined as follows:
#
# $\nu_{\rm nyq} = \frac{1}{2\Delta t}$ ,
#
# where $\Delta t$ is the observing cadence.
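# + [markdown]
# As a quick sanity check, we can evaluate this formula for the two *Kepler* sampling modes discussed in this tutorial. The sketch below uses the rounded 30-minute and 1-minute cadence values quoted in the text (the exact *Kepler* cadences differ slightly):

```python
def nyquist_frequency_uhz(cadence_seconds):
    """Nyquist frequency nu_nyq = 1 / (2 * dt), converted to microhertz."""
    return 1e6 / (2.0 * cadence_seconds)

long_cadence_nyq = nyquist_frequency_uhz(30 * 60)  # ~278 microhertz
short_cadence_nyq = nyquist_frequency_uhz(60)      # ~8333 microhertz
```

# + [markdown]
# So any oscillations near $2000\, \mu\rm{Hz}$ lie far above the 30-minute Nyquist frequency, but comfortably below the 1-minute one.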
# + [markdown] colab_type="text" id="askCRV-NWxVo"
# The reason that we can't see Rudy's oscillations in the periodogram above is because we constructed this periodogram using the *Kepler* 30-minute Long Cadence data. Solar-like oscillators on the main sequence typically oscillate on the order of minutes (five minutes for the Sun), at frequencies much higher than will be visible on this periodogram. To see Rudy's oscillations, we will need to use the *Kepler* Short Cadence (SC) observations, which used a time sampling of one minute. We can obtain these data as follows:
# + cellView="both" colab={"base_uri": "https://localhost:8080/", "height": 447} colab_type="code" executionInfo={"elapsed": 94441, "status": "ok", "timestamp": 1600488219285, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="rxUH6IJ4XI4h" outputId="a28c9ee0-b638-4507-d96d-d87d4dcd86d1"
search_result = lk.search_lightcurve('KIC 10963065',
mission='Kepler',
cadence='short')
lc = search_result[0:10].download_all().stitch()
pg = lc.normalize(unit='ppm').to_periodogram(normalization='psd')
pg.plot(scale='log');
# + [markdown] colab_type="text" id="5rGMsCjOYHNe"
# Now we can see a power excess near $2000\, \mu\rm{Hz}$. This frequency is almost 10 times higher than we could see using the Long Cadence data alone. Let's zoom in on this region so we can look at the signals in more detail, like we did for the $\delta$ Scuti star.
# + cellView="both" colab={"base_uri": "https://localhost:8080/", "height": 445} colab_type="code" executionInfo={"elapsed": 96945, "status": "ok", "timestamp": 1600488221802, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="-lJgX3KdYTYw" outputId="924b7080-6e5e-453b-d79e-241a2a645fd9"
zoom_pg = lc.normalize(unit='ppm').to_periodogram(normalization='psd',
minimum_frequency=1500,
maximum_frequency=2700)
zoom_pg.plot();
# + [markdown] colab_type="text" id="KkF828uuNJAQ"
# Compared to the $\delta$ Scuti star, the modes of oscillation in the figure above are less sharp, even though we used much more data to create the periodogram. This is because the modes in solar-like oscillators are damped due to the turbulent motion of the convective envelope. This lowers their amplitudes and also causes the lifetimes of the oscillations to be short. The short lifetimes create some uncertainty around the exact oscillation frequency, and so the peaks that appear in the periodogram are a little broader (usually Lorentzian-like in shape). This may not be immediately apparent from these figures, but is much clearer if you zoom in on an individual mode.
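# + [markdown]
# The Lorentzian peak shape mentioned above can be written down directly. In this sketch, the linewidth (full width at half maximum) of a mode is tied to its lifetime via the standard damped-oscillator relation $\Gamma = (\pi\tau)^{-1}$; the lifetime value used is purely illustrative, not a measurement for this star:

```python
import numpy as np

def lorentzian(freq, freq0, height, linewidth):
    """Lorentzian mode profile centered on freq0 with FWHM `linewidth`."""
    return height / (1.0 + 4.0 * ((freq - freq0) / linewidth) ** 2)

# Shorter mode lifetimes give broader peaks: Gamma = 1 / (pi * tau)
mode_lifetime_s = 3.0 * 86400.0                  # illustrative ~3-day lifetime
linewidth_uhz = 1e6 / (np.pi * mode_lifetime_s)  # ~1.2 microhertz
```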
# + [markdown] colab_type="text" id="IrAifdt5MlKT"
# ## 3. How to Smooth and Detrend a Periodogram
# + [markdown] colab_type="text" id="gzP9O5umjpXD"
# ### 3.1. The box kernel filter
#
# To further explore the oscillation modes, we will demonstrate some of Lightkurve's smoothing tools. There are two types of smoothing functions we can call through the [`smooth()`](https://docs.lightkurve.org/api/lightkurve.periodogram.Periodogram.html#lightkurve.periodogram.Periodogram.smooth) function. Let's start with a basic moving average, also known as a 1D box kernel filter.
# + cellView="both" colab={"base_uri": "https://localhost:8080/", "height": 391} colab_type="code" executionInfo={"elapsed": 98004, "status": "ok", "timestamp": 1600488222876, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="WmqyRfrFh8xq" outputId="12a5058a-db22-4ea0-eadc-e0ce2b57452c"
smooth_pg = zoom_pg.smooth(method='boxkernel', filter_width=0.5)
ax = zoom_pg.plot(label='Original')
smooth_pg.plot(ax=ax, color='red', label='Smoothed');
# + [markdown] colab_type="text" id="RGjW6wRch84H"
# In the figure above, the smoothed periodogram is plotted over the top of the original periodogram. In this case we have used the [Astropy `Box1DKernel`](https://docs.astropy.org/en/stable/api/astropy.convolution.Box1DKernel.html) filter, with a filter width of $0.5\, \mu \rm{Hz}$. The filter takes the mean value of the power in a region $0.5\, \mu \rm{Hz}$ wide around each data point, and replaces that point with the mean value. It then moves on to the next data point. This creates a smoothed periodogram of the same length as the original. Because the power values are now correlated, these smoothed periodograms usually aren't used for computational analysis, but they can aid visual explorations of the location of the oscillation modes.
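# + [markdown]
# Under the hood, this kind of box-kernel smoothing is just a convolution with a normalized top-hat. A minimal standalone sketch with NumPy (not Lightkurve's actual implementation, which uses Astropy's convolution machinery) looks like this:

```python
import numpy as np

def box_smooth(power, kernel_width):
    """Replace each power value with the average over a `kernel_width`-bin
    window centered on it (edges are zero-padded by np.convolve)."""
    kernel = np.ones(kernel_width) / kernel_width
    return np.convolve(power, kernel, mode='same')

noisy = np.array([1.0, 5.0, 1.0, 1.0, 5.0, 1.0])
smooth = box_smooth(noisy, 3)  # sharp peaks are lowered and spread out
```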
# + [markdown] colab_type="text" id="1WdETtxQiqlY"
# ### 3.2. The log median filter
# + [markdown] colab_type="text" id="LLzIXovvh9UP"
# While the [`Box1DKernel`](https://docs.astropy.org/en/stable/api/astropy.convolution.Box1DKernel.html) filter can be used to help identify modes of oscillation in the presence of noise, it is mostly good for smoothing on small scales. For large scales, we can instead use Lightkurve's log median filter.
#
# As we saw above, solar-like oscillators exhibit a large power excess at low frequencies due to the turbulent convection visible near the stellar surface. When studying modes of oscillation, we typically aren't interested in the convective background, and prefer to remove it.
#
# The log median filter performs a similar operation to the [`Box1DKernel`](https://docs.astropy.org/en/stable/api/astropy.convolution.Box1DKernel.html) filter, but does so in log space. This means that at low frequencies the median is taken over only a small number of frequency bins, while at high frequencies many frequency bins are included in the median calculation. As a result, the log median filter smooths over the convective background but ignores the modes of oscillation at high frequencies.
#
# The result of applying a log median filter is demonstrated using the red line in the figure below:
# + cellView="both" colab={"base_uri": "https://localhost:8080/", "height": 393} colab_type="code" executionInfo={"elapsed": 101162, "status": "ok", "timestamp": 1600488226050, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="VpYyVGzzh9cr" outputId="24aa0787-9635-4ca5-ee97-2f7c7955264d"
smooth_pg = pg.smooth(method='logmedian', filter_width=0.1)
ax = pg.plot(label='Original')
smooth_pg.plot(ax=ax, linewidth=2, color='red', label='Smoothed', scale='log');
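# + [markdown]
# The idea behind the log median filter can be sketched in a few lines: take the median of the power in a window of fixed width in $\log_{10}$(frequency), so that the window spans few bins at low frequency and many bins at high frequency. This is an illustrative sketch only; Lightkurve's `logmedian` method differs in its details:

```python
import numpy as np

def log_median_smooth(freq, power, width_dex=0.1):
    """Median-filter `power` over windows of +/- `width_dex` in log10(freq).

    Because the window is constant in log-frequency, it covers few bins at
    low frequency (tracking the convective background) and many bins at
    high frequency (smoothing over narrow oscillation peaks).
    """
    logf = np.log10(freq)
    smoothed = np.empty_like(power)
    for i, lf in enumerate(logf):
        window = np.abs(logf - lf) < width_dex
        smoothed[i] = np.median(power[window])
    return smoothed
```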
# + [markdown] colab_type="text" id="HBupThTbpZ2P"
# ### 3.3. Flattening
# + [markdown] colab_type="text" id="W0Tczntlpbm1"
# When studying modes of oscillation, it is typically preferred to remove the convective background. In a detailed analysis this would involve fitting a model to the background. As can be seen in the figure above, however, Lightkurve's log median [`smooth()`](https://docs.lightkurve.org/api/lightkurve.periodogram.LombScarglePeriodogram.html#lightkurve.periodogram.LombScarglePeriodogram.smooth) method provides a useful first-order approximation of the background without the need for a model.
#
# To divide the power spectrum by the background, we can use Lightkurve's [`flatten()`](https://docs.lightkurve.org/api/lightkurve.periodogram.LombScarglePeriodogram.html#lightkurve.periodogram.LombScarglePeriodogram.flatten) method. This function uses the log median smoothing method to determine the background, and returns a new [`periodogram`](https://docs.lightkurve.org/api/lightkurve.periodogram.Periodogram.html) object in which the background has been divided out.
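# + [markdown]
# Conceptually, the division performed by `flatten()` is simple: estimate a smooth background and divide it out, so that frequency bins containing only noise scatter around a value of one. A toy illustration with a made-up power-law background (synthetic numbers, not real data):

```python
import numpy as np

rng = np.random.default_rng(42)
freq = np.logspace(0.5, 3.5, 500)               # synthetic frequency grid (microhertz)
background = 100.0 / freq                       # toy convective background
power = background * rng.exponential(size=500)  # chi-squared (2 d.o.f.) noise on top

snr = power / background                        # "signal-to-noise" spectrum, mean ~1
```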
# + cellView="both" colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 113778, "status": "ok", "timestamp": 1600488238677, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="v8MrrPbup7fw" outputId="ffed8cee-ab9b-4549-9315-d3e48c8e4465"
snrpg = pg.flatten()
snrpg
# + [markdown] colab_type="text" id="ASHfwxXAqCUc"
# The periodogram obtained by dividing by the noise in this way is commonly called a Signal-to-Noise periodogram (`SNRPeriodogram`), because the noise, in the form of the convective background, has been removed. This is a little bit of a misnomer, because there is still noise present in the periodogram.
#
# We plot the `SNRPeriodogram` below, and see that the modes of oscillation stick out from the noise much more clearly now that the convective background has been removed.
# + cellView="both" colab={"base_uri": "https://localhost:8080/", "height": 391} colab_type="code" executionInfo={"elapsed": 114510, "status": "ok", "timestamp": 1600488239419, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="elhNeMJuqSBM" outputId="4aa26013-7c56-4e98-9042-fb29eaff2dca"
snrpg.plot();
# + [markdown] colab_type="text" id="I0GKf9vMGdLI"
# ## 4. Closing Comments
#
# In this tutorial, we explored two common types of oscillating stars, and demonstrated how Lightkurve can be used to study their power spectra. In the accompanying tutorials, you can learn how to use these tools to extract more detailed information from such stars, including the radius and mass of a star!
#
# For further reading on $\delta$ Scuti stars, solar-like oscillators, and Fourier Transforms, we recommend you consult the following papers:
#
# - [Vanderplas (2017)](https://arxiv.org/pdf/1703.09824.pdf) – A detailed paper on Fourier Transforms and Lomb-Scargle Periodograms.
# - [Bedding et al. (2020)](https://arxiv.org/pdf/2005.06157.pdf) – A demonstration of mode identification in $\delta$ Scuti stars.
# - [Chaplin & Miglio (2013)](https://arxiv.org/pdf/1303.1957.pdf) – A review paper on asteroseismology of solar-like oscillators with *Kepler*.
# - [Aerts (2019)](https://arxiv.org/pdf/1912.12300.pdf) – A comprehensive review that covers asteroseismology of a wide range of oscillating stars, including solar-like oscillators and $\delta$ Scutis.
#
# + [markdown] colab_type="text" id="JMrH7qyC9G8x"
# ## About this Notebook
#
# **Authors**: <NAME> (<EMAIL>), <NAME>
#
# **Updated On**: 2020-09-29
# + [markdown] colab_type="text" id="AQknrSCV9PuY"
# ## Citing Lightkurve and Astropy
#
# If you use `lightkurve` or `astropy` for published research, please cite the authors. Click the buttons below to copy BibTeX entries to your clipboard.
# + colab={"base_uri": "https://localhost:8080/", "height": 144} colab_type="code" executionInfo={"elapsed": 114501, "status": "ok", "timestamp": 1600488239420, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj8sjdnDeqdejfe7OoouYPIclAQV0KSTpsU469Jyeo=s64", "userId": "05704237875861987058"}, "user_tz": 420} id="AmAGa51_9Vyo" outputId="7a7850c4-9e5c-4501-f2fe-ac5fac1ee0c0"
lk.show_citation_instructions()
# + [markdown] colab_type="text" id="OOnhHhZR9bXo"
# <img style="float: right;" src="https://raw.githubusercontent.com/spacetelescope/notebooks/master/assets/stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="Space Telescope Logo" width="200px"/>
#
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.5 with Spark
# language: python3
# name: python3
# ---
# + [markdown] id="UP2Th8B0LLey" colab_type="text"
# # SIT742: Modern Data Science
# **(Week 01: Programming Python)**
#
# ---
# - Materials in this module include resources collected from various open-source online repositories.
# - You are free to use, change and distribute this package.
# - If you found any issue/bug for this document, please submit an issue at [tulip-lab/sit742](https://github.com/tulip-lab/sit742/issues)
#
#
#
# Prepared by **SIT742 Teaching Team**
#
# ---
#
#
# # Session 1A - IPython notebook and basic data types
#
#
# In this session,
# you will learn how to run *Python* code under **IPython notebook**. You have two options for the environment:
#
# 1. Install the [Anaconda](https://www.anaconda.com/distribution/), and run it locally; **OR**
# 1. Use one cloud data science platform such as:
# - [Google Colab](https://colab.research.google.com): SIT742 lab session will use Google Colab.
# - [IBM Cloud](https://www.ibm.com/cloud)
# - [DataBricks](https://community.cloud.databricks.com)
#
#
#
# In IPython notebook, you will be able to execute and modify your *Python* code more efficiently.
#
# - **If you are using Google Colab for SIT742 lab session practicals, you can ignore this Part 1 of this Session 1A, and start with Part 2.**
#
#
#
# In addition, you will be given an introduction on *Python*'s basic data types,
# getting familiar with **string**, **number**, data conversion, data comparison and
# data input/output.
#
# Hopefully, by using **Python** and the powerful **IPython Notebook** environment,
# you will find writing programs both fun and easy.
#
# + [markdown] id="V0vSWcazLLe3" colab_type="text"
# ## Content
#
# ### Part 1 Create your own IPython notebook
#
# 1.1 [Start a notebook server](#cell_start)
#
# 1.2 [A tour of IPython notebook](#cell_tour)
#
# 1.3 [IPython notebook interface](#cell_interface)
#
# 1.4 [Open and close notebooks](#cell_close)
#
# ### Part 2 Basic data types
#
# 2.1 [String](#cell_string)
#
# 2.2 [Number](#cell_number)
#
# 2.3 [Data conversion and comparison](#cell_conversion)
#
# 2.4 [Input and output](#cell_input)
#
# + [markdown] id="DAeickflLLe7" colab_type="text"
# # Part 1. Create your own IPython notebook
#
# - **If you are using Google Colab for SIT742 lab session practicals, you can ignore this Part 1, and start with Part 2.**
#
#
# This notebook shows you how to start an IPython notebook session and guides you through creating your own notebook. It also describes the notebook interface and shows you how to navigate a notebook and manipulate its components.
# + [markdown] id="C6uxTijTLLe_" colab_type="text"
# <a id = "cell_start"></a>
# + [markdown] id="JO2tcWluLLfB" colab_type="text"
#
# ## 1. 1 Start a notebook server
#
# As described in Part 1, you start the IPython notebook server by keying in the command in a terminal window/command line window.
#
# However, before you do this, make sure you have created a folder **p01** under **H:/sit742**, downloaded the **SIT742P01A-Python.ipynb** notebook, and saved it under **H:/sit742/p01**.
#
# If you are using [Google Colab](https://colab.research.google.com), you can upload this notebook to Google Colab and run it from there. If you have any difficulty, please ask your tutor, or check the CloudDeakin discussions.
#
#
# After you complete this, you can switch your working directory to **H:/sit742**, and start the IPython notebook server with the following command:
# + [markdown] id="sxivgj4ULLfE" colab_type="raw"
# ipython notebook  # don't run this in the notebook; run it on the command line to start the server
# + [markdown] id="yWJLYqESLLfH" colab_type="text"
# You can see the message in the terminal windows as follows:
#
#
# <img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/start-workspace.jpg">
#
# This will open a new browser window (or a new tab in your browser window). In the browser, there is a **dashboard** page which shows you all the folders and files under the **sit742** folder.
#
#
# <img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/start-index.jpg">
#
# + [markdown] id="HTSWM3swLLfJ" colab_type="text"
# <a id = "cell_tour"></a>
# + [markdown] id="Slhkb9iqLLfM" colab_type="text"
#
# ## 1.2 A tour of iPython notebook
#
# ### Create a new ipython notebook
#
#
# To create a new notebook, go to the menu bar and select **File -> New Notebook -> Python 3**
#
# By default, the new notebook is named **Untitled**. To give your notebook a meaningful name, click on the notebook name and rename it. We would like to call our new notebook **hello.ipynb**. Therefore, key in the name **hello**.
#
# <img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/emptyNotebook.jpg">
#
#
#
# ### Run script in code cells
#
# After a new notebook is created, there is an empty box in the notebook, called a **cell**. If you double-click on the cell, you enter the **Edit** mode of the notebook. Now we can enter the following code in the cell:
# + [markdown] id="KjqBdqFeLLfO" colab_type="raw"
# text = "Hello World"
# print(text)
# + [markdown] id="NYDnH3V1LLfR" colab_type="text"
# After this, press **CTRL + ENTER** to execute the cell. The result will be shown after the cell.
#
#
#
# <img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/hello-world.jpg">
#
#
#
#
# After a cell is executed, the notebook switches to **Command** mode. In this mode, you can manipulate the notebook and its components. Alternatively, you can use the **ESC** key to switch from **Edit** mode to **Command** mode without executing code.
#
# To modify the code you entered in the cell, **double click** the cell again and modify its content. For example, try to change the first line of the previous cell into the following code:
# + [markdown] id="TJQbOFqJLLfU" colab_type="raw"
# text = "Good morning World!"
# + [markdown] id="YhgTgof0LLfX" colab_type="text"
# Afterwards, press **CTRL + ENTER**, and the new output is displayed.
#
# As you can see, you switch between two modes, **Command** and **Edit**, when editing a notebook. We will look into these two operation modes more closely in a later section. Now practise switching between the two modes until you are comfortable with them.
# + [markdown] id="N3q3nZrQLLfY" colab_type="text"
# ### Add new cells
#
# To add a new cell to a notebook, you have to ensure the notebook is in **Command** mode. If not, refer to the previous section to switch to **Command** mode.
#
#
# To add a cell below the current cell, go to the menu bar and click **Insert -> Insert Cell Below**. Alternatively, you can use a keyboard shortcut: press **b** (or **a** to create a cell above).
#
#
# <img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/new-cell.jpg">
#
# ### Add markdown cells
#
# By default, a code cell is created when adding a new cell. However, IPython notebook also supports **Markdown** cells for entering normal text. We use markdown cells to display text in a specific format and to provide structure for a notebook.
#
# Try to copy the text in the cell below and paste it into your new notebook. Then from the menu bar (**Cell -> Cell Type**), change the cell type from **Code** to **Markdown**.
#
# Please note in the following cell, there is a space between the leading **-, #, 0** and the text that follows.
# + [markdown] id="yE7_3iNPLLfa" colab_type="raw"
# ## Heading 2
# Normal text here!
#
# ### Heading 3
# ordered list here
#
# 0. Fruits
# 0. Banana
# 0. Grapes
# 0. Veggies
# 0. Tomato
# 0. Broccoli
#
# Unordered list here
# - Fruits
# - Banana
# - Grapes
# - Veggies
# - Tomato
# - Broccoli
# + [markdown] id="slcY8TkgLLfc" colab_type="text"
# Now execute the cell by pressing **CTRL + ENTER**. Your notebook should look like this:
#
# <img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/new-markdown.jpg">
#
# + [markdown] id="Wvfjy1AALLff" colab_type="text"
# Here is what the formatted Markdown cell looks like:
# + [markdown] id="dPvd8QuFLLfh" colab_type="text"
# ### Exercise:
# Click this cell, and practise writing markdown language here....
# + [markdown] id="FNzxurYjLLfj" colab_type="text"
# <a id = "cell_interface"></a>
# + [markdown] id="R0RQeVQlLLfn" colab_type="text"
#
# ### 1.3 IPython notebook interface
#
# Now that you have created your first notebook, let us have a closer look at the user interface of IPython notebook.
#
#
#
# ### Notebook component
# When you create a new notebook document, you will be presented with the notebook name, a menu bar, a toolbar and an empty code cell.
#
# We can see the following components in a notebook:
#
# - **Title bar** is at the top of the page and contains the name of the notebook. Clicking on the notebook name brings up a dialog which allows you to rename it. Please rename your notebook from “Untitled0” to “hello”. This changes the file name from **Untitled0.ipynb** to **hello.ipynb**.
#
# - **Menu bar** presents different options that can be used to manipulate the way the notebook functions.
#
# - **Toolbar** gives a quick way of performing the most-used operations within the notebook.
#
# - An empty computational cell is shown in a new notebook, where you can key in your code.
#
# The notebook has two modes of operation:
#
# - **Edit**: In this mode, a single cell comes into focus and you can enter text or execute code. You activate **Edit** mode by **clicking on a cell** or **selecting a cell and then pressing the Enter key**.
#
# - **Command**: In this mode, you can perform tasks related to the whole notebook structure. For example, you can move, copy, cut and paste cells. A series of keyboard shortcuts is also available to enable you to perform these tasks more efficiently. The easiest way to activate **Command** mode is to press the **Esc** key to exit editing mode.
#
#
#
#
#
# ### Get help and interrupting
#
#
# To get help on the use of different commands and shortcuts, you can go to the **Help** menu, which provides links to relevant documentation.
#
# It is also easy to get help on any object (including functions and methods). For example, to access help on the sum() function, enter the following line in a cell:
# + [markdown] id="TGoAeYhYLLfp" colab_type="raw"
# sum?
# + [markdown] id="tbgcq5bXLLfu" colab_type="text"
# The other important thing to know is how to interrupt a computation. This can be done through the menu **Kernel -> Interrupt** or **Kernel -> Restart**, depending on which works in the situation. We will have a chance to try this in a later session.
# + [markdown] id="C0Ra6R3PLLfw" colab_type="text"
#
# ### Notebook cell types
#
#
# There are basically three types of cells in an IPython notebook: code cells, markdown cells, and raw cells.
#
#
# **Code cells**: Code cells are used to enter code, which is executed by the Python interpreter. Although we will not use other languages in this unit, it is good to know that Jupyter Notebooks also support JavaScript, HTML, and Bash commands.
#
# **Markdown cells**: You have created a markdown cell in the previous section. Markdown cells are the easiest way to write and format text, and they also give structure to the notebook. The Markdown language is used in this type of cell. Follow this link https://daringfireball.net/projects/markdown/basics for the basics of the syntax.
#
# An example notebook of Markdown cells is available here: https://ipython.org/ipython-doc/3/notebook/notebook.html
# This markdown cheat sheet can also be a good reference for the main markdown you might need in our pracs: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
#
#
# **Raw cells**: Raw cells, unlike all other Jupyter Notebook cells, have no input-output distinction. This means that raw cells cannot be rendered into anything other than what they already are. They are mainly used to create examples.
#
#
# As you have seen, you can use the toolbar to choose between different cell types. In addition, the shortcuts **Y** and **M** can be used in Command mode to quickly change a cell to a Code cell or a Markdown cell, respectively.
#
#
# ### Operation modes of IPython notebook
#
# **Edit mode**
#
#
#
# The Edit mode is used to enter text in cells and to execute code. As you have seen, after typing some code in the notebook and pressing **CTRL+Enter**, the notebook executes the cell and displays the output. The other two shortcuts used to run code in a cell are **Shift + Enter** and **Alt + Enter**.
#
# These three ways to run the code in a cell are summarized as follows:
#
#
# - Pressing Shift + Enter: This runs the cell and selects the next cell (a new cell is created if you are at the end of the notebook). This is the most common way to execute a cell.
#
# - Pressing Ctrl + Enter: This runs the cell and keeps the same cell selected.
#
# - Pressing Alt + Enter: This runs the cell and inserts a new cell below it.
#
#
# **Command mode**
#
# In Command mode, you can edit the notebook as a whole, but not type into individual cells.
#
# You can use keyboard shortcuts in this mode to perform notebook and cell actions efficiently. For example, if you are in Command mode and press **c**, you will copy the current cell.
#
#
#
# There is a large number of shortcuts available in Command mode. However, you do not have to remember all of them, since most actions in Command mode are also available in the menu.
#
# Here is a list of the most useful shortcuts. They are arranged in the
# order we recommend you learn them so that you can edit cells efficiently.
#
#
# 1. Basic navigation:
#
# - Enter: switch to Edit mode
#
# - Esc: switch to Command mode
#
# - Shift+Enter: execute a cell
#
# - Up, down: Move to the cell above or below
#
# 2. Cell types:
# - y: switch to code cell
# - m: switch to markdown cell
#
# 3. Cell creation:
# - a: insert new cell above
# - b: insert new cell below
#
# 4. Cell deleting:
# - press D twice.
#
# Note that one of the most common (and frustrating) mistakes when using the
# notebook is to type something in the wrong mode. Remember to use **Esc**
# to switch to the Command mode and **Enter** to switch to the Edit mode.
# Also, remember that **clicking** on a cell automatically places it in the Edit
# mode, so it will be necessary to press **Esc** to go to the Command mode.
#
# ### Exercise
# Please go ahead and try these shortcuts. For example, try to insert a new cell, and modify and delete an existing cell. You can also switch cells between code type and markdown type, and practice different kinds of formatting in a markdown cell.
#
# For a complete list of shortcuts in **Command** mode, go to the menu bar **Help->Keyboard Shortcuts**. Feel free to explore the other shortcuts.
#
#
# + [markdown] id="M2sk2y0ILLfy" colab_type="text"
# <a id = "cell_close"></a>
# + [markdown] id="W0JXgsqELLf0" colab_type="text"
#
# ## 1.4 Opening and closing notebooks
#
# You can open multiple notebooks in browser windows. Simply go to the menu bar, choose **File->Open...**, and select a **.ipynb** file. The second notebook will be opened in a separate tab.
#
# Now make sure you still have your **hello.ipynb** open. Also, please download **ControlAdvData.ipynb** from CloudDeakin, and save it under **H:/sit742/prac01**. Now go to the menu bar, click on **File->Open ...**, locate the file **ControlAdvData.ipynb**, and open it.
#
# When you finish your work, you will need to close your notebooks and shut down the IPython notebook server. Instead of simply closing all the tabs in the browser, you need to shut down each notebook first. To do this, switch to the **Home** tab (**Dashboard page**) and the **Running** section (see below). Click on the **Shutdown** button to close each notebook. If the **Dashboard** page is not open, click on the **Jupyter** icon to reopen it.
#
#
# <img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/close-index.jpg">
#
# After each notebook is shut down, it is time to shut down the IPython notebook server. To do this, go to the terminal window, press **CTRL + C**, and then enter **Y**. After the notebook server is shut down, the terminal window is ready for you to enter any new command.
#
#
# <img src="https://raw.githubusercontent.com/tuliplab/mds/master/Jupyter/image/close-terminal.jpg">
#
#
# + [markdown] id="WKEEelJeLLf2" colab_type="text"
#
# # Part 2 Basic Data Types
#
# + [markdown] id="rtm_f4lwLLf4" colab_type="text"
#
#
# In this part, you will gain a better understanding of Python's basic data types. We will
# look at the **string** and **number** data types in this section. Also covered are:
#
# - Data conversion
# - Data comparison
# - Receive input from users and display results effectively
#
# You will be guided through completing a simple program which receives input from a user,
# processes the information, and displays results in a specific format.
# + [markdown] id="THnW6RPgLLf6" colab_type="text"
# ## 2.1 String
#
# A string is a *sequence of characters*. We use strings in almost every Python
# program. As we saw in the **”Hello, World!”** example, strings can be specified
# using single quotes **'**. The **print()** function can be used to display a string.
# + id="CG-9AQrmLLf7" colab_type="code" colab={}
print('Hello, World!')
# + [markdown] id="ZPKAMR6WLLgI" colab_type="text"
# We can also use a variable to store the string value, and use the variable in the
# **print()** function.
# + id="5ZsarfIpLLgO" colab_type="code" colab={}
# Assign a string to a variable
text = 'Hello, World!'
print(text)
# + [markdown] id="cm9IgbLyLLgU" colab_type="text"
# A *variable* is basically a name that represents (or refers to) some value. We use **=**
# to assign a value to a variable before we use it. Variable names are chosen by the programmer
# in a way that makes the program easy to understand. Variable names are *case sensitive*.
# They can consist of letters, digits, and underscores, but cannot begin with a digit.
# For example, **plan9** and **plan_9** are valid names, whereas **9plan** is not.
# + id="zilCIz-GLLga" colab_type="code" colab={}
text = 'Hello, World!'
# + id="cHelVR_8LLgg" colab_type="code" colab={}
# with print() function, content is displayed without quotation mark
print(text)
# + [markdown] id="zshg7IvZLLgp" colab_type="text"
# With variables, we can also display their values without the **print()** function. Note that
# you cannot display a variable without the **print()** function in a Python script (i.e. in a **.py** file). This method only works in interactive mode (i.e. in the notebook).
# + id="A8eOiupBLLgq" colab_type="code" colab={}
# without print() function, quotation mark is displayed together with content
text
# + [markdown] id="grJ6X6aMLLg8" colab_type="text"
# Back to the representation of strings: there will be issues if you need to include a quotation
# mark in the text.
# The example below uses typographic apostrophe marks (’), which look similar to the single quotation mark (').
# You will find that it shows "SyntaxError: invalid character in identifier". Try replacing the apostrophe marks with single quotation marks (escaping the inner one) and run it again.
# + id="QcxK0VgBLLg9" colab_type="code" colab={}
text = ’What’ s your name ’
# + [markdown] id="HHs3pOo7ummI" colab_type="text"
# <details><summary><u><b><font color="Blue">Click here for solution</u></b></summary>
# ```python
# text = 'What\'s your name?'
# print(text)
# ```
# + [markdown] id="W3vLIE_bLLhC" colab_type="text"
# Strings in double quotes **"** work exactly the same way as strings in single quotes.
# By mixing the two types, it is easy to include a quotation mark itself in the text.
# + id="kcq185ULLLhD" colab_type="code" colab={}
text = "What' s your name?"
print(text)
# + [markdown] id="ncHDozPILLhI" colab_type="text"
# Alternatively, you can use:
# + id="X-DGDDQaLLhI" colab_type="code" colab={}
text = '"What is the problem?", he asked.'
print(text)
# + [markdown] id="bwoz8VV2LLhS" colab_type="text"
# You can specify multi-line strings using triple quotes (**"""** or **'''**). In this way, single
# quotes and double quotes can be used freely in the text.
# Here is one example:
# + id="q1mgdutELLhU" colab_type="code" colab={}
multiline = '''This is a test for multiline. This is the first line.
This is the second line.
I asked, "What's your name?"'''
print(multiline)
# + [markdown] id="i5Kx0MKLLLhZ" colab_type="text"
# Notice the difference when the variable is displayed without **print()** function in this case.
# + id="5j7kVvnnLLhd" colab_type="code" colab={}
multiline = '''This is a test for multiline. This is the first line.
This is the second line.
I asked, "What's your name?"'''
multiline
# + [markdown] id="glnQnKz6LLhi" colab_type="text"
# Another way of including special characters, such as single quotes, is with the help of
# the escape character **\\**. For example, you can specify the single quote using **\\' ** as follows.
# + id="DC97cQs2LLhj" colab_type="code" colab={}
string = 'What\'s your name?'
print(string)
# + [markdown] id="kp8W43nnLLhv" colab_type="text"
# There are many other escape sequences (see Section 2.4.1 in the [Python 3 official documentation](https://docs.python.org/3.1/reference/lexical_analysis.html)), but the two most useful examples are mentioned here.
#
# First, use escape sequences to indicate the backslash itself e.g. **\\\\**
# + id="U5PUlblFLLh3" colab_type="code" colab={}
path = 'c:\\windows\\temp'
print(path)
# + [markdown] id="cm0OdobTLLh9" colab_type="text"
# Second, use escape sequences to specify a two-line string. Apart from using a triple-quoted
# string as shown previously, you can use **\n** to indicate the start of a new line.
# + id="6Hx7whpWLLh9" colab_type="code" colab={}
multiline = 'This is a test for multiline. This is the first line.\nThis is the second line.'
print(multiline)
# + [markdown] id="SxYdplr7LLiB" colab_type="text"
# To manipulate strings, the following two operators are most useful:
# * **+** is used to concatenate
# two strings or string variables;
# * ***** is used to repeat
# the same string several times.
# + id="atOH5kwTLLiC" colab_type="code" colab={}
print('Hello, ' + 'World' * 3)
# + [markdown] id="3e2FhseULLiE" colab_type="text"
# Below is another example of string concatenation based on variables that store strings.
# + id="b4rw4CjPLLiG" colab_type="code" colab={}
name = 'World'
greeting = 'Hello'
print(greeting + ', ' + name + '!')
# + [markdown] id="ggn2lPQMLLiJ" colab_type="text"
# Using variables, changing part of the string text is very easy.
# + id="i0jfd08dLLiK" colab_type="code" colab={}
name
# + id="gDymwzHmLLiN" colab_type="code" colab={}
greeting
# + id="-xdxHvZlLLiQ" colab_type="code" colab={}
# Changing part of the text is easy
greeting = 'Good morning'
print(greeting + ', ' + name + '!')
# + [markdown] id="aUcMPHTMLLiS" colab_type="text"
# ## 2.2 Number
# + [markdown] id="NTaPssJ_LLiT" colab_type="text"
# There are two types of numbers that are used most frequently: integers and floats. As we
# expect, the standard mathematical operations can be applied to these two types. Please
# try the following expressions. Note that **\*\*** is the exponent operator, which performs
# exponentiation (power) calculations.
# + id="4bJEQg_MLLiU" colab_type="code" colab={}
2 + 3
# + id="p6bV4vC9LLiX" colab_type="code" colab={}
3 * 5
# + id="-Rs_u_qnLLia" colab_type="code" colab={}
#3 to the power of 4
3 ** 4
# + [markdown] id="XTDUWNsPLLie" colab_type="text"
# Among the number operations, we need to look at division closely. In Python 3, true (float) division is performed using **/**.
# + id="l5ee7L9mLLif" colab_type="code" colab={}
15 / 5
# + id="ZtobUWNcLLij" colab_type="code" colab={}
14 / 5
# + [markdown] id="6ID6qNctLLil" colab_type="text"
# **//** is used to perform floor division. It truncates the fraction and rounds the result down to the next smallest whole number, toward the left on the number line.
# + id="hT18fhA8LLim" colab_type="code" colab={}
14 // 5
# + id="Jj02iHTpLLip" colab_type="code" colab={}
# Negatives move left on the number line. The result is -3, not -2
-14 // 5
# + [markdown] id="Z-YaXqCSLLir" colab_type="text"
# The modulus operator **%** can be used to obtain the remainder. Pay attention when negative numbers are involved.
# + id="WIZ3IGCILLis" colab_type="code" colab={}
14 % 5
# + id="4Ao-du3_LLix" colab_type="code" colab={}
# Hint: -14 // 5 equals -3
# (-3) * 5 + ? = -14
-14 % 5
# + [markdown] id="I0sASGehLLiz" colab_type="text"
# *Operator precedence* is a rule that affects how an expression is evaluated. As we learned in high school, multiplication is done before addition, e.g. in **2 + 3 * 4**. This means the multiplication operator has higher precedence than the addition operator.
#
# For your reference, a precedence table from the Python reference manual indicates the evaluation order in Python. For a complete precedence table, check the heading "Python Operators Precedence" in this [Python tutorial](http://www.tutorialspoint.com/python/python_basic_operators.htm)
#
#
# However, when things get confusing, it is far better to use parentheses **()** to explicitly
# specify the precedence. This also makes the program more readable.
#
# Here are some examples on operator precedence:
# + id="Q8-htgKSLLi0" colab_type="code" colab={}
2 + 3 * 4
# + id="MTstkwWoLLi2" colab_type="code" colab={}
(2 + 3) * 4
# + id="X5zqw9gQLLi6" colab_type="code" colab={}
2 + 3 ** 2
# + id="2J5jmyrALLi8" colab_type="code" colab={}
(2 + 3) ** 2
# + id="4c0ohRKmLLi-" colab_type="code" colab={}
-(4+3)+2
# + [markdown] id="6QklYZFXLLjB" colab_type="text"
# Similarly to strings, variables can be used to store numbers so that it is easy to manipulate them.
# + id="TsmyIIe0LLjC" colab_type="code" colab={}
x = 3
y = 2
x + 2
# + id="joZHkCbELLjJ" colab_type="code" colab={}
# use a name other than sum, so the built-in sum() function is not shadowed
total = x + y
total
# + id="2lzP8lYMLLjU" colab_type="code" colab={}
x * y
# + [markdown] id="1gsMSLAgLLjW" colab_type="text"
# One common expression is to run a math operation on a variable and then assign the result of the operation back to the variable. Therefore, there is a shortcut for such an expression.
# + id="D1hxyg1mLLjX" colab_type="code" colab={}
x = 2
x = x * 3
x
# + [markdown] id="M5b_TPipLLjZ" colab_type="text"
# This is equivalent to:
# + id="Qb0BdkZ9LLja" colab_type="code" colab={}
x = 2
# Note there is no space between '*' and '='
x *= 3
x
# + [markdown] id="Mk39mzf8LLjd" colab_type="text"
# ## 2.3 Data conversion and comparison
# + [markdown] id="DI609niBLLje" colab_type="text"
# So far, we have seen three types of data: integer, float, and string. For each data type, Python defines the operations possible on it and the way it is stored. In later pracs, we will introduce more data types, such as tuples, lists, and dictionaries.
#
# To obtain the data type of a variable or a value, we can use the built-in function **type()**,
# whereas functions such as **str()**, **int()**, and **float()** are used to convert data from one type to another. Check the following examples of the usage of these functions:
# + id="V62kYzxKLLjf" colab_type="code" colab={}
type('Hello, world!')
# + id="Boe-ivfiLLjh" colab_type="code" colab={}
input_Value = '45.6'
type(input_Value)
# + id="4wGHArWZLLji" colab_type="code" colab={}
weight = float(input_Value)
weight
type(weight)
# + [markdown] id="T7mJiYNiLLjk" colab_type="text"
# Note that the system will report an error message when the conversion function is not compatible with the data.
# + id="i9bnxwLRLLjk" colab_type="code" colab={}
input_Value = 'David'
weight = float(input_Value)
# + [markdown] id="s_NS6zzbLLjl" colab_type="text"
# Comparison between two values can help make decisions in a program. The result of a comparison is either **True** or **False**. These are the two values of the *Boolean* type.
# + id="hpz8oajsLLjp" colab_type="code" colab={}
5 > 10
# + id="vZnUQd69LLjx" colab_type="code" colab={}
type(5 > 10)
# + id="Zfw9EfrqLLj0" colab_type="code" colab={}
# Double equal sign is also used for comparison
10.0 == 10
# + [markdown] id="_rZvY3UlLLj1" colab_type="text"
# Check the following examples on comparison of two strings.
# + id="l4dFT7CYLLj1" colab_type="code" colab={}
'cat' < 'dog'
# + id="EzfdIp6XLLj3" colab_type="code" colab={}
# All uppercase letters are smaller than lowercase letters in terms of ASCII code. Each character of the two words is compared from beginning to end based on its ASCII code value.
'cat' < 'Dog'
# + id="s88N5oeRLLj4" colab_type="code" colab={}
'apple' < 'apricot'
# + [markdown] id="SRn9iUKrLLj8" colab_type="text"
# There are three logical operators, *not*, *and*, and *or*, which can be applied to Boolean values.
# + id="TJOHXNYOLLj8" colab_type="code" colab={}
# Are both condition #1 and condition #2 True?
3 < 4 and 7 < 8
# + id="e3678xUuLLj9" colab_type="code" colab={}
# Is either condition #1 or condition #2 True?
3 < 4 or 7 > 8
# + id="AgVnOhigLLj-" colab_type="code" colab={}
# Are both condition #1 and condition #2 False?
not ((3 > 4) or (7 > 8))
# + [markdown] id="B3ClkWDGLLkG" colab_type="text"
# ## 2.4 Input and output
# + [markdown] id="DWqLiEZ1LLkH" colab_type="text"
# All programming languages provide features to interact with the user. Python provides the *input()* function to get input: it waits for the user to type some input and press Return. We can add some information for the user by putting a message inside the function's brackets; it must be a string or a string variable. The text that was typed can be saved in a variable. Here is one example:
# + id="uotQ6pd8LLkH" colab_type="code" colab={}
nInput = input('Enter your number here:\n')
# + [markdown] id="jhMpU6dfLLkJ" colab_type="text"
# However, be aware that the input received from the user is treated as a string, even
# though the user entered a number. The following **print()** call therefore raises an error.
# + id="otpC8BEbLLkJ" colab_type="code" colab={}
print(nInput + 3)
# + [markdown] id="FaIP1_b4LLkL" colab_type="text"
# The input needs to be converted to an integer before the math operation can be performed, because string data cannot be added directly to integer data; they are two completely different types of data.
# + id="pe9Ey8M-LLkL" colab_type="code" colab={}
print(int(nInput) + 3)
# + [markdown] id="hWChILEcLLkP" colab_type="text"
# After the user's input is accepted, messages need to be displayed to the user accordingly. String concatenation is one way to display messages which incorporate variable values.
# + id="UBLRtd-kLLkP" colab_type="code" colab={}
name = 'David'
print('Hello, ' + name)
# + [markdown] id="OyEkpA3mLLkR" colab_type="text"
# Another way of achieving this is to use the **print()** function with *string formatting*. We need to use the *string formatting operator*, the percent (**%**) sign.
# + id="33qCABY9LLkS" colab_type="code" colab={}
name = 'David'
print('Hello, %s' % name)
# + [markdown] id="UvuFlcAcLLkW" colab_type="text"
# Here is another example with two variables:
# + id="-cFfoUp5LLkX" colab_type="code" colab={}
name = 'David'
age = 23
print('%s is %d years old.' % (name, age))
# + [markdown] id="MZQSERheLLkl" colab_type="text"
# Notice that the two variables, **name** and **age**, that supply the values are included at the end of the statement, enclosed in parentheses.
#
# Within the quotation marks, **%s** and **%d** are used to specify the formatting for strings and integers respectively.
# The following table shows a selected set of symbols which can be used along with %.
# + [markdown] id="1wKLT4k4LLkl" colab_type="text"
# <table width="304" border="1">
# <tr>
# <th width="112" scope="col">Format symbol</th>
# <th width="176" scope="col">Conversion</th>
# </tr>
# <tr>
# <td>%s</td>
# <td>String</td>
# </tr>
# <tr>
# <td>%d</td>
# <td>Signed decimal integer</td>
# </tr>
# <tr>
# <td>%f</td>
# <td>Floating point real number</td>
# </tr>
# </table>
# + [markdown] id="Q0_RdlHALLkm" colab_type="text"
# There are extra characters that can be used together with the above symbols:
# + [markdown] id="iaQEU8mCLLkm" colab_type="text"
# <table width="400" border="1">
# <tr>
# <th width="100" scope="col">Symbol</th>
# <th width="3000" scope="col">Functionality</th>
# </tr>
# <tr>
# <td>-</td>
# <td>Left justification</td>
# </tr>
# <tr>
# <td>+</td>
# <td>Display the sign</td>
# </tr>
# <tr>
# <td>m.n</td>
# <td>m is the minimum total width; n is the number of digits to display after the decimal point</td>
# </tr>
# </table>
# + [markdown] id="jfsOdi7pLLkn" colab_type="text"
# Here are more examples that use the above specifiers:
# + id="9CSC_2NOLLkn" colab_type="code" colab={}
# With %f, the format is right-justified by default.
# As a result, white spaces are added to the left of the number.
# 10.4 means a minimum width of 10 with 4 decimal places
print('Output a float number: %10.4f' % (3.5))
# + id="FeXQfC1OLLkq" colab_type="code" colab={}
# The plus sign after % means the sign of the number is displayed
# The zero after the plus sign means leading zeros are used to fill a width of 5
print('Output an integer: %+05d' % (23))
# + [markdown] id="tlMQpvLnLLkt" colab_type="text"
# ### 2.5 Notes on *Python 2*
#
# You need to pay attention if you test the examples in this prac under *Python 2*.
#
# 1. In *Python 3*, **/** is float division, and **//** is integer division; while in Python 2,
# both **/** and **//**
# perform *integer division* on integers.
# However, if you stick to using **float(3)/2** for *float division*,
# and **3//2** for *integer division*,
# you will have no problem in both versions.
#
# 2. Instead of the function **input()**,
# **raw_input()** is used in Python 2.
# Both functions have the same functionality,
# i.e. they take what the user typed and pass it back as a string.
#
# 3. Although both versions support the **print()** function with the same format,
# Python 2 also allows the print statement (e.g. **print "Hello, World!"**),
# which is not valid in Python 3.
# However, if you stick to our examples and use the **print()** function with parentheses,
# your programs should work fine in both versions.
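# These version differences are easy to verify. The cell below, run under
# Python 3, demonstrates the division behaviour described in point 1:

```python
# Division behaviour under Python 3
true_div = 3 / 2         # true (float) division in Python 3
floor_div = 3 // 2       # integer (floor) division in both versions
portable = float(3) / 2  # float division that behaves the same in Python 2
print(true_div, floor_div, portable)
```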
| Jupyter/SIT742P01A-Python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cv2 as cv
import numpy as np
import random
import xml.etree.ElementTree as ET
NUMBER = 7  # number of tiles per generated image
size_x = 300
size_y = 300
DataSize = 5000  # number of images to generate
percent = 0.8  # fraction of the data used for training
robust = 1  # whether to apply data augmentation
mode = 10  # augmentation probability control: a value in [0, mode] is drawn, and 0-4 trigger augmentation
# -
dict={"0":"1m","1":"2m","2":"3m","3":"4m","4":"5m","5":"6m","6":"7m","7":"8m","8":"9m","9":"1p","10":"2p","11":"3p","12":"4p","13":"5p","14":"6p","15":"7p","16":"8p","17":"9p","18":"1s","19":"2s","20":"3s","21":"4s","22":"5s","23":"6s","24":"7s","25":"8s","26":"9s","27":"east","28":"south","29":"west","30":"north","31":"white","32":"hatsu","33":"tyun"}
# generate the images and XML annotation files
def Make_PicXML(sample_filename , datasize, start=0):
No = []
place_x = []
place_y = 0
    # load the tile images
pais=[]
for i in range(34):
filename = sample_filename + '/' +str(i)+'.jpg'
pais.append(cv.imread(filename))
sample_height = pais[0].shape[0]
sample_width = pais[0].shape[1]
for i in range(datasize):
        # size magnification (90-100% of the largest size that fits)
magni = min(size_x/NUMBER/sample_width , size_y/sample_height)*random.randint(90,100)/100
x = int(magni * sample_width)
y = int(magni * sample_height)
        # choose the tile types
No=[]
for num in range(NUMBER):
No.append(random.randint(0,33))
        # choose the positions
place_x=[]
place_x.append(random.randint( 0,int(size_x - x*NUMBER)))
for num in range(NUMBER):
place_x.append(place_x[0]+ x*(num+1))
place_y=random.randint(1,int(size_y - y))
        # generate the image
img=np.zeros((size_y,size_x,3))
img=cv.rectangle(img,(0,0),(size_x,size_y),(0,128,0),cv.FILLED)
for num in range(NUMBER):
pai = cv.resize(pais[No[num]],(x,y))
            # randomly flip the tile
if random.randint(0,1)== 0:
pai =cv.flip(pai ,0)
img[place_y:place_y+y,place_x[num]:place_x[num]+x]=pai
        # apply augmentation
if robust==1:
img = Robust(img)
        # save the image
if i < datasize*percent:
filename = 'images/train/' + str(start + i) + '.jpg'
else:
filename = 'images/test/' + str(start + i ) + '.jpg'
cv.imwrite(filename,img)
        # generate the XML annotation file
Annotation = ET.Element('annotation')
Filename = ET.SubElement(Annotation,'filename')
        Filename.text = str(start + i) + '.jpg'  # match the saved image filename
size = ET.SubElement(Annotation,'size')
width = ET.SubElement(size,'width')
width.text = str(size_x)
height = ET.SubElement(size,'height')
height.text = str(size_y)
for num in range(NUMBER):
Object = ET.SubElement(Annotation, 'object')
name = ET.SubElement(Object, 'name')
name.text =dict[str(No[num])]
bndbox = ET.SubElement(Object, 'bndbox')
xmin = ET.SubElement(bndbox, 'xmin')
xmin.text = str(place_x[num])
ymin = ET.SubElement(bndbox, 'ymin')
ymin.text = str(place_y)
xmax = ET.SubElement(bndbox, 'xmax')
xmax.text = str(place_x[num] + x)
ymax = ET.SubElement(bndbox, 'ymax')
ymax.text = str(place_y + y)
tree = ET.ElementTree(element=Annotation)
        # save the annotation file
if i < datasize*percent:
filename = 'images/train/' + str( start + i) + '.xml'
else:
filename = 'images/test/' + str( start + i) + '.xml'
tree.write(filename, encoding='utf-8', xml_declaration=True)
if i % 500 == 0 :
print("now complete No."+str(i))
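# The annotation files written by Make_PicXML can be read back with
# xml.etree.ElementTree. Below is a minimal, self-contained sketch; the tag
# names mirror those generated above, and the sample values are hypothetical.

```python
import xml.etree.ElementTree as ET

# A small annotation in the same shape that Make_PicXML writes
xml_text = '''<annotation>
  <filename>0.jpg</filename>
  <size><width>300</width><height>300</height></size>
  <object>
    <name>1m</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>48</xmax><ymax>74</ymax></bndbox>
  </object>
</annotation>'''

root = ET.fromstring(xml_text)
boxes = []
for obj in root.iter('object'):
    name = obj.find('name').text
    bb = obj.find('bndbox')
    box = tuple(int(bb.find(tag).text) for tag in ('xmin', 'ymin', 'xmax', 'ymax'))
    boxes.append((name, box))
print(boxes)  # [('1m', (10, 20, 48, 74))]
```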
# +
# image augmentation helpers
saturation_var=0.5
brightness_var=0.5
contrast_var=0.5
lighting_std=0.5
def grayscale(rgb):
return rgb.dot([0.299, 0.587, 0.114])
def saturation(rgb):
gs = grayscale(rgb)
alpha = 2 * np.random.random() * saturation_var
alpha += 1 - saturation_var
rgb = rgb * alpha + (1 - alpha) * gs[:, :, None]
return np.clip(rgb, 0, 255)
def brightness(rgb):
alpha = 2 * np.random.random() * brightness_var
alpha += 1 - saturation_var
rgb = rgb * alpha
return np.clip(rgb, 0, 255)
def contrast(rgb):
gs = grayscale(rgb).mean() * np.ones_like(rgb)
alpha = 2 * np.random.random() * contrast_var
alpha += 1 - contrast_var
rgb = rgb * alpha + (1 - alpha) * gs
return np.clip(rgb, 0, 255)
def lighting(img):
cov = np.cov(img.reshape(-1, 3) / 255.0, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
noise = np.random.randn(3) * lighting_std
noise = eigvec.dot(eigval * noise) * 255
img = np.add(img, noise)
return np.clip(img, 0, 255)
def Robust(img):
a =random.randint(0 , mode)
if a == 0:
img = grayscale(img)
elif a == 1:
img = saturation(img)
elif a == 2:
img = brightness(img)
elif a == 3:
img = contrast(img)
elif a == 4:
img = lighting(img)
return img
# -
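# The augmentation helpers above are plain NumPy. As a self-contained sanity
# check, the grayscale operation (re-defined locally with the same luminance
# weights) collapses the channel axis, which is why Robust may return a 2-D
# array when the grayscale branch is taken:

```python
import numpy as np

# Same luminance weights as the grayscale() helper above
def grayscale(rgb):
    return rgb.dot([0.299, 0.587, 0.114])

img = np.full((4, 4, 3), 200.0)  # uniform grey dummy image
gs = grayscale(img)
print(gs.shape)  # (4, 4): the colour axis is gone
```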
Make_PicXML(sample_filename = 'sample/mj' ,datasize =DataSize ,start =0)
Make_PicXML(sample_filename = 'sample/home' ,datasize =DataSize ,start =DataSize)
| research/object_detection/make_annotation3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.2 64-bit
# language: python
# name: python38264bit68ee5070ed6a4df7bba1957ddac2fae1
# ---
# ## Project 3
# The final set of exercises from us as instructors of Data Science Indonesia, East Java Region
# ### <i>Dataset</i>
# +
from pandas import read_csv
data = read_csv("data.csv")
data
# -
# The data above is used to answer questions one and two. Descriptions of the <i>dataset</i> can be found in the file named <b>Info.names</b>. The <i>dataset</i> above has 1484 rows and 9 columns. Since questions one and two do not emphasize <i>Exploratory Data Analysis</i>, you do not need to show visualizations of the data above to answer them.
# ## 1. <i>Unsupervised Learning</i>
# Based on the available data, apply a clustering method to group each row into a particular cluster. The final result should be one additional column that indicates which category/cluster each row belongs to. To answer this question, please review the material from the tenth meeting, which covers <i>predictive analytics</i> (<i>unsupervised learning/clustering</i>).
# ## 2. <i>Supervised Learning</i>
# After obtaining a category for each row of the data above, train a classification algorithm on the <i>dataset</i> you have just processed so that it can predict new inputs. To answer this question, please review the material from the ninth meeting, which covers <i>predictive analytics</i> (<i>supervised learning</i>).
# ## 3. <i>Data Storytelling</i>
# Find a <i>dataset</i> that you want to use to "tell a story" about the insights you are trying to uncover. Provide at least 3 interesting visualizations.
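# As a starting point for the storytelling question, here is a minimal sketch
# of one such visualization using matplotlib (the data and column names below
# are hypothetical placeholders for whatever dataset you choose):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen, no display needed
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical data standing in for the dataset you pick
df = pd.DataFrame({'year': [2018, 2019, 2020, 2021],
                   'sales': [10, 14, 9, 17]})

fig, ax = plt.subplots()
ax.bar(df['year'], df['sales'])
ax.set_xlabel('year')
ax.set_ylabel('sales')
ax.set_title('Example visualization')
fig.savefig('story.png')  # one of the (at least) three required charts
```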
# ## A simple example of working through questions 1 & 2
data = read_csv("contoh.csv", index_col="Id")
data
# +
from sklearn.cluster import KMeans
kmean = KMeans(n_clusters = 3)
# -
data['kategori'] = kmean.fit_predict(data)
data
from seaborn import pairplot
pairplot(data, hue = 'kategori')
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
# +
knn = KNeighborsClassifier()
x_train, x_test, y_train, y_test = train_test_split(data[data.columns[:-1]],
data['kategori'],
test_size = 0.2)
# -
knn.fit(x_train, y_train)
# +
hasil_prediksi = knn.predict(x_test)
print(classification_report(hasil_prediksi, y_test))
# -
print(hasil_prediksi)
print(y_test.tolist())
knn.predict([[1, 1, 1, 1]])
| Pertemuan 12 - Projek 3/Soal.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="C7F31RIiVkxz" colab_type="code" outputId="f562386d-46ca-4e49-c555-0bee98f0ca96" colab={"base_uri": "https://localhost:8080/", "height": 125}
from google.colab import drive
drive.mount('/content/drive')
# + id="MxMQ0DMAc2vL" colab_type="code" colab={}
# ! unzip '/content/drive/My Drive/tiny-imagenet-200.zip'
# + id="1hX-oJ9IPKCd" colab_type="code" outputId="39041842-d8bd-4513-a645-5e1500696e58" colab={"base_uri": "https://localhost:8080/", "height": 34}
# %cd /content/drive/My\ Drive/Assignment\ 4/
# + id="TR3xDzmgirIG" colab_type="code" outputId="5eba12ab-c89c-4c55-f252-25bb8401315b" colab={"base_uri": "https://localhost:8080/", "height": 34}
import os
import cv2
import imutils
import json
import numpy as np
from dataset_utils import HDF5DatasetWriter
from sklearn.preprocessing import LabelEncoder
from imutils import paths
TRAIN = "/content/tiny-imagenet-200/train/"
VAL = "/content/tiny-imagenet-200/val/images"
VAL_ANNOT = "/content/tiny-imagenet-200/val/val_annotations.txt"
WORDNET = "/content/tiny-imagenet-200/wnids.txt"
WORD_LABELS = "/content/tiny-imagenet-200/words.txt"
# + id="RxfISHg-nWy2" colab_type="code" outputId="3e52e153-a590-4069-b1db-cf4691fc94e5" colab={"base_uri": "https://localhost:8080/", "height": 52}
from keras.preprocessing.image import ImageDataGenerator
import pandas as pd
val_data = pd.read_csv(VAL_ANNOT , sep='\t', names=['File', 'Class', 'X', 'Y', 'H', 'W'])
val_data.drop(['X','Y','H', 'W'], axis=1, inplace=True)
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=18,
zoom_range=0.15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.15,
horizontal_flip=True,
fill_mode="nearest"
)
train_generator = train_datagen.flow_from_directory(
TRAIN,
target_size=(64, 64),
batch_size=64,
class_mode='categorical')
val_datagen = ImageDataGenerator(rescale=1./255)
val_generator = val_datagen.flow_from_dataframe(
val_data, directory=VAL,
x_col='File',
y_col='Class',
target_size=(64, 64),
color_mode='rgb',
class_mode='categorical',
batch_size=64,
shuffle=False,
seed=42
)
# + id="3z_dqLlSd_ti" colab_type="code" colab={}
import preprocess
from preprocess import MeanPreprocessor, ImageToArrayPreprocessor
from dataset_utils import *
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import SGD, Adam
from keras.models import load_model
import keras.backend as K
import json
import sys
class SimplePreprocessor:
def __init__(self, width, height, inter=cv2.INTER_AREA):
# store the target image width, height, and interpolation
# method used when resizing
self.width = width
self.height = height
self.inter = inter
def preprocess(self, image):
# resize the image to a fixed size, ignoring the aspect
# ratio
return cv2.resize(image, (self.width, self.height),
interpolation=self.inter)
# + id="T5xxL3OkkvD4" colab_type="code" colab={}
# Data Augmentation
datagen = ImageDataGenerator(
rotation_range=18,
zoom_range=0.15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.15,
horizontal_flip=True,
fill_mode="nearest")
means = json.loads(open(MEAN_NORM).read())
# Preprocessing
sp = SimplePreprocessor(64, 64)
mp = preprocess.MeanPreprocessor(means["R"], means["G"], means["B"])
iap = ImageToArrayPreprocessor()
# train_gen = HDF5DatasetGenerator(TRAIN_HF5, 64, aug=datagen,
# preprocessors=[sp, mp, iap], classes=200)
val_gen = HDF5DatasetGenerator(VAL_HF5, 64, classes=200)
# + id="Zgq6Gz9inSJk" colab_type="code" colab={}
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Conv2D, SeparableConv2D
from keras.layers.convolutional import AveragePooling2D, MaxPooling2D, ZeroPadding2D
from keras.layers.core import Activation, Dense
from CyclicLearningRate.clr_callback import *
from keras.layers import Flatten, Input, add
from keras.optimizers import Adam
from keras.callbacks import *
from keras.models import Model
from keras.regularizers import l2
from keras import backend as K
class ResNet:
@staticmethod
def residual_model(data, kernels, strides, chanDim, reduced=False, reg=0.0001, epsilon=2e-5, mom=0.9):
shortcut = data
bn1 = BatchNormalization(axis=chanDim, epsilon=epsilon, momentum=mom)(data)
act1 = Activation("relu")(bn1)
conv1 = SeparableConv2D(kernels, (3,3), padding='same', strides=strides, use_bias=False, depthwise_regularizer=l2(reg))(act1)
bn2 = BatchNormalization(axis=chanDim, epsilon=epsilon, momentum=mom)(conv1)
act2 = Activation("relu")(bn2)
conv2 = SeparableConv2D(kernels, (3,3), padding='same', strides=strides, use_bias=False, depthwise_regularizer=l2(reg))(act2)
input_shape = K.int_shape(data)
residual_shape = K.int_shape(conv2)
stride_width = int(round(input_shape[1] / residual_shape[1]))
stride_height = int(round(input_shape[2] / residual_shape[2]))
equal_channels = input_shape[3] == residual_shape[3]
shortcut = act1
# 1 X 1 conv if shape is different. Else identity.
if stride_width > 1 or stride_height > 1 or not equal_channels:
shortcut = Conv2D(filters=residual_shape[3],
kernel_size=(1, 1),
strides=(stride_width, stride_height),
padding="valid",
kernel_initializer="he_normal",
kernel_regularizer=l2(0.0001))(act1)
x = add([shortcut, conv2])
return x
@staticmethod
def build(width, height, depth, classes, stages, filters, reg=0.0001, epsilon=2e-5, mom=0.9):
inputShape = (height, width, depth)
inputs = Input(shape=inputShape)
chanDim= -1
# 3 x (3x3) valid convs => 64x64x3 -> 58x58x64
x = BatchNormalization(axis=chanDim, epsilon=epsilon, momentum=mom)(inputs)
# x = Activation("relu")(x)
x = SeparableConv2D(filters[0], (3, 3), use_bias=False, depthwise_regularizer=l2(reg), input_shape=inputShape)(x)
x = BatchNormalization(axis=chanDim, epsilon=epsilon, momentum=mom)(x)
x = Activation("relu")(x)
x = SeparableConv2D(filters[0], (3, 3), use_bias=False, depthwise_regularizer=l2(reg), input_shape=inputShape)(x)
x = BatchNormalization(axis=chanDim, epsilon=epsilon, momentum=mom)(x)
x = Activation("relu")(x)
x = SeparableConv2D(filters[0], (3, 3), use_bias=False, depthwise_regularizer=l2(reg), input_shape=inputShape)(x)
# MaxPool Layer + ZeroPadding => 58x58x64 -> 31x31x64
x = BatchNormalization(axis=chanDim, epsilon=epsilon, momentum=mom)(x)
x = Activation("relu")(x)
x = ZeroPadding2D((2, 2))(x)
x = MaxPooling2D((2, 2))(x)
for i in range(0, len(stages)):
if i == 0:
strides = (1,1)
else:
strides = (2,2)
x = ResNet.residual_model(x, filters[i+1], strides, chanDim, reduced=True)
for j in range(0, stages[i] - 1):
x = ResNet.residual_model(x, filters[i+1], (1,1), chanDim, epsilon=epsilon, mom=mom)
x = BatchNormalization(axis=chanDim, epsilon=epsilon, momentum=mom)(x)
x = Activation("relu")(x)
x = Conv2D(200, (1,1), use_bias=False, kernel_regularizer=l2(reg))(x)
# x = AveragePooling2D((3, 3))(x)
x = Flatten()(x)
x = Activation("softmax")(x)
model = Model(inputs, x, name="resnet")
return model
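The residual block above decides between an identity shortcut and a 1x1 convolutional projection by comparing the input and residual shapes. A minimal sketch of that decision rule in plain Python (shapes as hypothetical `(H, W, C)` tuples, no Keras involved — an illustration only):

```python
# Mirror of the shape check in ResNet.residual_model: a 1x1 projection
# shortcut is needed whenever the spatial stride is > 1 or the channel
# counts differ; otherwise the identity shortcut suffices.
def needs_projection(input_shape, residual_shape):
    stride_w = int(round(input_shape[0] / residual_shape[0]))
    stride_h = int(round(input_shape[1] / residual_shape[1]))
    equal_channels = input_shape[2] == residual_shape[2]
    return stride_w > 1 or stride_h > 1 or not equal_channels

print(needs_projection((31, 31, 64), (31, 31, 64)))   # identity: False
print(needs_projection((31, 31, 64), (16, 16, 128)))  # strided + widened: True
```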
# + id="mld7rEyHndRm" colab_type="code" colab={}
model = ResNet.build(64, 64, 3, 200, (3, 4, 6, 3), (64, 64, 128, 256, 512), reg=0.0005)
# + id="PtnqmZVhqGWl" colab_type="code" colab={}
from keras.callbacks import Callback
class EpochCheckpoint(Callback):
def __init__(self, outputPath, every=5, startAt=0):
# call the parent constructor
super(Callback, self).__init__()
self.outputPath = outputPath
self.every = every
self.intEpoch = startAt
def on_epoch_end(self, epoch, logs={}):
# check to see if the model should be serialized to disk
if (self.intEpoch + 1) % self.every == 0:
p = os.path.sep.join([self.outputPath,
"resnet-{}.hdf5".format(self.intEpoch + 1)])
self.model.save(p, overwrite=True)
self.intEpoch += 1
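The `every`-epoch serialization check in `EpochCheckpoint` boils down to modular arithmetic. A minimal sketch (plain Python, no Keras) of which epochs trigger a save, assuming the defaults `every=5` and `startAt=0`:

```python
# Reproduce the checkpoint trigger from EpochCheckpoint: a save fires
# whenever (epoch counter + 1) is a multiple of `every`.
def checkpoint_epochs(total_epochs, every=5, start_at=0):
    saved = []
    int_epoch = start_at
    for _ in range(total_epochs):
        if (int_epoch + 1) % every == 0:
            saved.append(int_epoch + 1)  # 1-indexed epoch that got saved
        int_epoch += 1
    return saved

# Over the 23 epochs the run below survived, saves land on 5, 10, 15, 20.
print(checkpoint_epochs(23, every=5))  # [5, 10, 15, 20]
```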
# + id="aSuy9JhqtRjc" colab_type="code" colab={}
import tensorflow
flag = 1
# Cyclic Learning Rate
clr_triangular = CyclicLR(mode='triangular')
checkpoint_path = "../checkpoints/check.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create checkpoint callback
cp_callback = tensorflow.keras.callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=False,
verbose=1, period=5)
callbacks = [
EpochCheckpoint('../checkpoints/', every=5),
cp_callback,
clr_triangular
]
# + id="_POn-Gf-Gapj" colab_type="code" outputId="b182f9e9-6df6-47eb-e9ff-d48c05c03410" colab={"base_uri": "https://localhost:8080/", "height": 4976}
from keras.models import load_model
if flag == 0:
# opt = SGD(lr=1e-1, momentum=0.9)
model = ResNet.build(64, 64, 3, 200, (3, 4, 6, 3), (64, 64, 128, 256, 512), reg=0.0005)
model.compile(optimizer=Adam(0.1), loss="categorical_crossentropy", metrics=["accuracy"])
else:
# Load the model
model = load_model('../checkpoints/resnet.hdf5')
# model.compile(optimizer=Adam(0.1), loss="categorical_crossentropy", metrics=["accuracy"])
# Update the learning rate
print(f'Old Learning Rate: {K.get_value(model.optimizer.lr)}')
K.set_value(model.optimizer.lr, 0.01)
print(f'New Learning Rate: {K.get_value(model.optimizer.lr)}')
model.summary()
# + id="LEVV7NQBHIxS" colab_type="code" outputId="29fe28d8-b04d-47cb-f819-9751ac11fad9" colab={"base_uri": "https://localhost:8080/", "height": 1103}
model.fit_generator(
train_generator,
steps_per_epoch=100000 // 64,
validation_data=val_generator,
validation_steps=10000 // 64,
epochs=50,
max_queue_size=64 * 2,
callbacks=callbacks, verbose=1
)
# close the databases (train_gen is commented out above, so only val_gen is open)
# train_gen.close()
val_gen.close()
# + [markdown] id="zIQjXH1w4BUi" colab_type="text"
# ### Colab crashed after 23 epochs
# ### Total Epochs = 23
# + id="iqJ641PXTGQa" colab_type="code" colab={}
flag = 1
from keras.models import load_model
if flag == 0:
# opt = SGD(lr=1e-1, momentum=0.9)
model = ResNet.build(64, 64, 3, 200, (3, 4, 6, 3), (64, 64, 128, 256, 512), reg=0.0005)
model.compile(optimizer=Adam(0.1), loss="categorical_crossentropy", metrics=["accuracy"])
else:
# Load the model
model = load_model('../checkpoints/resnet.hdf5')
# model.compile(optimizer=Adam(0.1), loss="categorical_crossentropy", metrics=["accuracy"])
# # Update the learning rate
# print(f'Old Learning Rate: {K.get_value(model.optimizer.lr)}')
# K.set_value(model.optimizer.lr, 0.01)
# print(f'New Learning Rate: {K.get_value(model.optimizer.lr)}')
model.summary()
# + id="NLADjXga-uxR" colab_type="code" outputId="a28d35b8-9fb7-4da5-d302-b33c3696a339" colab={"base_uri": "https://localhost:8080/", "height": 1414}
model.fit_generator(
train_generator,
steps_per_epoch=100000 // 64,
validation_data=val_generator,
validation_steps=10000 // 64,
epochs=27,
max_queue_size=64 * 2,
callbacks=callbacks, verbose=1
)
model.save(checkpoint_path)
# + [markdown] id="TP8jIRf64Puk" colab_type="text"
# ### Ran for 27 epochs with Learning Rate Decay
# ### Total epochs = 50
# + id="dAYExFsIfMG4" colab_type="code" outputId="6ccbfb4f-ce76-4e6d-ca26-b40fd230ac32" colab={"base_uri": "https://localhost:8080/", "height": 1118}
flag = 1
from keras.models import load_model
if flag == 0:
# opt = SGD(lr=1e-1, momentum=0.9)
model = ResNet.build(64, 64, 3, 200, (3, 4, 6, 3), (64, 64, 128, 256, 512), reg=0.0005)
model.compile(optimizer=Adam(0.1), loss="categorical_crossentropy", metrics=["accuracy"])
else:
# Load the model
model = load_model('../checkpoints/resnet.hdf5')
# model.compile(optimizer=Adam(0.1), loss="categorical_crossentropy", metrics=["accuracy"])
# Update the learning rate
print(f'Old Learning Rate: {K.get_value(model.optimizer.lr)}')
K.set_value(model.optimizer.lr, 0.01)
print(f'New Learning Rate: {K.get_value(model.optimizer.lr)}')
model.fit_generator(
train_generator,
steps_per_epoch=100000 // 64,
validation_data=val_generator,
validation_steps=10000 // 64,
epochs=25,
max_queue_size=64 * 2,
callbacks=callbacks, verbose=1
)
model.save(checkpoint_path)
# + [markdown] id="wbW8l3884cen" colab_type="text"
# ### Ran for another 25 epochs with fluctuating accuracies.
# ### This is a sign of slight overfitting, so the model was regularized more and the learning rate was decreased
# ### Total epochs = 75
# + id="x8ygp0QrbqNS" colab_type="code" outputId="aff9232c-10e6-419e-cc03-a82b4d371e58" colab={"base_uri": "https://localhost:8080/", "height": 1972}
flag = 1
from keras.models import load_model
if flag == 0:
# opt = SGD(lr=0.005, momentum=0.9)
model = ResNet.build(64, 64, 3, 200, (3, 4, 6, 3), (64, 64, 128, 256, 512), reg=0.0005)
model.compile(optimizer=Adam(0.1), loss="categorical_crossentropy", metrics=["accuracy"])
else:
# Load the model
model = load_model(checkpoint_path)
# model.compile(optimizer=Adam(0.1), loss="categorical_crossentropy", metrics=["accuracy"])
# Update the learning rate
print(f'Old Learning Rate: {K.get_value(model.optimizer.lr)}')
K.set_value(model.optimizer.lr, 0.005)
print(f'New Learning Rate: {K.get_value(model.optimizer.lr)}')
model.fit_generator(
train_generator,
steps_per_epoch=100000 // 64,
validation_data=val_generator,
validation_steps=10000 // 64,
epochs=25,
max_queue_size=64 * 2,
callbacks=callbacks, verbose=1
)
model.save(checkpoint_path)
# + id="O9O2iFalSDc_" colab_type="code" colab={}
# + id="9WncczeQQpYE" colab_type="code" outputId="0229a559-c086-40c4-e73b-1d0f1a56abcc" colab={"base_uri": "https://localhost:8080/", "height": 283}
flag = 1
from keras.models import load_model
if flag == 0:
# opt = SGD(lr=0.005, momentum=0.9)
model = ResNet.build(64, 64, 3, 200, (3, 4, 6, 3), (64, 64, 128, 256, 512), reg=0.0005)
model.compile(optimizer=Adam(0.1), loss="categorical_crossentropy", metrics=["accuracy"])
else:
# Load the model
model = load_model(checkpoint_path)
# model.compile(optimizer=Adam(0.1), loss="categorical_crossentropy", metrics=["accuracy"])
# Update the learning rate
print(f'Old Learning Rate: {K.get_value(model.optimizer.lr)}')
K.set_value(model.optimizer.lr, 0.005)
print(f'New Learning Rate: {K.get_value(model.optimizer.lr)}')
model.fit_generator(
train_generator,
steps_per_epoch=100000 // 64,
validation_data=val_generator,
validation_steps=10000 // 64,
epochs=25,
max_queue_size=64 * 2,
callbacks=callbacks, verbose=1
)
model.save(checkpoint_path)
# + id="k3l1wu4lQxOF" colab_type="code" colab={}
| Assignment 4/Resnet_ImageNet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="nU-KzysL9ICe" colab_type="code" colab={}
# #!pip install datadotworld
# #!pip install datadotworld[pandas]
# + id="fRXsQFx6_jBX" colab_type="code" colab={}
# #!dw configure
# + id="RGdCc4cc-p0M" colab_type="code" colab={}
from google.colab import drive
import pandas as pd
import numpy as np
import datadotworld as dw
# + id="fEqaW4WdAEJT" colab_type="code" colab={}
#drive.mount("/content/drive")
# + id="AY-LCuyjAdek" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="841015d6-7061-488f-a535-bad097f44de5" executionInfo={"status": "ok", "timestamp": 1581521169755, "user_tz": -60, "elapsed": 2058, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
# ls
# + id="PYNF3Cl-AlUw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="366da108-0bed-43ee-8fce-a7f0c28f3cda" executionInfo={"status": "ok", "timestamp": 1581521259788, "user_tz": -60, "elapsed": 878, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
# cd "drive/My Drive/Colab Notebooks/dataworkshop_matrix"
# + id="m-kboGZZA1c8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="73534e6e-a530-43aa-e55d-b199da360f7a" executionInfo={"status": "ok", "timestamp": 1581521269222, "user_tz": -60, "elapsed": 1889, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
# ls
# + id="foTx0E-UA8Vt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="fb6cd15a-f142-4069-f827-509c50d71b87" executionInfo={"status": "ok", "timestamp": 1581521762814, "user_tz": -60, "elapsed": 2217, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
# ls day3_matrix_one
# + id="JG65lL2nC0js" colab_type="code" colab={}
# !mkdir data
# + id="BfwTT5gUDJvH" colab_type="code" colab={}
# !echo 'data' > .gitignore
# + id="sOqBUE0RDapS" colab_type="code" colab={}
# !git add .gitignore
# + id="HalnALFVD_Ju" colab_type="code" colab={}
data = dw.load_dataset('datafiniti/mens-shoe-prices')
# + id="wOz28PCrEdZu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="93325b04-a3ed-4fca-d7dd-c3f74d355288" executionInfo={"status": "ok", "timestamp": 1581522294194, "user_tz": -60, "elapsed": 1895, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
df = data.dataframes['7004_1']
df.shape
# + id="U2qXcIobE11W" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 513} outputId="c55c201e-6c88-4f2c-efde-8af9de44a4dd" executionInfo={"status": "ok", "timestamp": 1581522323802, "user_tz": -60, "elapsed": 651, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
df.sample(5)
# + id="EdL0AV16FAjC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="57bc65cd-26f9-4263-81fc-279c48247620" executionInfo={"status": "ok", "timestamp": 1581522365944, "user_tz": -60, "elapsed": 643, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
df.columns
# + id="T-4sp_4mFJZb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="241e6a5b-5b92-4597-94b0-2f48eec1ba4c" executionInfo={"status": "ok", "timestamp": 1581522409411, "user_tz": -60, "elapsed": 656, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
df.prices_currency.unique()
# + id="GPuj79rhFXep" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 255} outputId="668c284a-729f-4428-c0c1-0bdccd0c53cc" executionInfo={"status": "ok", "timestamp": 1581522507813, "user_tz": -60, "elapsed": 652, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
df.prices_currency.value_counts(normalize=True)
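`value_counts(normalize=True)` turns raw counts into proportions. The same computation with only the standard library, on a toy currency list (made-up values, not the actual dataset):

```python
from collections import Counter

# What value_counts(normalize=True) computes: count each distinct value,
# then divide by the total number of observations.
currencies = ["USD", "USD", "USD", "EUR", "CAD", "USD"]
counts = Counter(currencies)
total = sum(counts.values())
proportions = {k: v / total for k, v in counts.items()}
print(proportions)  # USD appears 4 of 6 times -> 2/3
```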
# + id="VUj6naN-Fqrb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1255eecb-09be-4f9f-be3c-5f737b1df25c" executionInfo={"status": "ok", "timestamp": 1581522681298, "user_tz": -60, "elapsed": 703, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
df_usd = df[ df.prices_currency == 'USD' ].copy()
df_usd.shape
# + id="ytK_T6S_Ghh9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="eefe1226-2813-4fa4-e055-60f4c80332b9" executionInfo={"status": "ok", "timestamp": 1581522773878, "user_tz": -60, "elapsed": 786, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
df_usd.prices_amountmin.head()
# + id="vtFqkuByGsQ7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="fc61789b-2065-4cb0-bc44-bfe155b34528" executionInfo={"status": "ok", "timestamp": 1581523046602, "user_tz": -60, "elapsed": 719, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
df_usd['prices_amountmin'] = df_usd.prices_amountmin.astype(float)
df_usd['prices_amountmin'].hist()
# + id="CGbaxl8MHPaC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6ab0bfdf-559b-43af-aff3-ce62dedf686a" executionInfo={"status": "ok", "timestamp": 1581523065678, "user_tz": -60, "elapsed": 716, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
filter_max = np.percentile( df_usd['prices_amountmin'], 99 )
filter_max
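The 99th-percentile cutoff above is a common outlier-trimming step. A small standard-library sketch of the same idea on toy prices (linear-interpolation percentile, which matches NumPy's default method):

```python
# Linear-interpolation percentile, then filter values below the cutoff,
# as done with np.percentile above.
def percentile(values, q):
    xs = sorted(values)
    pos = (q / 100) * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (pos - lo)

prices = [10, 12, 11, 9, 13, 14, 500]  # one extreme outlier
cutoff = percentile(prices, 99)
trimmed = [p for p in prices if p < cutoff]
print(cutoff, trimmed)
```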
# + id="tMAytlWDHuES" colab_type="code" colab={}
df_usd_filter = df_usd[ df_usd['prices_amountmin'] < filter_max ]
# + id="jnzDPuM2IIpZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="72732a03-cc46-4f3f-fd53-4075648e73fe" executionInfo={"status": "ok", "timestamp": 1581523254992, "user_tz": -60, "elapsed": 822, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
df_usd_filter.prices_amountmin.hist(bins=100)
# + id="weok6OzgJCIA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="03910e45-2a5c-47aa-c27d-d9e84434dcc7" executionInfo={"status": "ok", "timestamp": 1581523427247, "user_tz": -60, "elapsed": 2174, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
# ls
# + id="Y29ikQLlJGvd" colab_type="code" colab={}
df.to_csv('data/shoes_prices.csv', index=False)
# + id="htrtmCxuMKet" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="268fc79a-6f87-4798-8183-f9dbf0c03c5b" executionInfo={"status": "ok", "timestamp": 1581524256366, "user_tz": -60, "elapsed": 1988, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
# ls day3_matrix_one
# + id="Rhnt5HzrNaXU" colab_type="code" colab={}
# !git add day3_matrix_one/day3.ipynb
# + id="S0DCl8UKNxLH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="deb59c22-72c3-498d-b131-14ee89cf5b87" executionInfo={"status": "ok", "timestamp": 1581524837142, "user_tz": -60, "elapsed": 4066, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
# !git commit -m "Read Men's Shoes Prices dataset from data.world"
# + id="qFVQryPROR2U" colab_type="code" colab={}
# !git config --global user.email "<EMAIL>"
# !git config --global user.name "Paulina"
# + id="pLxfuaNPPDWm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="57fbb2a7-dde5-4c5f-f190-949d3b03a91e" executionInfo={"status": "ok", "timestamp": 1581525028556, "user_tz": -60, "elapsed": 7713, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbVS0exRFMvJLxGv-ZtTyGfxdhMPjX7mq78-H-4MQ=s64", "userId": "08144210629576263422"}}
# !git push -u origin master
| day3_matrix_one/day3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import statistics
import matplotlib.pyplot as plt
import pymysql
import config
import transformations
import ml
from sklearn.model_selection import KFold
import pandas as pd
# +
conn = pymysql.connect(config.host, user=config.username,port=config.port,
passwd=config.password)
#gather all historical data to build model
RideWaits = pd.read_sql_query("call DisneyDB.RideWaitQuery", conn)
#transform data for model building
RideWaits = transformations.transformData(RideWaits)
RideWaits.info()
# -
originalData = RideWaits.copy()
originalData.info()
keyFeatures = ["Name","MagicHourType",
"Tier", "IntellectualProp",
"SimpleStatus", "ParkName",
"DayOfWeek", "Weekend", "TimeSinceOpen", "MinutesSinceOpen",
"CharacterExperience", "TimeSinceMidday",
"inEMH", "EMHDay"]
newModel = ml.buildModel(RideWaits, keyFeatures, "Wait")
originalData.info()
rides = originalData.Name.unique()
rides
keyFeatures
# +
#build the new data frame
from datetime import datetime
from datetime import timedelta
import numpy as np
today = datetime.now()
currentDate = datetime.date(today)
DayOfWeek = datetime.weekday(today)
Weekend = 1 if DayOfWeek == 5 or DayOfWeek == 6 else 0
print(today)
# print(DayOfWeek)
# print(Weekend)
newData = pd.DataFrame()
conn = pymysql.connect(config.host, user = config.username, port = config.port, passwd = config.password)
for ride in rides:
rideData = originalData[originalData['Name'] == ride]
rideStatic = {'Name': ride,
'Tier': rideData['Tier'].iloc[0],
'IntellectualProp': rideData['IntellectualProp'].iloc[0],
'ParkName': rideData['ParkName'].iloc[0],
'CharacterExperience': rideData['CharacterExperience'].iloc[0],
'DayOfWeek': DayOfWeek,
'Weekend': Weekend}
rideFrame = pd.DataFrame(rideStatic, index = [0])
getParkHours = "select * from DisneyDB.ParkHours phours join DisneyDB.Park park on phours.ParkId = park.Id where Name = '"+ rideStatic['ParkName'] + "' and Date = '" + str(currentDate)+"'"
parkHours = pd.read_sql_query(getParkHours, conn)
emhDay = 0 if parkHours.EMHOpen[0] == 'None' else 1
rideFrame['EMHDay'] = emhDay
parkHours['ParkOpen'] = pd.to_datetime(parkHours['ParkOpen'], format = '%I:%M %p').dt.strftime('%H:%M')
parkHours['ParkOpen'] = pd.to_datetime(parkHours['ParkOpen'], format = '%H:%M').dt.time
parkHours['ParkClose'] = pd.to_datetime(parkHours['ParkClose'], format = '%I:%M %p').dt.strftime('%H:%M')
parkHours['ParkClose'] = pd.to_datetime(parkHours['ParkClose'], format = '%H:%M').dt.time
parkHours["EMHOpen"] = pd.to_datetime(parkHours["EMHOpen"], format = '%I:%M %p', errors = 'coerce').dt.strftime('%H:%M')
parkHours["EMHClose"] = pd.to_datetime(parkHours["EMHClose"], format = '%I:%M %p', errors = 'coerce').dt.strftime('%H:%M')
parkHours["EMHOpen"] = pd.to_datetime(parkHours["EMHOpen"], format = '%H:%M', errors = 'coerce').dt.time
parkHours["EMHClose"] = pd.to_datetime(parkHours["EMHClose"], format = '%H:%M', errors = 'coerce').dt.time
parkOpen = parkHours.ParkOpen.iloc[0]
parkClose = parkHours.ParkClose.iloc[0]
emhOpen = parkHours.EMHOpen.iloc[0]
emhClose = parkHours.EMHClose.iloc[0]
if emhDay == 1:
if emhClose == parkOpen:
emhType = 'Morning'
else:
emhType = 'Night'
pOpenToday = today.replace(hour = parkOpen.hour, minute = parkOpen.minute, second = 0, microsecond = 0)
pCloseToday = today.replace(hour = parkClose.hour, minute= parkClose.minute, second = 0, microsecond = 0)
if pCloseToday < pOpenToday:
try:
pCloseToday = pCloseToday.replace(day = pCloseToday.day + 1)
except:
try:
pCloseToday = pCloseToday.replace(month = pCloseToday.month + 1, day = 1)
except:
pCloseToday = pCloseToday.replace(year = pCloseToday.year + 1, month = 1, day = 1)
# print("=========================")
# print("park open: "+ str(pOpenToday))
# print("park close: "+ str(pCloseToday))
if emhDay == 1:
eOpenToday = today.replace(hour = emhOpen.hour, minute = emhOpen.minute, second = 0, microsecond = 0)
if eOpenToday.hour < 6:
try:
eOpenToday = eOpenToday.replace(day = eOpenToday.day + 1)
except:
try:
eOpenToday = eOpenToday.replace(month = eOpenToday.month + 1, day = 1)
except:
eOpenToday = eOpenToday.replace(year = eOpenToday.year + 1, month = 1, day = 1)
eCloseToday = today.replace(hour = emhClose.hour, minute = emhClose.minute, second = 0, microsecond = 0)
if (eCloseToday < pOpenToday) and (emhType == 'Night'):
try:
eCloseToday = eCloseToday.replace(day = eCloseToday.day + 1)
except:
try:
eCloseToday = eCloseToday.replace(month = eCloseToday.month + 1, day = 1)
except:
eCloseToday = eCloseToday.replace(year = eCloseToday.year + 1, month =1, day = 1)
print("emh open: "+ str(eOpenToday))
print("emh close: "+ str(eCloseToday))
totalRideFrame = pd.DataFrame()
startTime = eOpenToday if emhDay == 1 and emhType == 'Morning' else pOpenToday
validTime = True
currentTime = startTime
midday = today.replace(hour = 14, minute = 0, second = 0, microsecond = 0)
while validTime:
timeSinceOpen = currentTime - startTime
timeSinceMidDay = currentTime - midday
if emhDay == 1:
if (currentTime >= eOpenToday) and (currentTime <= eCloseToday):
inEMH = 1
else:
inEMH = 0
else:
inEMH = 0
minutesSinceOpen = int(round(timeSinceOpen.total_seconds()/60))
timeSinceMidDayHours = int(round(abs(timeSinceMidDay.total_seconds()/3600)))
timeSinceOpenHours = int(round(timeSinceOpen.total_seconds()/3600))
currentRow = rideFrame.copy()
currentRow['TimeSinceOpen'] = timeSinceOpenHours
currentRow['MinutesSinceOpen'] = minutesSinceOpen
currentRow['TimeSinceMidday'] = timeSinceMidDayHours
currentRow['inEMH'] = inEMH
totalRideFrame = pd.concat([totalRideFrame,currentRow])
newTime = currentTime + timedelta(minutes=15)
if emhDay == 1:
if emhType == 'Morning':
if (newTime >= eOpenToday) and (newTime <= pCloseToday):
validTime = True
else:
validTime = False
else:
if (newTime <= pOpenToday) and (newTime <= eCloseToday):
validTime = True
else:
validTime = False
else:
if (newTime >= pOpenToday) and (newTime <= pCloseToday):
validTime = True
else:
validTime = False
currentTime = newTime
newData = pd.concat([newData, totalRideFrame])
# print([startTime, endTime,emhDay, inEMH])
conn.close()
#print(parkHours)
# -
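The loop above walks the operating day in 15-minute steps. The same slot generation in isolation, on a hypothetical 9:00–22:00 operating day (dates and hours made up for illustration):

```python
from datetime import datetime, timedelta

# Generate prediction time slots from open to close, stepping 15 minutes,
# mirroring the while-loop over currentTime above.
def time_slots(open_dt, close_dt, step_minutes=15):
    slots = []
    current = open_dt
    while current <= close_dt:
        slots.append(current)
        current += timedelta(minutes=step_minutes)
    return slots

day = datetime(2024, 5, 1)
slots = time_slots(day.replace(hour=9), day.replace(hour=22))
print(len(slots), slots[0].strftime("%H:%M"), slots[-1].strftime("%H:%M"))
```

Note that adding a `timedelta` rolls over day, month, and year boundaries automatically, which avoids the nested `replace`/`except` fallbacks used above for late-night closing times.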
newData
newData[newData["Name"] == '<NAME>']
newData["SimpleStatus"] = "Clear"
newModel
import sys
# !{sys.executable} -m pip install statsmodels
from statsmodels.tools import categorical
RideWaits.info()
| src/.ipynb_checkpoints/Model Run-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
df = pd.read_excel("stocks.xlsx",header=[0,1])
df
df_stacked = df.stack()
df_stacked
df_stacked.unstack()
df2 = pd.read_excel('stocks_3_levels.xlsx', header = [0, 1, 2])
df2
df2.stack(level = 2)
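Conceptually, `stack` pivots one column level of a wide table into the row index. A library-free sketch of that wide-to-long reshaping on a nested dict (toy ticker data with made-up numbers, not the spreadsheet above):

```python
# stack() in miniature: move the column key into the row key,
# turning {row: {col: value}} into {(row, col): value}.
wide = {
    "2020-01-01": {"facebook_price": 150, "google_price": 1400},
    "2020-01-02": {"facebook_price": 152, "google_price": 1410},
}
long = {(date, col): val
        for date, row in wide.items()
        for col, val in row.items()}
print(long[("2020-01-02", "google_price")])  # 1410
```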
| Programming Languages & Libraries/Python/Pandas Tutorial(codebasics)/Stack Unstack/stack_unstack.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (deep_learning)
# language: python
# name: deep_learning
# ---
# +
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 5x5 square convolution
# kernel
self.conv1 = nn.Conv2d(1,6,5)
self.conv2 = nn.Conv2d(6,16,5)
self.pool = nn.MaxPool2d(2,2)
self.fc1 = nn.Linear(16*5*5,120)
self.fc2 = nn.Linear(120,85)
self.fc3 = nn.Linear(85,10)
def forward(self,x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1,16*5*5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
print(net)
# -
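The `16*5*5` flatten size in `forward` follows from the valid-convolution and pooling arithmetic. A quick check of those shapes (pure arithmetic, no PyTorch needed):

```python
# Trace the spatial size through Net for a 32x32 input:
# a valid 5x5 conv shrinks each side by 4, a 2x2 max-pool halves it.
def conv_out(size, kernel):
    # Valid convolution: output = input - kernel + 1
    return size - kernel + 1

size = 32
size = conv_out(size, 5)   # conv1: 32 -> 28
size //= 2                 # pool:  28 -> 14
size = conv_out(size, 5)   # conv2: 14 -> 10
size //= 2                 # pool:  10 -> 5
flat = 16 * size * size    # 16 feature maps of 5x5 -> 400
print(size, flat)          # 5 400
```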
params = list(net.parameters())
print(len(params))
print(params[0].size())
input = Variable(torch.randn(1,1,32,32))
out = net(input)
print(out)
import torch
import torchvision
import torchvision.transforms as transforms
# +
transform = transforms.Compose([transforms.Pad(2),
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])  # MNIST is single-channel
trainset = torchvision.datasets.MNIST(root='../datasets/',
train=True,
download=True,
transform=transform)
trainloader = torch.utils.data.DataLoader(trainset,
batch_size=4,
shuffle=True,
num_workers=2)
# +
testset = torchvision.datasets.MNIST(root="../datasets/",
train=False,
download=True,
transform=transform)
testloader = torch.utils.data.DataLoader(testset,
batch_size=4,
shuffle=False,
num_workers=2)
# -
classes = tuple([str(x) for x in range(10)])
# +
import matplotlib.pyplot as plt
import numpy as np
dataiter = iter(trainloader)
images, labels = next(dataiter)
# -
plt.imshow(np.transpose(torchvision.utils.make_grid(images).numpy(),(1,2,0)))
plt.show()
print(" ".join("%5s" % classes[labels[j]] for j in range(4)))
# +
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
# +
for epoch in range(4):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
inputs, labels = data
inputs, labels = Variable(inputs), Variable(labels)
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if i % 2000 == 1999:
print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print("finished training")
# +
dataiter = iter(trainloader)
images, labels = next(dataiter)
plt.imshow(np.transpose(torchvision.utils.make_grid(images).numpy(),(1,2,0)))
plt.show()
print(" ".join("%5s" % classes[labels[j]] for j in range(4)))
# -
outputs = net(Variable(images))
# +
_, predicted = torch.max(outputs.data, 1)
print('Predicted: ', ' '.join(classes[predicted[j]]
for j in range(4)))
# +
correct = 0
total = 0
for data in testloader:
images, labels = data
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
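The accuracy loop above is just correct-prediction counting. The same computation on plain Python lists (made-up predictions and labels for illustration):

```python
# Accuracy = matching (prediction, label) pairs over total examples.
predicted = [3, 1, 4, 1, 5, 9, 2, 6]
labels    = [3, 1, 4, 1, 5, 8, 2, 7]
correct = sum(p == y for p, y in zip(predicted, labels))
accuracy = 100 * correct / len(labels)
print(f"{accuracy:.0f}%")  # 6 of 8 correct -> 75%
```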
# +
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
for data in testloader:
images, labels = data
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
c = (predicted == labels).squeeze()
for i in range(4):
        label = labels[i].item()
        class_correct[label] += c[i].item()
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
# -
| notebooks/MNIST with Torch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.0 64-bit
# metadata:
# interpreter:
# hash: 0cd9dfacd0aaad32f3368ab886f30578631b8ea573a55d5d46e6046682feb4d9
# name: python3
# ---
import sys
from os.path import isfile, join
from typing import List
sys.path.append("C:/GitHub/SchoolNetUtilities")
import wikipedia.alphabet as alphabet
import re
STOP_CHARS = ['.', '!', '?', '{', '}']
UNWANTED_CHARS = ['.', ',', ':', '!', '?', ';']
DATABASE = {}
countSen = []
countWords = []
def convertFile(text):
global DATABASE
global countSen
global countWords
fileWords = 0
fileSen = 0
allWords: List[str]
allWords = re.findall("[" + alphabet.getAlphabet() + "]+[ \n\.\,\;\:\!\?]", text)
startSen = False
for word in allWords:
if re.match("[" + alphabet.getAlphabet() + "]+[ \n]", word):
startSen = True
for ending in STOP_CHARS:
if word.endswith(ending):
if startSen:
startSen = False
fileSen += 1
cleanWord = word[:len(word) - 1]
print("Clean Word: ", cleanWord)
if cleanWord in DATABASE:
DATABASE[cleanWord] = DATABASE[cleanWord] + 1
else:
DATABASE[cleanWord] = 1
# Append statistics for each file
countWords.append(len(allWords))
countSen.append(fileSen)
convertFile("test Јас сум Митко. Прост пример реченица, се гледаме утре. something { something")
print("Count of Sen: ", countSen)
print("Count of Words:", countWords)
| notebooks/findWords.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Using NCEI geoportal REST API to collect information about IOOS Regional Association archived data
#
# Created: 2017-06-12
#
# IOOS regional associations archive their non-federal observational data with NOAA's National Centers for Environmental Information (NCEI). In this notebook we will use the [RESTful](https://github.com/Esri/geoportal-server/wiki/REST-API-Syntax) services of the [NCEI geoportal](https://www.ncei.noaa.gov/metadata/geoportal/#searchPanel) to collect metadata from the archive packages found in the NCEI archives. The metadata are stored in [ISO 19115-2](https://wiki.earthdata.nasa.gov/display/NASAISO/ISO+19115-2) XML files, which the NCEI geoportal uses for discovery of Archival Information Packages (AIPs). This example uses the ISO metadata records to display publication information and to plot the time coverage of each AIP at NCEI that meets the search criteria.
#
# First we update the namespaces dictionary from owslib to include the appropriate namespace reference for gmi and gml.
#
# For more information on ISO Namespaces see: https://geo-ide.noaa.gov/wiki/index.php?title=ISO_Namespaces
# +
from owslib.iso import namespaces
# Append gmi namespace to namespaces dictionary.
namespaces.update({"gmi": "http://www.isotc211.org/2005/gmi"})
namespaces.update({"gml": "http://www.opengis.net/gml/3.2"})
del namespaces[None]
# -
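# To see why this prefix-to-URI mapping matters: ElementTree-style searches resolve the prefixes in a path such as `gmd:fileIdentifier/gco:CharacterString` through exactly this kind of dictionary. A minimal sketch on a toy ISO-style record (the identifier value below is made up):

```python
import xml.etree.ElementTree as ET

# Toy ISO-style record; the identifier value is hypothetical.
xml_doc = (
    '<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd" '
    'xmlns:gco="http://www.isotc211.org/2005/gco">'
    '<gmd:fileIdentifier><gco:CharacterString>gov.noaa.nodc:0000000'
    '</gco:CharacterString></gmd:fileIdentifier>'
    '</gmd:MD_Metadata>'
)
ns = {
    "gmd": "http://www.isotc211.org/2005/gmd",
    "gco": "http://www.isotc211.org/2005/gco",
}
root = ET.fromstring(xml_doc)
# The 'gmd:' and 'gco:' prefixes in the path are resolved via the ns dictionary.
ident = root.find("gmd:fileIdentifier/gco:CharacterString", ns)
print(ident.text)  # gov.noaa.nodc:0000000
```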
# ### Now we select a Regional Association and platform
# This is where the user identifies the Regional Association and the platform type they are interested in. Change the RA acronym to the RA of interest. The user can also omit the Regional Association, by using `None`, to collect metadata information about all IOOS non-Federal observation data archived through the NCEI-IOOS pipeline.
#
# The options for platform include: `"HF Radar"`, `"Glider"`, and `"FIXED PLATFORM"`.
# +
# Select the RA: the acronym for the RA, or None to search across all RAs.
ra = 'CARICOOS'
# Identify the platform.
platform = '"FIXED PLATFORM"' # Options include: None, "HF Radar", "Glider", "FIXED PLATFORM"
# -
# ### Next we generate a geoportal query and georss feed
# To find more information about how to compile a geoportal query, have a look at [REST API Syntax](https://github.com/Esri/geoportal-server/wiki/REST-API-Syntax) and the [NCEI Search Tips](https://www.nodc.noaa.gov/search/granule/catalog/searchtips/searchtips.page) for the [NCEI geoportal](https://data.nodc.noaa.gov/geoportal/catalog/search/search.page). The example provided is specific to the NCEI-IOOS data pipeline project and only searches for non-federal timeseries data collected by each Regional Association.
#
# The query developed here can be updated to search for any Archival Information Packages at NCEI, therefore the user should develop the appropriate query using the [NCEI Geoportal](https://data.nodc.noaa.gov/geoportal/catalog/search/search.page) and update this portion of the code to identify the REST API of interest.
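# As a small illustration of the URL-encoding step used in the query assembly below, `urllib.parse.quote` percent-encodes the spaces and quote characters that appear in the query terms:

```python
from urllib.parse import quote

# A query fragment with the characters a geoportal query typically contains.
fragment = '"FIXED PLATFORM"'
encoded = quote(fragment)
print(encoded)  # %22FIXED%20PLATFORM%22
```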
# +
try:
from urllib.parse import quote
except ImportError:
from urllib import quote
# Generate geoportal query and georss feed.
# Base geoportal url.
baseurl = "https://www.ncei.noaa.gov/" "metadata/geoportal/opensearch" "?q="
# Identify the Regional Association
if ra is None:
reg_assoc = ''
else:
RAs = {
"AOOS": "Alaska Ocean Observing System",
"CARICOOS": "Caribbean Coastal Ocean Observing System",
"CeNCOOS": "Central and Northern California Coastal Ocean Observing System",
"GCOOS": "Gulf of Mexico Coastal Ocean Observing System",
"GLOS": "Great Lakes Observing System",
"MARACOOS": "Mid-Atlantic Regional Association Coastal Ocean Observing System",
"NANOOS": "Northwest Association of Networked Ocean Observing Systems",
"NERACOOS": "Northeastern Regional Association of Coastal Ocean Observing System",
"PacIOOS": "Pacific Islands Ocean Observing System",
"SCCOOS": "Southern California Coastal Ocean Observing System",
"SECOORA": "Southeast Coastal Ocean Observing Regional Association",
}
reg_assoc = '(dataThemeinstitutions_s:"%s" dataThemeprojects_s:"%s (%s)")'%(RAs[ra], RAs[ra], ra)
# Identify the project.
project = '"Integrated Ocean Observing System Data Assembly Centers Data Stewardship Program"'
# Identify the number of records to return: 1 to 1010.
records = "&start=1&num=1010"
# Identify the format of the response: csv.
response_format = "&f=csv"
if platform is not None:
    reg_assoc_plat = quote(reg_assoc + ' AND ' + platform)
else:
reg_assoc_plat = quote(reg_assoc)
# Combine the URL.
url = "{}{}{}{}".format(baseurl , reg_assoc_plat, '&filter=dataThemeprojects_s:', quote(project) + records + response_format)
print("Identified response format:\n{}".format(url))
print(
"\nSearch page response:\n{}".format(url.replace(response_format, "&f=searchPage"))
)
# -
# ### Time to query the portal and parse out the csv response
# Here we query the specified REST endpoint and, since we identified the csv format above, parse the response directly with the Pandas package. We also split the Data_Date_Range column into two columns, `data_start_date` and `data_end_date`, to have that useful information available.
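# The date-range split works like this on a toy frame (the values below are invented for illustration):

```python
import pandas as pd

toy = pd.DataFrame({"Data_Date_Range": ["2015-01-01 to 2016-06-30"]})
# Splitting on " to " with expand=True yields two columns we can assign at once.
toy[["data_start_date", "data_end_date"]] = toy["Data_Date_Range"].str.split(" to ", expand=True)
print(toy["data_start_date"].iloc[0], toy["data_end_date"].iloc[0])  # 2015-01-01 2016-06-30
```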
# +
import pandas as pd
import numpy as np
df = pd.read_csv(url)
df[['data_start_date','data_end_date']] = df['Data_Date_Range'].str.split(' to ',expand=True)
df['data_start_date'] = pd.to_datetime(df['data_start_date'])
df['data_end_date'] = pd.to_datetime(df['data_end_date']) + pd.Timedelta(np.timedelta64(1, "ms"))
df.head()
# -
# Now, let's pull out all the ISO metadata record links and print them out so the user can browse to the metadata record and look for the items they might be interested in.
# +
# parse the csv response
print("Found %i record(s)" % len(df))
for index, row in df.iterrows():
print('ISO19115-2 record:',row['Link_Xml']) # URL to ISO19115-2 record.
print('NCEI dataset metadata page: https://www.ncei.noaa.gov/access/metadata/landing-page/bin/iso?id=' + row['Id'] )
print('\n')
# -
# ### Let's collect what we have found
# Now that we have all the ISO metadata records we are interested in, it's time to do something fun with them. In this example we want to generate a timeseries plot of the data coverage for the "Southern California Coastal Ocean Observing System" stations we have archived at NCEI.
#
# First we need to collect some information. We loop through each iso record to collect metadata information about each package. The example here shows how to collect the following items:
# 1. NCEI Archival Information Package (AIP) Accession ID (7-digit Accession Number)
# 2. The first date the archive package was published.
# 3. The platform code identified from the provider.
# 4. The version number and date it was published.
# 5. The current AIP size, in MB.
#
# There are plenty of other metadata elements to collect from the ISO records, so we recommend browsing to one of the records and having a look at the items of interest to your community.
# +
# Process each iso record.
# %matplotlib inline
import xml.etree.ElementTree as ET
from owslib import util
from urllib.request import urlopen
df[['provider_platform_name','NCEI_accession_number','package_size_mb','submitter']] = ''
# For each accession in response.
for url in df['Link_Xml']:
iso = urlopen(url)
iso_tree = ET.parse(iso)
root = iso_tree.getroot()
vers_dict = dict()
# Collect Publication date information.
date_path = (
".//"
"gmd:identificationInfo/"
"gmd:MD_DataIdentification/"
"gmd:citation/"
"gmd:CI_Citation/"
"gmd:date/"
"gmd:CI_Date/"
"gmd:date/gco:Date"
)
# First published date.
pubdate = root.find(date_path, namespaces)
print("\nFirst published date = %s" % util.testXMLValue(pubdate))
# Data Temporal Coverage.
temporal_extent_path = (
".//"
"gmd:temporalElement/"
"gmd:EX_TemporalExtent/"
"gmd:extent/"
"gml:TimePeriod"
)
beginPosition = root.find(temporal_extent_path + '/gml:beginPosition', namespaces).text
endPosition = root.find(temporal_extent_path + '/gml:endPosition', namespaces).text
print("Data time coverage: %s to %s" % (beginPosition, endPosition))
# Collect keyword terms of interest.
for MD_keywords in root.iterfind('.//gmd:descriptiveKeywords/gmd:MD_Keywords', namespaces):
for thesaurus_name in MD_keywords.iterfind('.//gmd:thesaurusName/gmd:CI_Citation/gmd:title/gco:CharacterString', namespaces):
if thesaurus_name.text == "Provider Platform Names":
plat_name = MD_keywords.find('.//gmd:keyword/gco:CharacterString', namespaces).text
print("Provider Platform Code = %s" % plat_name)
df.loc[df.Link_Xml == url, ['provider_platform_name']] = plat_name
break
elif thesaurus_name.text == "NCEI ACCESSION NUMBER":
acce_no = MD_keywords.find('.//gmd:keyword/gmx:Anchor', namespaces).text
print("Accession:",acce_no)
df.loc[df.Link_Xml == url, ['NCEI_accession_number']] = acce_no
break
elif thesaurus_name.text == "NODC SUBMITTING INSTITUTION NAMES THESAURUS":
submitter = MD_keywords.find('.//gmd:keyword/gmx:Anchor', namespaces).text
print("Submitter:", submitter)
df.loc[df.Link_Xml == url, ['submitter']] = submitter
# Pull out the version information.
# Iterate through each processing step which is an NCEI version.
for process_step in root.iterfind(".//gmd:processStep", namespaces):
# Only parse gco:DateTime and gmd:title/gco:CharacterString.
vers_title = (
".//"
"gmi:LE_ProcessStep/"
"gmi:output/"
"gmi:LE_Source/"
"gmd:sourceCitation/"
"gmd:CI_Citation/"
"gmd:title/"
"gco:CharacterString"
)
vers_date = (
".//"
"gmi:LE_ProcessStep/"
"gmd:dateTime/"
"gco:DateTime"
)
if process_step.findall(vers_date, namespaces) and process_step.findall(vers_title, namespaces):
# Extract dateTime for each version.
            vers_datetime = pd.to_datetime(process_step.find(vers_date, namespaces).text)
            # Extract version number.
            version = process_step.find(vers_title, namespaces).text.split(" ")[-1]
            print("{} = {}".format(version, vers_datetime))
            vers_dict[version] = vers_datetime
df.loc[df.Link_Xml == url, ['version_info']] = [vers_dict]
# Collect package size information.
# Iterate through transfer size nodes.
for trans_size in root.iterfind(".//gmd:transferSize", namespaces):
if trans_size.find(".//gco:Real", namespaces).text:
sizes = trans_size.find(".//gco:Real", namespaces).text
print("Current AIP Size = %s MB" % sizes)
df.loc[df.Link_Xml == url, ['package_size_mb']] = float(sizes)
break
break
# -
# ### Create a timeseries plot of data coverage
# Now that we have a DataFrame with all the information we're interested in, let's make a time coverage plot for all the AIPs at NCEI.
# +
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
ypos = range(len(df))
fig, ax = plt.subplots(figsize=(15, 12))
# Plot the data
ax.barh(ypos, mdates.date2num(df['data_end_date']) - mdates.date2num(df['data_start_date']),
left = mdates.date2num(df['data_start_date']),
height = 0.5,
align = 'center')
xlim = (mdates.date2num(df['data_start_date'].min() - pd.DateOffset(months=1)),
        mdates.date2num(df['data_end_date'].max() + pd.DateOffset(months=1)))
ax.set_xlim(xlim)
ax.set(yticks = np.arange(0, len(df)))
ax.tick_params(which="both", direction="out")
ax.set_ylabel("NCEI Accession Number")
ax.set_yticklabels(df['NCEI_accession_number'])
ax.set_title('NCEI archive package time coverage')
ax.xaxis_date()
ax.set_xlabel('Date')
plt.grid(axis='x', linestyle='--')
# -
# This procedure has been developed as an example of how to use NCEI's geoportal REST APIs to collect information about packages that have been archived at NCEI. The intention is to provide some guidance and ways to collect this information without having to request it directly from NCEI. NCEI makes a significant number of metadata elements available through its ISO metadata records, so anyone interested in collecting other information from the records at NCEI should have a look at the ISO metadata records and determine which items are of interest to their community. Then, update the example code provided to collect that information.
# **Author:** <NAME>
| jupyterbook/content/code_gallery/data_access_notebooks/2017-06-12-NCEI_RA_archive_history.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + deletable=true editable=true
# import necessary modules
# uncomment to get plots displayed in notebook
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from classy import Class
from scipy.optimize import fsolve
# + deletable=true editable=true
# esthetic definitions for the plots
font = {'size' : 16, 'family':'STIXGeneral'}
axislabelfontsize='large'
matplotlib.rc('font', **font)
matplotlib.mathtext.rcParams['legend.fontsize']='medium'
# + deletable=true editable=true
# a function returning the three masses given the Delta m^2, the total mass, and the hierarchy (e.g. 'NH' or 'IH')
# taken from a piece of MontePython written by <NAME>
def get_masses(delta_m_squared_atm, delta_m_squared_sol, sum_masses, hierarchy):
    # any string containing the letter 'n' will be considered as referring to normal hierarchy
if 'n' in hierarchy.lower():
# Normal hierarchy massive neutrinos. Calculates the individual
# neutrino masses from M_tot_NH and deletes M_tot_NH
#delta_m_squared_atm=2.45e-3
#delta_m_squared_sol=7.50e-5
m1_func = lambda m1, M_tot, d_m_sq_atm, d_m_sq_sol: M_tot**2. + 0.5*d_m_sq_sol - d_m_sq_atm + m1**2. - 2.*M_tot*m1 - 2.*M_tot*(d_m_sq_sol+m1**2.)**0.5 + 2.*m1*(d_m_sq_sol+m1**2.)**0.5
m1,opt_output,success,output_message = fsolve(m1_func,sum_masses/3.,(sum_masses,delta_m_squared_atm,delta_m_squared_sol),full_output=True)
m1 = m1[0]
m2 = (delta_m_squared_sol + m1**2.)**0.5
m3 = (delta_m_squared_atm + 0.5*(m2**2. + m1**2.))**0.5
return m1,m2,m3
else:
# Inverted hierarchy massive neutrinos. Calculates the individual
# neutrino masses from M_tot_IH and deletes M_tot_IH
#delta_m_squared_atm=-2.45e-3
#delta_m_squared_sol=7.50e-5
delta_m_squared_atm = -delta_m_squared_atm
m1_func = lambda m1, M_tot, d_m_sq_atm, d_m_sq_sol: M_tot**2. + 0.5*d_m_sq_sol - d_m_sq_atm + m1**2. - 2.*M_tot*m1 - 2.*M_tot*(d_m_sq_sol+m1**2.)**0.5 + 2.*m1*(d_m_sq_sol+m1**2.)**0.5
m1,opt_output,success,output_message = fsolve(m1_func,sum_masses/3.,(sum_masses,delta_m_squared_atm,delta_m_squared_sol),full_output=True)
m1 = m1[0]
m2 = (delta_m_squared_sol + m1**2.)**0.5
m3 = (delta_m_squared_atm + 0.5*(m2**2. + m1**2.))**0.5
return m1,m2,m3
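# For reference, `get_masses` solves the following system for $m_1$ (with $\Delta m^2_{\rm atm}$ flipped in sign for the inverted hierarchy); `m1_func` above is the expanded form of the last equation:

```latex
\begin{aligned}
m_2 &= \sqrt{m_1^2 + \Delta m^2_{\mathrm{sol}}}, \\
m_3 &= \sqrt{\Delta m^2_{\mathrm{atm}} + \tfrac{1}{2}\left(m_1^2 + m_2^2\right)}, \\
0 &= \left(M_{\mathrm{tot}} - m_1 - m_2\right)^2 - m_3^2 .
\end{aligned}
```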
# + deletable=true editable=true
# test of this function, returning the 3 masses for a total mass of 0.1 eV
m1,m2,m3 = get_masses(2.45e-3,7.50e-5,0.1,'NH')
print('NH:',m1,m2,m3,m1+m2+m3)
m1,m2,m3 = get_masses(2.45e-3,7.50e-5,0.1,'IH')
print('IH:',m1,m2,m3,m1+m2+m3)
# + deletable=true editable=true
# The goal of this cell is to compute the ratio of P(k) for NH and IH with the same total mass
commonsettings = {'N_ur':0,
'N_ncdm':3,
'output':'mPk',
'P_k_max_1/Mpc':3.0,
    # The next setting gives higher precision (but significantly slower running):
    'ncdm_fluid_approximation':3,
    # This setting asks Class for more info on the ncdm sector:
    'background_verbose':1
}
# array of k values in 1/Mpc
kvec = np.logspace(-4,np.log10(3),100)
# array for storing legend
legarray = []
# loop over total mass values
for sum_masses in [0.1, 0.115, 0.13]:
# normal hierarchy
[m1, m2, m3] = get_masses(2.45e-3,7.50e-5, sum_masses, 'NH')
NH = Class()
NH.set(commonsettings)
NH.set({'m_ncdm':str(m1)+','+str(m2)+','+str(m3)})
NH.compute()
# inverted hierarchy
[m1, m2, m3] = get_masses(2.45e-3,7.50e-5, sum_masses, 'IH')
IH = Class()
IH.set(commonsettings)
IH.set({'m_ncdm':str(m1)+','+str(m2)+','+str(m3)})
IH.compute()
pkNH = []
pkIH = []
for k in kvec:
pkNH.append(NH.pk(k,0.))
pkIH.append(IH.pk(k,0.))
NH.struct_cleanup()
IH.struct_cleanup()
# extract h value to convert k from 1/Mpc to h/Mpc
h = NH.h()
plt.semilogx(kvec/h,1-np.array(pkNH)/np.array(pkIH))
legarray.append(r'$\Sigma m_i = '+str(sum_masses)+'$eV')
plt.axhline(0,color='k')
plt.xlim(kvec[0]/h,kvec[-1]/h)
plt.xlabel(r'$k [h \mathrm{Mpc}^{-1}]$')
plt.ylabel(r'$1-P(k)^\mathrm{NH}/P(k)^\mathrm{IH}$')
plt.legend(legarray)
# + deletable=true editable=true
plt.savefig('neutrinohierarchy.pdf')
| notebooks/neutrinohierarchy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Note: this lxml import is immediately shadowed by the xmltodict import in the
# next cell, so parser.parse() below refers to xmltodict.parse (which returns a dict).
# from lxml import etree as parser
# -
import xmltodict as parser
import pandas
from io import StringIO
import json
doc = parser.parse(open("/tmp/named_download.xml",'rb'))
doc
json.dumps(doc)
doc = json.loads(json.dumps(doc))
from IPython.display import JSON
JSON(filename='/tmp/named_download.xml')
doc
JSON(json.loads('{"a": [{"aa": 1, "bb":2}, {"cc":3, "dd":4}],"b": 5}'))
| docs/source/include/notebooks/orthoxml sandbox.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# -
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
train_df=pd.read_csv('train.csv')
# -
test_df=pd.read_csv('test.csv')
train_df
train_df['length']=train_df['text'].apply(len)
sns.barplot(data=train_df,x='target',y='length')
train_df.hist(column='length',by='target')
import string
import nltk
from nltk.corpus import stopwords
# nltk.download('stopwords')  # run once if the stopwords corpus is not installed yet
def text_process(tweet):
nopunc = [char for char in tweet if char not in string.punctuation]
nopunc = ''.join(nopunc)
return [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]
train_df.head()
from sklearn.feature_extraction.text import CountVectorizer,TfidfTransformer,TfidfVectorizer
# +
bow_transformer=CountVectorizer(analyzer=text_process).fit(train_df['text'])
# Note: the test set is transformed below with this vectorizer fitted on the
# training data; fitting a separate vectorizer on the test set would produce an
# incompatible vocabulary.
# -
bow_transformer
# +
tweet_bow=bow_transformer.transform(train_df['text'])
tweet_bow_test=bow_transformer.transform(test_df['text'])
# -
tweet_bow_test
tweet_bow
# +
tfidf_transformer=TfidfTransformer(use_idf=False).fit(tweet_bow)
tweet_tfidf=tfidf_transformer.transform(tweet_bow)
tweet_tfidf_test=tfidf_transformer.transform(tweet_bow_test)
# -
tweet_tfidf
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
# tweet_mnb=MultinomialNB().fit(tweet_tfidf,train_df['target'])
# +
tweet_model=LogisticRegression().fit(tweet_tfidf,train_df['target'])
# -
tweet_model
train_df['text']
test_df.info()
pred=tweet_model.predict(tweet_tfidf_test)
print(pred)
| Notebooks/NLP_With_Disaster_tweets/nlp-with-disaster-tweets .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Built-in Data Structures, Functions, and Files
# ## Data Structures and Sequences
# ### Tuple
tup = 4, 5, 6
tup
nested_tup = (4, 5, 6), (7, 8)
nested_tup
tuple([4, 0, 2])
tup = tuple('string')
tup
tup[0]
tup = tuple(['foo', [1, 2], True])
tup[2] = False  # raises TypeError: tuples are immutable
tup[1].append(3)
tup
(4, None, 'foo') + (6, 0) + ('bar',)
('foo', 'bar') * 4
# #### Unpacking tuples
tup = (4, 5, 6)
a, b, c = tup
b
tup = 4, 5, (6, 7)
a, b, (c, d) = tup
d
# tmp = a
# a = b
# b = tmp
a, b = 1, 2
a
b
b, a = a, b
a
b
seq = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
for a, b, c in seq:
print('a={0}, b={1}, c={2}'.format(a, b, c))
values = 1, 2, 3, 4, 5
a, b, *rest = values
a, b
rest
a, b, *_ = values
# #### Tuple methods
a = (1, 2, 2, 2, 3, 4, 2)
a.count(2)
# ### List
a_list = [2, 3, 7, None]
tup = ('foo', 'bar', 'baz')
b_list = list(tup)
b_list
b_list[1] = 'peekaboo'
b_list
gen = range(10)
gen
list(gen)
# #### Adding and removing elements
b_list.append('dwarf')
b_list
b_list.insert(1, 'red')
b_list
b_list.pop(2)
b_list
b_list.append('foo')
b_list
b_list.remove('foo')
b_list
'dwarf' in b_list
'dwarf' not in b_list
# #### Concatenating and combining lists
[4, None, 'foo'] + [7, 8, (2, 3)]
x = [4, None, 'foo']
x.extend([7, 8, (2, 3)])
x
# everything = []
# for chunk in list_of_lists:
# everything.extend(chunk)
# everything = []
# for chunk in list_of_lists:
# everything = everything + chunk
# #### Sorting
a = [7, 2, 5, 1, 3]
a.sort()
a
b = ['saw', 'small', 'He', 'foxes', 'six']
b.sort(key=len)
b
# #### Binary search and maintaining a sorted list
import bisect
c = [1, 2, 2, 2, 3, 4, 7]
bisect.bisect(c, 2)
bisect.bisect(c, 5)
bisect.insort(c, 6)
c
# #### Slicing
seq = [7, 2, 3, 7, 5, 6, 0, 1]
seq[1:5]
seq[3:4] = [6, 3]
seq
seq[:5]
seq[3:]
seq[-4:]
seq[-6:-2]
seq[::2]
seq[::-1]
# ### Built-in Sequence Functions
# #### enumerate
# i = 0
# for value in collection:
# # do something with value
# i += 1
# for i, value in enumerate(collection):
# # do something with value
some_list = ['foo', 'bar', 'baz']
mapping = {}
for i, v in enumerate(some_list):
mapping[v] = i
mapping
# #### sorted
sorted([7, 1, 2, 6, 0, 3, 2])
sorted('horse race')
# #### zip
seq1 = ['foo', 'bar', 'baz']
seq2 = ['one', 'two', 'three']
zipped = zip(seq1, seq2)
list(zipped)
seq3 = [False, True]
list(zip(seq1, seq2, seq3))
for i, (a, b) in enumerate(zip(seq1, seq2)):
print('{0}: {1}, {2}'.format(i, a, b))
pitchers = [('Nolan', 'Ryan'), ('Roger', 'Clemens'),
('Schilling', 'Curt')]
first_names, last_names = zip(*pitchers)
first_names
last_names
# #### reversed
list(reversed(range(10)))
# ### dict
empty_dict = {}
d1 = {'a' : 'some value', 'b' : [1, 2, 3, 4]}
d1
d1[7] = 'an integer'
d1
d1['b']
'b' in d1
d1[5] = 'some value'
d1
d1['dummy'] = 'another value'
d1
del d1[5]
d1
ret = d1.pop('dummy')
ret
d1
list(d1.keys())
list(d1.values())
d1.update({'b' : 'foo', 'c' : 12})
d1
# #### Creating dicts from sequences
# mapping = {}
# for key, value in zip(key_list, value_list):
# mapping[key] = value
mapping = dict(zip(range(5), reversed(range(5))))
mapping
# #### Default values
# if key in some_dict:
# value = some_dict[key]
# else:
# value = default_value
# value = some_dict.get(key, default_value)
words = ['apple', 'bat', 'bar', 'atom', 'book']
by_letter = {}
for word in words:
letter = word[0]
if letter not in by_letter:
by_letter[letter] = [word]
else:
by_letter[letter].append(word)
by_letter
# for word in words:
# letter = word[0]
# by_letter.setdefault(letter, []).append(word)
# from collections import defaultdict
# by_letter = defaultdict(list)
# for word in words:
# by_letter[word[0]].append(word)
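# The `defaultdict` variant sketched in the comments above runs as follows:

```python
from collections import defaultdict

words = ['apple', 'bat', 'bar', 'atom', 'book']
by_letter = defaultdict(list)
for word in words:
    # Missing keys are created automatically with an empty list.
    by_letter[word[0]].append(word)
print(dict(by_letter))  # {'a': ['apple', 'atom'], 'b': ['bat', 'bar', 'book']}
```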
# #### Valid dict key types
hash('string')
hash((1, 2, (2, 3)))
hash((1, 2, [2, 3])) # fails because lists are mutable
d = {}
d[tuple([1, 2, 3])] = 5
d
# ### set
set([2, 2, 2, 1, 3, 3])
{2, 2, 2, 1, 3, 3}
a = {1, 2, 3, 4, 5}
b = {3, 4, 5, 6, 7, 8}
a.union(b)
a | b
a.intersection(b)
a & b
c = a.copy()
c |= b
c
d = a.copy()
d &= b
d
my_data = [1, 2, 3, 4]
my_set = {tuple(my_data)}
my_set
a_set = {1, 2, 3, 4, 5}
{1, 2, 3}.issubset(a_set)
a_set.issuperset({1, 2, 3})
{1, 2, 3} == {3, 2, 1}
# ### List, Set, and Dict Comprehensions
# [expr for val in collection if condition]
# result = []
# for val in collection:
#     if condition:
#         result.append(expr)
strings = ['a', 'as', 'bat', 'car', 'dove', 'python']
[x.upper() for x in strings if len(x) > 2]
# dict_comp = {key-expr : value-expr for value in collection if condition}
# set_comp = {expr for value in collection if condition}
unique_lengths = {len(x) for x in strings}
unique_lengths
set(map(len, strings))
loc_mapping = {val : index for index, val in enumerate(strings)}
loc_mapping
# #### Nested list comprehensions
all_data = [['John', 'Emily', 'Michael', 'Mary', 'Steven'],
['Maria', 'Juan', 'Javier', 'Natalia', 'Pilar']]
# names_of_interest = []
# for names in all_data:
# enough_es = [name for name in names if name.count('e') >= 2]
# names_of_interest.extend(enough_es)
result = [name for names in all_data for name in names
if name.count('e') >= 2]
result
some_tuples = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
flattened = [x for tup in some_tuples for x in tup]
flattened
# flattened = []
#
# for tup in some_tuples:
# for x in tup:
# flattened.append(x)
[[x for x in tup] for tup in some_tuples]
# ## Functions
# def my_function(x, y, z=1.5):
# if z > 1:
# return z * (x + y)
# else:
# return z / (x + y)
# my_function(5, 6, z=0.7)
# my_function(3.14, 7, 3.5)
# my_function(10, 20)
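# The commented sketch above runs as follows:

```python
def my_function(x, y, z=1.5):
    if z > 1:
        return z * (x + y)
    else:
        return z / (x + y)

print(my_function(5, 6, z=0.7))   # ~0.0636
print(my_function(3.14, 7, 3.5))  # ~35.49
print(my_function(10, 20))        # 45.0
```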
# ### Namespaces, Scope, and Local Functions
# def func():
# a = []
# for i in range(5):
# a.append(i)
# a = []
# def func():
# for i in range(5):
# a.append(i)
a = None
def bind_a_variable():
global a
a = []
bind_a_variable()
print(a)
# ### Returning Multiple Values
# def f():
# a = 5
# b = 6
# c = 7
# return a, b, c
#
# a, b, c = f()
# return_value = f()
# def f():
# a = 5
# b = 6
# c = 7
# return {'a' : a, 'b' : b, 'c' : c}
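# Made runnable, the first sketch above behaves like this:

```python
def f():
    a = 5
    b = 6
    c = 7
    return a, b, c

# Unpack the three values directly...
a, b, c = f()
print(a, b, c)        # 5 6 7
# ...or keep them as a single tuple.
return_value = f()
print(return_value)   # (5, 6, 7)
```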
# ### Functions Are Objects
states = [' Alabama ', 'Georgia!', 'Georgia', 'georgia', 'FlOrIda',
'south carolina##', 'West virginia?']
# +
import re
def clean_strings(strings):
result = []
for value in strings:
value = value.strip()
value = re.sub('[!#?]', '', value)
value = value.title()
result.append(value)
return result
# -
clean_strings(states)
# +
def remove_punctuation(value):
return re.sub('[!#?]', '', value)
clean_ops = [str.strip, remove_punctuation, str.title]
def clean_strings(strings, ops):
result = []
for value in strings:
for function in ops:
value = function(value)
result.append(value)
return result
# -
clean_strings(states, clean_ops)
for x in map(remove_punctuation, states):
print(x)
# ### Anonymous (Lambda) Functions
# def short_function(x):
# return x * 2
#
# equiv_anon = lambda x: x * 2
# def apply_to_list(some_list, f):
# return [f(x) for x in some_list]
#
# ints = [4, 0, 1, 5, 6]
# apply_to_list(ints, lambda x: x * 2)
strings = ['foo', 'card', 'bar', 'aaaa', 'abab']
strings.sort(key=lambda x: len(set(list(x))))
strings
# ### Currying: Partial Argument Application
# def add_numbers(x, y):
# return x + y
# add_five = lambda y: add_numbers(5, y)
# from functools import partial
# add_five = partial(add_numbers, 5)
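# A runnable version of the currying sketches above (the lambda is renamed `add_five_lambda` here so both forms can coexist):

```python
from functools import partial

def add_numbers(x, y):
    return x + y

# Currying by hand with a lambda...
add_five_lambda = lambda y: add_numbers(5, y)
# ...and the equivalent using functools.partial.
add_five = partial(add_numbers, 5)
print(add_five_lambda(3), add_five(3))  # 8 8
```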
# ### Generators
some_dict = {'a': 1, 'b': 2, 'c': 3}
for key in some_dict:
print(key)
dict_iterator = iter(some_dict)
dict_iterator
list(dict_iterator)
def squares(n=10):
print('Generating squares from 1 to {0}'.format(n ** 2))
for i in range(1, n + 1):
yield i ** 2
gen = squares()
gen
for x in gen:
print(x, end=' ')
# #### Generator expressions
gen = (x ** 2 for x in range(100))
gen
# def _make_gen():
# for x in range(100):
# yield x ** 2
# gen = _make_gen()
sum(x ** 2 for x in range(100))
dict((i, i ** 2) for i in range(5))
# #### itertools module
import itertools
first_letter = lambda x: x[0]
names = ['Alan', 'Adam', 'Wes', 'Will', 'Albert', 'Steven']
for letter, names in itertools.groupby(names, first_letter):
print(letter, list(names)) # names is a generator
# ### Errors and Exception Handling
float('1.2345')
float('something')  # raises ValueError
def attempt_float(x):
try:
return float(x)
except:
return x
attempt_float('1.2345')
attempt_float('something')
float((1, 2))  # raises TypeError
def attempt_float(x):
try:
return float(x)
except ValueError:
return x
attempt_float((1, 2))
def attempt_float(x):
try:
return float(x)
except (TypeError, ValueError):
return x
# f = open(path, 'w')
#
# try:
# write_to_file(f)
# finally:
# f.close()
# f = open(path, 'w')
#
# try:
# write_to_file(f)
# except:
# print('Failed')
# else:
# print('Succeeded')
# finally:
# f.close()
# #### Exceptions in IPython
# In [10]: %run examples/ipython_bug.py
# ---------------------------------------------------------------------------
# AssertionError Traceback (most recent call last)
# /home/wesm/code/pydata-book/examples/ipython_bug.py in <module>()
# 13 throws_an_exception()
# 14
# ---> 15 calling_things()
#
# /home/wesm/code/pydata-book/examples/ipython_bug.py in calling_things()
# 11 def calling_things():
# 12 works_fine()
# ---> 13 throws_an_exception()
# 14
# 15 calling_things()
#
# /home/wesm/code/pydata-book/examples/ipython_bug.py in throws_an_exception()
# 7 a = 5
# 8 b = 6
# ----> 9 assert(a + b == 10)
# 10
# 11 def calling_things():
#
# AssertionError:
# ## Files and the Operating System
# %pushd book-materials
path = 'examples/segismundo.txt'
f = open(path)
# for line in f:
# pass
lines = [x.rstrip() for x in open(path)]
lines
f.close()
with open(path) as f:
lines = [x.rstrip() for x in f]
f = open(path)
f.read(10)
f2 = open(path, 'rb') # Binary mode
f2.read(10)
f.tell()
f2.tell()
import sys
sys.getdefaultencoding()
f.seek(3)
f.read(1)
f.close()
f2.close()
with open('tmp.txt', 'w') as handle:
handle.writelines(x for x in open(path) if len(x) > 1)
with open('tmp.txt') as f:
lines = f.readlines()
lines
import os
os.remove('tmp.txt')
# ### Bytes and Unicode with Files
with open(path) as f:
chars = f.read(10)
chars
with open(path, 'rb') as f:
data = f.read(10)
data
data.decode('utf8')
data[:4].decode('utf8')
sink_path = 'sink.txt'
with open(path) as source:
with open(sink_path, 'xt', encoding='iso-8859-1') as sink:
sink.write(source.read())
with open(sink_path, encoding='iso-8859-1') as f:
print(f.read(10))
os.remove(sink_path)
f = open(path)
f.read(5)
f.seek(4)
f.read(1)
f.close()
# %popd
# ## Conclusion
| ch03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# .. highlight:: none
#
# .. _syn-tools-pushfile:
#
# Synapse Tools - pushfile
# ========================
#
# The Synapse ``pushfile`` command can be used to upload files to a storage Axon (see Axon in the :ref:`devopsguide`) and optionally create an associated :ref:`type-file` node in a Cortex.
#
# Large-scale file ingest / upload is best performed using an automated feed / module / API. However, ``pushfile`` can be useful for uploading one-off files.
#
# Syntax
# ------
#
# ``pushfile`` is executed from an operating system command shell. The command usage is as follows:
#
# ::
#
# usage: synapse.tools.pushfile [-h] -a AXON [-c CORTEX] [-r] [-t TAGS] filenames [filenames ...]
#
# Where:
#
# - ``AXON`` is the path to a storage Axon.
#
# - See :ref:`cortex-connect` for the format to specify a path to an Axon and / or Cortex.
# - Axon and Cortex paths can also be specified using aliases defined in the user's ``.syn/aliases.yaml`` file.
#
# - ``CORTEX`` is the optional path to a Cortex where a corresponding ``file:bytes`` node should be created.
#
# - **Note:** while this is an optional parameter, it doesn’t make much sense to store a file in an Axon that can’t be referenced from within a Cortex.
#
# - ``TAGS`` is an optional list of tags to be applied to the ``file:bytes`` node created in the Cortex.
#
# - ``-t`` takes a comma separated list of tags.
# - The tag should be specified by name only (i.e., without the ``#`` character).
#
# - ``-r`` recursively finds all files when a glob pattern is used for a file name.
#
# - ``filenames`` is one or more names (with optional paths), or glob patterns, to the local file(s) to be uploaded.
#
# - If multiple file names are specified, any tag provided with the ``-t`` option will be added to **each** uploaded file.
#
# Example
# -------
#
# Upload the file ``myreport.pdf`` to the specified Axon, create a ``file:bytes`` node in the specified Cortex, and tag the ``file:bytes`` node with the tag ``#sometag`` (replace the Axon and Cortex paths below with the paths to your own Axon and Cortex; note that the command is wrapped for readability):
#
# ::
#
# python -m synapse.tools.pushfile -a tcp://axon.vertex.link:5555/axon00
# -c tcp://cortex.vertex.link:4444/cortex00 -t sometag /home/user/reports/myreport.pdf
#
# Executing the command will result in various status messages (lines are wrapped for readability):
#
# ::
#
# 2019-07-03 11:46:30,567 [INFO] log level set to DEBUG
# [common.py:setlogging:MainThread:MainProcess]
# 2019-07-03 11:46:30,568 [DEBUG] Using selector: EpollSelector
# [selector_events.py:__init__:MainThread:MainProcess]
#
# adding tags: ['sometag']
# Uploaded [myreport.pdf] to axon
# file: myreport.pdf (2606351) added to core
# (sha256:229cdde419ba9549023de39c6a0ca8af74b45fade2d7a22cdc4105a75cd40ab0) as myreport.pdf
#
# - ``adding tags: ['sometag']`` indicates the tag ``#sometag`` was applied to the ``file:bytes`` node.
# - ``Uploaded [myreport.pdf] to axon`` indicates the file was successfully uploaded to the storage Axon.
# - ``file: myreport.pdf (2606351) added to core (sha256:229cdde4...5cd40ab0) as myreport.pdf`` indicates the ``file:bytes`` node was created in the Cortex.
#
# - The message gives the new node’s primary property value (``sha256:229cdde419ba9549023de39c6a0ca8af74b45fade2d7a22cdc4105a75cd40ab0``) and also notes the ``:name`` secondary property value assigned to the node (``myreport.pdf``).
# - ``pushfile`` sets the ``file:bytes:name`` property to the base name of the local file being uploaded.
#
# If a given file already exists in the Axon (deconflicted based on the file’s SHA256 hash), ``pushfile`` will not re-upload the file. However, the command will still process any other options, including:
#
# - creating the ``file:bytes`` node in the Cortex if it does not already exist.
# - applying any specified tag.
# - setting (or overwriting) the ``:name`` property on any existing ``file:bytes`` node with the base name of the local file specified.
#
# For example (lines wrapped for readability):
#
# ::
#
# python -m synapse.tools.pushfile -a tcp://axon.vertex.link:5555/axon00
# -c tcp://cortex.vertex.link:4444/cortex00 -t anothertag,athirdtag
# /home/user/reports/anotherreport.pdf
#
# 2019-07-03 11:59:03,366 [INFO] log level set to DEBUG
# [common.py:setlogging:MainThread:MainProcess]
# 2019-07-03 11:59:03,367 [DEBUG] Using selector: EpollSelector
# [selector_events.py:__init__:MainThread:MainProcess]
#
#     adding tags: ['anothertag', 'athirdtag']
# Axon already had [anotherreport.pdf]
# file: anotherreport.pdf (2606351) added to core
# (sha256:229cdde419ba9549023de39c6a0ca8af74b45fade2d7a22cdc4105a75cd40ab0)
# as anotherreport.pdf
#
# Note the status indicating the Axon already had the specified file. Similarly, the status noting the ``file:bytes`` node was added to the Cortex lists the same SHA256 hash as our first upload (i.e., ``anotherreport.pdf`` has the same SHA256 hash as ``myreport.pdf``) and indicates the ``:name`` property has been updated (as ``anotherreport.pdf``).
#
# The ``file:bytes`` node for the uploaded report can now be viewed in the specified Cortex by lifting (see :ref:`storm-ref-lift`) the file using the SHA256 / primary property value from the ``pushfile`` status output:
#
# ::
#
# file:bytes=sha256:229cdde419ba9549023de39c6a0ca8af74b45fade2d7a22cdc4105a75cd40ab0
#
# file:bytes=sha256:229cdde419ba9549023de39c6a0ca8af74b45fade2d7a22cdc4105a75cd40ab0
# .created = 2019/07/03 18:46:40.542
# :md5 = 23a14d3a4508628e7e09a4c4868dfb17
# :mime = ??
#             :name = anotherreport.pdf
# :sha1 = 99b6b984988581cae681f65b92198ed77609bd11
# :sha256 = 229cdde419ba9549023de39c6a0ca8af74b45fade2d7a22cdc4105a75cd40ab0
# :size = 2606351
# #anothertag
# #athirdtag
# #sometag
# complete. 1 nodes in 3 ms (333/sec).
#
# Viewing the node’s properties, we see that Synapse has set the ``:name`` property and has calculated and set the MD5, SHA1, and SHA256 hash secondary property values, as well as the file’s size in bytes. Similarly, the tags from our two example ``pushfile`` commands have been added to the node.
#
# Alternatively, a glob pattern could be used to upload all PDF files in a given directory:
#
# ::
#
# python -m synapse.tools.pushfile -a tcp://axon.vertex.link:5555/axon00
# -c tcp://cortex.vertex.link:4444/cortex00 -t anothertag,athirdtag
# /home/user/reports/*.pdf
#
| docs/synapse/userguides/syn_tools_pushfile.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import result scores.
import csv
def read_results(csv_fname):
with open("../%s" % csv_fname, "rt") as csvfile:
reader = csv.reader(csvfile, delimiter=',')
next(reader) # ignore the header
for row in reader:
yield (tuple(row[0].split("-", 8)), float(row[1]))
datasets = ("test-2016", "dev", "test-2017")
results = {}
results["dev"] = dict(read_results("results-dev.csv"))
results["test-2016"] = dict(read_results("results-2016.csv"))
results["test-2017"] = dict(read_results("results-2017.csv"))
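# The key/value shape produced by `read_results` can be illustrated with an in-memory CSV (the row content below is hypothetical; the real `results-*.csv` files are not shown here):

```python
import csv
import io

# One hypothetical row: a dash-joined configuration string and its MAP score.
sample = io.StringIO(
    "config,map\n"
    "soft_terms-w2v.ql-mrel-early-soft-none-5-100-0.0,0.7654\n"
)
reader = csv.reader(sample, delimiter=',')
next(reader)  # skip the header, as read_results does
row = next(reader)

# split("-", 8) yields at most 9 fields, matching the lookup keys used below
key, score = tuple(row[0].split("-", 8)), float(row[1])
print(key)    # ('soft_terms', 'w2v.ql', 'mrel', 'early', 'soft', 'none', '5', '100', '0.0')
print(score)  # 0.7654
```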
# Import matrix densities.
from glob import glob
from math import sqrt
import re
FNAME_SUFFIX = ".log"
M = 466585
def read_densities(fname_prefix):
densities = {}
for log_fname in glob("%s*%s" % (fname_prefix, FNAME_SUFFIX)):
        config = tuple(re.split("-", log_fname.replace(fname_prefix, "").replace(FNAME_SUFFIX, "")))
# density = !sed -r -n '/by thresholding/s/- ([0-9.]*).*/\1/p' $log_fname
k = sqrt(float(density[0]) * M**2) # average number of non-zero elements per row / column
densities[config] = (k**2 - M) / M**2 # account for the fact that we don't need to store the diagonal
return densities
densities = {}
densities["mrel"] = read_densities("../datasets/QL-unannotated-data-subtaskA.Mrel-w2v.ql-")
densities["mlev"] = read_densities("../datasets/QL-unannotated-data-subtaskA.Mlev-")
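# `read_densities` converts a reported matrix density into `k`, the average number of non-zero elements per row, and then discounts the diagonal. A small numeric check of that conversion (the `reported` value is made up for illustration):

```python
from math import sqrt

M = 466585        # matrix dimension, as defined above
reported = 1e-4   # hypothetical density parsed from a log file

k = sqrt(reported * M**2)        # average non-zeros per row / column
off_diag = (k**2 - M) / M**2     # density after removing the stored diagonal

# Algebraically, off_diag == reported - 1/M: one diagonal entry per row.
print(k)         # 0.01 * M = 4665.85 non-zeros per row
print(off_diag)  # slightly below the reported density
```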
# Load the plotting libraries.
# %matplotlib inline
from matplotlib.pyplot import grid, legend, plot, savefig, show, xlim, xticks, ylim, xlabel, \
    ylabel, figure, axvline, axhline, xscale
linestyles = ['-o', '-x', '-^']
colors = [(r/255., g/255., b/255.) for (r, g, b) in [(31, 119, 180), (255, 127, 14), (44, 160, 44)]]
xlims = (0.0000003, 0.01)
figsize=(7, 3.5)
# # MAP-density plots
# ## Matrix $\mathbf S_{\textrm{rel}}$
figure(figsize=figsize)
for dataset_num, dataset in enumerate(datasets):
X = []
Y = []
for w2v_min_count in (0, 5, 50, 500, 5000):
density = densities["mrel"][(str(w2v_min_count), "100", "0.000000", "2.000000")]
map_score = results[dataset][("soft_terms", "w2v.ql", "mrel", "early", "soft", "none",
str(w2v_min_count), "100", "0.0")]
X.append(density)
Y.append(map_score)
plot(X, Y, linestyles[dataset_num], color=colors[dataset_num], label="%s dataset" % dataset)
axhline(results[dataset][("hard_terms",)], linestyle='-.', color=colors[dataset_num],
label="%s dataset MAP baseline" % dataset)
axvline(densities["mrel"][("5", "100", "0.000000", "2.000000")], linestyle='--',
label="density baseline (min_count=5)")
xscale("log")
xlim(xlims)
xlabel("Matrix density")
ylabel("MAP score")
legend()
grid(True)
savefig("fig1.pdf")
# Figure 1: The MAP score of soft terms (early weighting, soft normalization, no rounding) plotted against the density of the term similarity matrix $\mathbf S_{\textrm{rel}}$ as we hold the parameters $C=100$, and $\theta_3=0.0$ constant and decrease the parameter `min_count` from the value of 5000 (leftmost) to the value of 0 (rightmost) by factors of ten. For every dataset, the baseline MAP score corresponding to hard cosine similarity on terms is displayed for comparison. The baseline density corresponds to the parameter `min_count=5` used by Charlet and Damnati, 2017.
figure(figsize=figsize)
for dataset_num, dataset in enumerate(datasets):
X = []
Y = []
for w2v_knn in (1, 10, 100, 1000, 10000):
density = densities["mrel"][("5", str(w2v_knn), "0.000000", "2.000000")]
map_score = results[dataset][("soft_terms", "w2v.ql", "mrel", "early", "soft", "none",
"5", str(w2v_knn), "0.0")]
X.append(density)
Y.append(map_score)
plot(X, Y, linestyles[dataset_num], color=colors[dataset_num], label="%s dataset" % dataset)
axhline(results[dataset][("hard_terms",)], linestyle='-.', color=colors[dataset_num],
label="%s dataset MAP baseline" % dataset)
axvline(densities["mrel"][("5", "100", "0.000000", "2.000000")], linestyle='--',
label="density baseline ($C=100$)")
xscale("log")
xlim(xlims)
xlabel("Matrix density")
ylabel("MAP score")
legend()
grid(True)
savefig("fig2.pdf")
# Figure 2: The MAP score of soft terms (early weighting, soft normalization, no rounding) plotted against the density of the term similarity matrix $\mathbf S_{\textrm{rel}}$ as we hold the parameters `w2v_min_count=5`, and $\theta_3=0.0$ constant and increase the parameter $C$ from the value of 1 (leftmost) to the value of 10,000 (rightmost) by factors of ten. For every dataset, the baseline MAP score corresponding to hard cosine similarity on terms is displayed for comparison. The baseline density corresponds to the parameter $C=100$ used by Charlet and Damnati, 2017.
figure(figsize=figsize)
for dataset_num, dataset in enumerate(datasets):
X = []
Y = []
for m_threshold in (0.2, 0.4, 0.6, 0.8):
density = densities["mrel"][("5", "100", "%f" % m_threshold, "2.000000")]
map_score = results[dataset][("soft_terms", "w2v.ql", "mrel", "early", "soft", "none",
"5", "100", str(m_threshold))]
X.append(density)
Y.append(map_score)
plot(X, Y, linestyles[dataset_num], color=colors[dataset_num], label="%s dataset" % dataset)
axhline(results[dataset][("hard_terms",)], linestyle='-.', color=colors[dataset_num],
label="%s dataset MAP baseline" % dataset)
axvline(densities["mrel"][("5", "100", "0.000000", "2.000000")], linestyle='--',
label="density baseline ($\\theta_3=0.0$)")
xscale("log")
xlim(xlims)
xlabel("Matrix density")
ylabel("MAP score")
legend()
grid(True)
savefig("fig3.pdf")
# Figure 3: The MAP score of soft terms (early weighting, soft normalization, no rounding) plotted against the density of the term similarity matrix $\mathbf S_{\textrm{rel}}$ as we hold the parameters `w2v_min_count=5`, and $C=100$ constant and decrease the parameter $\theta_3$ from the value of 0.8 (leftmost) to the value of 0.2 (rightmost) by 0.2 at each step. For every dataset, the baseline MAP score corresponding to hard cosine similarity on terms is displayed for comparison. The baseline density corresponds to the parameter $\theta_3=0.0$ used by Charlet and Damnati, 2017; notice that the matrix $\mathbf S_{\textrm{rel}}$ contains no entries below 0.2.
# ## Matrix $\mathbf S_{\textrm{lev}}$
figure(figsize=figsize)
for dataset_num, dataset in enumerate(datasets):
X = []
Y = []
for w2v_knn in (1, 10, 100, 1000):
density = densities["mlev"][(str(w2v_knn), "0.000000")]
map_score = results[dataset][("soft_terms", "w2v.ql", "mlev", "early", "soft", "none",
"5", str(w2v_knn), "0.0")]
X.append(density)
Y.append(map_score)
plot(X, Y, linestyles[dataset_num], color=colors[dataset_num], label="%s dataset" % dataset)
axhline(results[dataset][("hard_terms",)], linestyle='-.', color=colors[dataset_num],
label="%s dataset MAP baseline" % dataset)
axvline(densities["mlev"][("100", "0.000000")], linestyle='--',
label="density baseline ($C=100$)")
xscale("log")
xlim(xlims)
xlabel("Matrix density")
ylabel("MAP score")
legend()
grid(True)
savefig("fig4.pdf")
# Figure 4: The MAP score of soft terms (early weighting, soft normalization, no rounding) plotted against the density of the term similarity matrix $\mathbf S_{\textrm{lev}}$ as we hold the parameters `w2v_min_count=5`, and $\theta_3=0.0$ constant and increase the parameter $C$ from the value of 1 (leftmost) to the value of 1000 (rightmost) by factors of ten. For every dataset, the baseline MAP score corresponding to hard cosine similarity on terms is displayed for comparison. The baseline density corresponds to the parameter $C=100$ used by Charlet and Damnati, 2017.
figure(figsize=figsize)
for dataset_num, dataset in enumerate(datasets):
X = []
Y = []
for m_threshold in (0.0, 0.2, 0.4, 0.6, 0.8):
density = densities["mlev"][("100", "%f" % m_threshold)]
        map_score = results[dataset][("soft_terms", "w2v.ql", "mlev", "early", "soft", "none",
"5", "100", str(m_threshold))]
X.append(density)
Y.append(map_score)
plot(X, Y, linestyles[dataset_num], color=colors[dataset_num], label="%s dataset" % dataset)
axhline(results[dataset][("hard_terms",)], linestyle='-.', color=colors[dataset_num],
label="%s dataset MAP baseline" % dataset)
axvline(densities["mlev"][("100", "0.000000")], linestyle='--',
label="density baseline ($\\theta_3=0.0$)")
xscale("log")
xlim(xlims)
xlabel("Matrix density")
ylabel("MAP score")
legend()
grid(True)
savefig("fig5.pdf")
# Figure 5: The MAP score of soft terms (early weighting, soft normalization, no rounding) plotted against the density of the term similarity matrix $\mathbf S_{\textrm{lev}}$ as we hold the parameters `w2v_min_count=5`, and $C=100$ constant and decrease the parameter $\theta_3$ from the value of 0.8 (leftmost) to the value of 0.0 (rightmost) by 0.2 at each step. For every dataset, the baseline MAP score corresponding to hard cosine similarity on terms is displayed for comparison. The baseline density corresponds to the parameter $\theta_3=0.0$ used by Charlet and Damnati, 2017.
| jupyter/map-density_plots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Estimators
# Several estimators are available in `sia`, depending on which type of model is being used. For Linear/Gaussian models, the `sia.KalmanFilter` estimator is optimal. For Nonlinear/Gaussian models, the suboptimal `sia.ExtendedKalmanFilter` or `sia.ParticleFilter` estimators can be used. For a general user-implemented MarkovProcess, only the `sia.ParticleFilter` can be used. Note that models in parentheses are implicitly supported due to model inheritance. Also note that both discrete-time and continuous-time variants are supported.
#
# | Estimator | Optimal | Supported dynamics and measurement models |
# | -------------------- | ----------- | -------------------------------------------------------------------------------- |
# | KalmanFilter | Yes | LinearGaussian |
# | ExtendedKalmanFilter | No | Linearizable (NonlinearGaussian, LinearGaussian) |
# | ParticleFilter | No | DynamicsModel, MeasurementModel (Linearizable, NonlinearGaussian, LinearGaussian) |
#
# This example compares these algorithms to estimate the states of a linear/Gaussian model, since it is the most widely supported model type. In practice, the `sia.KalmanFilter` should be used for this type of model.
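# For intuition, the recursion that `sia.KalmanFilter` implements for LinearGaussian models can be sketched in plain NumPy. This is an illustration of the standard discrete-time Kalman predict/update equations, not the `sia` API; the 1-D constant-state example at the bottom is made up.

```python
import numpy as np

def kf_step(x, P, u, y, F, G, Q, H, R):
    """One discrete Kalman filter step: predict through the dynamics,
    then update with the measurement."""
    x_pred = F @ x + G @ u
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Tiny 1-D example: noisy observations of a constant state at 2.0.
F = np.eye(1); G = np.zeros((1, 1)); Q = np.array([[1e-6]])
H = np.eye(1); R = np.array([[0.5]])
x, P = np.zeros(1), np.eye(1)
rng = np.random.default_rng(0)
for _ in range(200):
    y = np.array([2.0]) + rng.normal(0, np.sqrt(0.5), 1)
    x, P = kf_step(x, P, np.zeros(1), y, F, G, Q, H, R)
print(x)  # close to [2.0]; P has shrunk accordingly
```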
# +
# Import the libSIA python bindings and numpy
import pysia as sia
import numpy as np
# Import plotting helpers
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme(style="whitegrid")
# Set the generator instance seed for repeatability
generator = sia.Generator.instance()
generator.seed(10)
# -
# To illustrate use of the estimators, we use an example from Crassidis and Junkins 2012, p. 165, example 3.3. The filter estimates flight attitude rate and bias error. The state $x$ = (attitude, rate bias error), the input $u$ = (measured attitude rate), and the output is $y$ = (measured attitude). The linear system is
# $$
# x_k = \begin{pmatrix} 1 & -\Delta t \\ 0 & 1 \end{pmatrix} x_{k-1}
# + \begin{pmatrix} \Delta t \\ 0 \end{pmatrix} u_k
# + w_k \\
# y_k = \begin{pmatrix} 1 & 0 \end{pmatrix} x_k + v_k
# $$
# where $w_k \sim \mathcal{N}(0,Q)$ is process noise and $v_k \sim \mathcal{N}(0,R)$ is measurement noise, both determined via spectral power densities of the continuous time measurement.
# +
dt = 1
sn = 17E-6
su = np.sqrt(10)*1E-10
sv = np.sqrt(10)*1E-7
q00 = sv**2 * dt + su**2 * dt**3 / 3
q01 = - su**2 * dt**2 / 2
q11 = su**2 * dt
r = sn**2
dynamics = sia.LinearGaussianDynamics(
F=np.array([[1, -dt], [0, 1]]),
G=np.array([[dt], [0]]),
Q=np.array([[q00, q01], [q01, q11]]))
measurement = sia.LinearGaussianMeasurement(
H=np.array([[1, 0]]),
R=np.array([[r]]))
# -
# We initialize the estimators around the linear system. The estimators must be initialized with a prior state, which for the Kalman filter and extended Kalman filter is a Gaussian belief. For the particle filter, the prior is a particle belief.
# +
# Initialize KF and EKF using a Gaussian prior
prior = sia.Gaussian(mean=np.array([0, 0]), covariance=np.diag([1E-4, 1E-12]))
kf = sia.KF(dynamics, measurement, state=prior)
ekf = sia.EKF(dynamics, measurement, state=prior)
# Initialize PF using a particle prior
particles = sia.Particles.init(prior, num_particles=1000)
pf = sia.PF(dynamics, measurement, particles=particles, resample_threshold=0.5, roughening_factor=5e-9)
# -
# Use the `sia.Runner` class to simplify the task of simulating the system and performing the estimation step for a map/dictionary of estimators. Internally, this class steps the dynamics model, samples a measurement, and then calls `estimate()` for each of the provided estimators.
# +
# Initialize the runner with a buffer for n_steps
n_steps = 4000
estimators = {"kf": kf, "ekf": ekf, "pf": pf}
runner = sia.Runner(estimators, n_steps)
# Be sure to reset the filters explicitly before each new run
kf.reset(prior)
ekf.reset(prior)
pf.reset(particles)
# Initialize the ground truth state and step/estimate for n_steps
x = np.array([0, 4.8481e-7])
for k in range(0, n_steps):
x = runner.stepAndEstimate(dynamics, measurement, x, np.array([0.0011]))
# -
# We can access the recorded states via the `sia.Recorder` object. Here we plot the state estimate error and 3$\sigma$ bounds recorded by the runner for each of the estimators.
# +
# Extract mean and covariance from buffer
recorder = runner.recorder()
x = recorder.getStates()
y = recorder.getObservations()
# Plot the recorded states
t = np.arange(0, n_steps, 1)
f, ax = plt.subplots(2, 3, sharex=True, figsize=(20, 8))
sns.despine(f, left=True, bottom=True)
ylim = np.array([[2e-5, -2e-5], [1e-7, -1e-7]])
for j in range(len(estimators)):
name = list(estimators.keys())[j]
xe_mu = recorder.getEstimateMeans(name)
xe_var = recorder.getEstimateVariances(name)
for i in range(2):
plt.sca(ax[i, j])
ax[i, j].fill_between(t,
-3 * np.sqrt(xe_var[i, :]),
+3 * np.sqrt(xe_var[i, :]),
alpha=0.2, label="Estimate 3std bounds")
ax[i, j].plot(t, x[i, :] - xe_mu[i, :], lw=1, label="Estimate error")
ax[i, j].legend()
plt.ylim(ylim[i, :])
plt.ylabel("State " + str(i))
plt.xlabel("Timestep k")
plt.title("Estimator " + name)
plt.show()
| docs/estimators/estimators.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="N2_J4Rw2r0SQ" outputId="c2b52308-501a-43de-e27d-6c295a68d738"
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from tqdm import tqdm
# %matplotlib inline
from torch.utils.data import Dataset, DataLoader
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
from torch.nn import functional as F
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
# + [markdown] colab_type="text" id="F6fjud_Fr0Sa"
# # Generate dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 191} colab_type="code" id="CqdXHO0Cr0Sd" outputId="679683f2-e6fa-43a0-fbd1-670a7416bfcd"
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
# + colab={} colab_type="code" id="ddhXyODwr0Sk"
x = np.zeros((5000,2))
# + colab={} colab_type="code" id="DyV3N2DIr0Sp"
x[idx[0],:] = np.random.multivariate_normal(mean = [5,5],cov=[[0.1,0],[0,0.1]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [-6,7],cov=[[0.1,0],[0,0.1]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [-5,-4],cov=[[0.1,0],[0,0.1]],size=sum(idx[2]))
# x[idx[0],:] = np.random.multivariate_normal(mean = [5,5],cov=[[0.1,0],[0,0.1]],size=sum(idx[0]))
# x[idx[1],:] = np.random.multivariate_normal(mean = [6,6],cov=[[0.1,0],[0,0.1]],size=sum(idx[1]))
# x[idx[2],:] = np.random.multivariate_normal(mean = [5.5,6.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [-1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [0,2],cov=[[0.1,0],[0,0.1]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [0,-1],cov=[[0.1,0],[0,0.1]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [0,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [-0.5,-0.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [0.4,0.2],cov=[[0.1,0],[0,0.1]],size=sum(idx[9]))
# + colab={"base_uri": "https://localhost:8080/", "height": 282} colab_type="code" id="hJ8Jm7YUr0St" outputId="aa5734c1-2828-443a-8f7f-d17be0772800"
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
# + colab={} colab_type="code" id="UfFHcZJOr0Sz"
foreground_classes = {'class_0','class_1', 'class_2'}
background_classes = {'class_3','class_4', 'class_5', 'class_6','class_7', 'class_8', 'class_9'}
# + colab={"base_uri": "https://localhost:8080/", "height": 208} colab_type="code" id="OplNpNQVr0S2" outputId="4248ccb1-74b7-4d72-9f84-8bb3d2c8b185"
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="dwZVmmRBr0S8" outputId="61973860-dcca-417e-c5a2-4deeedf7451d"
a.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 330} colab_type="code" id="OoxzYI-ur0S_" outputId="053cc0f2-9316-4bc2-95f5-64836888910b"
np.reshape(a,(18,1))
# + colab={} colab_type="code" id="y4ruI0cxr0TE"
a=np.reshape(a,(3,6))
# + colab={"base_uri": "https://localhost:8080/", "height": 236} colab_type="code" id="RTUTFhJIr0TI" outputId="ed24273a-029c-4c7e-fb51-93c82ed62f06"
plt.imshow(a)
# + colab={} colab_type="code" id="jqbvfbwVr0TN"
desired_num = 3000
mosaic_list =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list.append(np.reshape(a,(18,1)))
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
# + colab={} colab_type="code" id="BOsFmWfMr0TR"
mosaic_list = np.concatenate(mosaic_list,axis=1).T
# print(mosaic_list)
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="C2PnW7aQr0TT" outputId="f4053c42-29f7-47c0-c6d5-8519a8f2d85a"
print(np.shape(mosaic_label))
print(np.shape(fore_idx))
# + colab={} colab_type="code" id="yL0BRf8er0TX"
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label, fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx], self.fore_idx[idx]
batch = 250
msd = MosaicDataset(mosaic_list, mosaic_label , fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
# + colab={} colab_type="code" id="ZVRXgwwNr0Tb"
class Wherenet(nn.Module):
def __init__(self):
super(Wherenet,self).__init__()
self.linear1 = nn.Linear(2,4)
self.linear2 = nn.Linear(4,8)
self.linear3 = nn.Linear(8,1)
def forward(self,z):
x = torch.zeros([batch,9],dtype=torch.float64)
y = torch.zeros([batch,2], dtype=torch.float64)
#x,y = x.to("cuda"),y.to("cuda")
for i in range(9):
x[:,i] = self.helper(z[:,2*i:2*i+2])[:,0]
#print(k[:,0].shape,x[:,i].shape)
        x = F.softmax(x, dim=1)  # attention weights (alphas) over the 9 patches
        for i in range(9):
            x1 = x[:, i]
            y = y + torch.mul(x1[:, None], z[:, 2*i:2*i+2])
return y , x
def helper(self,x):
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = self.linear3(x)
return x
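# `Wherenet` scores each of the 9 two-dimensional patches with a shared MLP, softmaxes the scores into attention weights (the alphas), and returns the alpha-weighted average patch. A minimal NumPy sketch of that combination step, with made-up scores in place of the MLP output:

```python
import numpy as np

def soft_select(patches, scores):
    """Softmax the per-patch scores and return the weighted average patch."""
    e = np.exp(scores - scores.max())      # numerically stable softmax
    alphas = e / e.sum()                   # attention weights over the patches
    return (alphas[:, None] * patches).sum(axis=0), alphas

patches = np.random.default_rng(0).normal(size=(9, 2))  # 9 points in 2-D
scores = np.zeros(9)
scores[3] = 10.0  # hypothetical scores: patch 3 dominates after softmax

avg, alphas = soft_select(patches, scores)
print(alphas.argmax())                           # 3
print(np.allclose(avg, patches[3], atol=1e-2))   # True: average ≈ patch 3
```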
# + colab={} colab_type="code" id="f-Ek05Kxr0Te"
trainiter = iter(train_loader)
input1, labels1, index1 = next(trainiter)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="SxEmWZI6r0Ti" outputId="46966489-348d-4373-d180-bd063654e4fa"
where = Wherenet().double()
where = where
out_where,alphas = where(input1)
out_where.shape,alphas.shape
# + colab={} colab_type="code" id="5_XeIUk0r0Tl"
class Whatnet(nn.Module):
def __init__(self):
super(Whatnet,self).__init__()
self.linear1 = nn.Linear(2,4)
self.linear2 = nn.Linear(4,3)
# self.linear3 = nn.Linear(8,3)
def forward(self,x):
x = F.relu(self.linear1(x))
#x = F.relu(self.linear2(x))
x = self.linear2(x)
return x
# + colab={} colab_type="code" id="l35i9bIlr0Tp"
what = Whatnet().double()
# what(out_where)
# + colab={} colab_type="code" id="tMEoCLo1r0Tt"
test_data_required = 1000
mosaic_list_test =[]
mosaic_label_test = []
fore_idx_test=[]
for j in range(test_data_required):
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_test.append(np.reshape(a,(18,1)))
mosaic_label_test.append(fg_class)
fore_idx_test.append(fg_idx)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="2Naetxvbr0Tw" outputId="3357f563-fc56-4b63-e3a8-82c3b1b146a0"
mosaic_list_test = np.concatenate(mosaic_list_test,axis=1).T
print(mosaic_list_test.shape)
# + colab={} colab_type="code" id="Os4KxqrFr0Tz"
test_data = MosaicDataset(mosaic_list_test,mosaic_label_test,fore_idx_test)
test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False)
# + colab={"base_uri": "https://localhost:8080/", "height": 382} colab_type="code" id="pPQY-Wpcr0T2" outputId="86823199-1ab6-4bf4-c9d8-1815d94ed46a"
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
col1=[]
col2=[]
col3=[]
col4=[]
col5=[]
col6=[]
col7=[]
col8=[]
col9=[]
col10=[]
col11=[]
col12=[]
col13=[]
criterion = nn.CrossEntropyLoss()
optimizer_where = optim.SGD(where.parameters(), lr=0.01, momentum=0.9)
optimizer_what = optim.SGD(what.parameters(), lr=0.01, momentum=0.9)
nos_epochs = 10
train_loss=[]
test_loss =[]
train_acc = []
test_acc = []
for epoch in range(nos_epochs): # loop over the dataset multiple times
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
running_loss = 0.0
cnt=0
iteration = desired_num // batch
#training data set
for i, data in enumerate(train_loader):
inputs , labels , fore_idx = data
#inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device)
# zero the parameter gradients
optimizer_what.zero_grad()
optimizer_where.zero_grad()
avg_inp,alphas = where(inputs)
outputs = what(avg_inp)
_, predicted = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
loss.backward()
optimizer_what.step()
optimizer_where.step()
running_loss += loss.item()
if cnt % 6 == 5: # print every 6 mini-batches
print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / 6))
running_loss = 0.0
cnt=cnt+1
if epoch % 1 == 0:
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
if epoch % 1 == 0:
col1.append(epoch)
col2.append(argmax_more_than_half)
col3.append(argmax_less_than_half)
col4.append(focus_true_pred_true)
col5.append(focus_false_pred_true)
col6.append(focus_true_pred_false)
col7.append(focus_false_pred_false)
#************************************************************************
#testing data set
with torch.no_grad():
focus_true_pred_true =0
focus_false_pred_true =0
focus_true_pred_false =0
focus_false_pred_false =0
argmax_more_than_half = 0
argmax_less_than_half =0
for data in test_loader:
inputs, labels , fore_idx = data
#inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device)
# print(inputs.shape, labels.shape)
avg_inp,alphas = where(inputs)
outputs = what(avg_inp)
_, predicted = torch.max(outputs.data, 1)
for j in range (batch):
focus = torch.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
argmax_more_than_half +=1
else:
argmax_less_than_half +=1
if(focus == fore_idx[j] and predicted[j] == labels[j]):
focus_true_pred_true += 1
elif(focus != fore_idx[j] and predicted[j] == labels[j]):
focus_false_pred_true +=1
elif(focus == fore_idx[j] and predicted[j] != labels[j]):
focus_true_pred_false +=1
elif(focus != fore_idx[j] and predicted[j] != labels[j]):
focus_false_pred_false +=1
col8.append(argmax_more_than_half)
col9.append(argmax_less_than_half)
col10.append(focus_true_pred_true)
col11.append(focus_false_pred_true)
col12.append(focus_true_pred_false)
col13.append(focus_false_pred_false)
torch.save(where.state_dict(),"where_model_epoch"+str(epoch)+".pt")
torch.save(what.state_dict(),"what_model_epoch"+str(epoch)+".pt")
print('Finished Training')
torch.save(where.state_dict(),"where_model_epoch"+str(nos_epochs)+".pt")
torch.save(what.state_dict(),"what_model_epoch"+str(nos_epochs)+".pt")
# + colab={} colab_type="code" id="UvP97PKnr0T5"
columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ]
df_train = pd.DataFrame()
df_test = pd.DataFrame()
df_train[columns[0]] = col1
df_train[columns[1]] = col2
df_train[columns[2]] = col3
df_train[columns[3]] = col4
df_train[columns[4]] = col5
df_train[columns[5]] = col6
df_train[columns[6]] = col7
df_test[columns[0]] = col1
df_test[columns[1]] = col8
df_test[columns[2]] = col9
df_test[columns[3]] = col10
df_test[columns[4]] = col11
df_test[columns[5]] = col12
df_test[columns[6]] = col13
# + colab={"base_uri": "https://localhost:8080/", "height": 363} colab_type="code" id="0hAVV2I5r0T7" outputId="885d7615-cd70-4c51-8998-6d64b1e5c6f6"
df_train
# + colab={"base_uri": "https://localhost:8080/", "height": 573} colab_type="code" id="s-ZXousDr0T-" outputId="676acc3a-4bd5-4180-ef2d-da3a6e3b0203"
plt.plot(col1,col2, label='argmax > 0.5')
plt.plot(col1,col3, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.title("On Training set")
plt.show()
plt.plot(col1,col4, label ="focus_true_pred_true ")
plt.plot(col1,col5, label ="focus_false_pred_true ")
plt.plot(col1,col6, label ="focus_true_pred_false ")
plt.plot(col1,col7, label ="focus_false_pred_false ")
plt.title("On Training set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("training data")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 363} colab_type="code" id="LQip8h8jr0UA" outputId="335c6380-bb0e-4775-9008-2ec39c1bee7d"
df_test
# + colab={"base_uri": "https://localhost:8080/", "height": 573} colab_type="code" id="oCJcmk19r0UD" outputId="97aab739-c523-4256-8760-488afece9134"
plt.plot(col1,col8, label='argmax > 0.5')
plt.plot(col1,col9, label='argmax < 0.5')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.title("On Testing set")
plt.show()
plt.plot(col1,col10, label ="focus_true_pred_true ")
plt.plot(col1,col11, label ="focus_false_pred_true ")
plt.plot(col1,col12, label ="focus_true_pred_false ")
plt.plot(col1,col13, label ="focus_false_pred_false ")
plt.title("On Testing set")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.xlabel("epochs")
plt.ylabel("Testing data")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="GJusn9Dsr0UF" outputId="2bbd8f38-8a2d-458c-bf82-09b78ac68b88"
print(x[0])
# + colab={"base_uri": "https://localhost:8080/", "height": 173} colab_type="code" id="KmTzai-gr0UH" outputId="32a41953-bce8-45e3-b4ad-7ba5461b9886"
for i in range(9):
print(x[0,2*i:2*i+2])
# + colab={} colab_type="code" id="YH_sdpkhr0UK"
| 4_synthetic_data_attention/toy_problem_mosaic/toy_problem_Mosaic_type2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Read data and create timeseries using PICES LME
#
# Look at SST, ocean currents, chl-a
# +
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import sys
import pandas as pd
sys.path.append('./../subroutines/')
import piceslocal
adir_data = './../data/'
# -
# ## Read in data (mean, climatology, anomaly) data for PICES region
#mn,clim,anom = pices.get_pices_data('current',13,'1992-01-01','2019-08-01')
#mn,clim,anom = pices.get_pices_data('chl',13,'1992-01-01','2019-08-01')
#mn,clim,anom = pices.get_pices_data('sst',13,'1992-01-01','2019-08-01')
mn,clim,anom = piceslocal.get_pices_data('wind',13,'1992-01-01','2019-08-01')
for a in anom:
anom[a].plot()
# # stop here
#
# The code below here is slow and takes a while to run because it is accessing data online
# # Read in PICES mask
#
# - Each dataset defines and orders its lat / lon coordinates differently.
# - There is a need for standardization in this area
# - The basic PICES mask is -180 to 180 lon and -90 to 90 lat
# - Below different maps are created for 0 to 360 lon
# - Then each of the two different lon maps are also copied to reverse lat, 90 to -90
ds_pices360 = piceslocal.get_pices_mask()
ds2=ds_pices360.sel(lat=slice(20,70),lon=slice(115,250))
ax=ds2.region_mask.plot()
ds_pices360.sel(lon=slice(115,250),lat=slice(20,70)).region_mask.plot(vmin=11,vmax=24)
ds_pices360.region_mask.plot.hist(bins=np.arange(10.5,27))
# # Don't run this
# # Create local files for currents, winds, chl-a from monthly files on hard drive
# +
#currents
file = './../data/sst.mnmean.nc'
ds_sst=xr.open_dataset(file)
ds_sst.close()
#aggr_url = 'https://coastwatch.pfeg.noaa.gov/erddap/griddap/jplOscar'
aggr_url='F:/data/sat_data/oscar/L4/oscar_1_deg/*.gz'
ds = xr.open_mfdataset(aggr_url,combine='nested').isel(depth=0).rename({'latitude':'lat','longitude':'lon'}).drop({'uf','vf'})
date1 = pd.Series(pd.period_range('1992-10-01', periods=27*12, freq='M'))
init=0
for d in date1:
dstr=str(d.year)+'-'+str(d.month).zfill(2)
dstr2=str(d.year)+'-'+str(d.month).zfill(2)+'-01'
ds2=ds.sel(time=slice(dstr,dstr)).sel(lon=slice(20.0,379.9)).drop({'date'}).load()
ds2 = ds2.assign_coords(lon=np.mod(ds2['lon'], 360)).sortby('lon').sortby('lat',ascending=True)
ds3 = ds2.interp(lat=ds_sst.lat,lon=ds_sst.lon,method='linear').mean('time',keep_attrs=True)
ds3=ds3.assign_coords(time=np.datetime64(dstr2))
ds3 = ds3.sel(lat=slice(20,70),lon=slice(115,250))
if init==0:
ts=ds3
init=init+1
else:
ts=xr.concat((ts,ds3),dim='time')
#break
ts.to_netcdf('./../data/cur.mnmean.nc',encoding={'u': {'dtype': 'int8', 'scale_factor': 0.015, '_FillValue': -128},'v': {'dtype': 'int8', 'scale_factor': 0.015, '_FillValue': -128},'mask': {'dtype': 'int8', 'scale_factor': 1, '_FillValue': -1}})
#ts.to_netcdf('.\data\cur.mnmean.nc',encoding={'u': {'dtype': 'int16', 'scale_factor': 0.001, '_FillValue': -9999},'v': {'dtype': 'int16', 'scale_factor': 0.001, '_FillValue': -9999},'mask': {'dtype': 'int8', 'scale_factor': 1, '_FillValue': -1}})
# -
#chl-a
tstr=[]
aggr_url='f:/data/ocean_color/month/all/*.nc'
ds = xr.open_mfdataset(aggr_url,concat_dim='time',combine='nested')
ds = ds.sortby(ds.lat)
ds.coords['lon'] = np.mod(ds['lon'], 360)
ds = ds.sortby(ds.lon)
ds = ds.drop({'CHL1_flags','CHL1_error'})
date1 = pd.Series(pd.period_range('1997-09-01', periods=265, freq='M'))
for d in date1:
dstr2=str(d.year)+'-'+str(d.month).zfill(2)+'-01'
tstr.append(np.datetime64(dstr2))
ds=ds.assign_coords(time=tstr)
ds=ds.sel(lat=slice(20,70),lon=slice(115,250))
file = piceslocal.get_filename('chl')
ds.to_netcdf(file,encoding={'CHL1_mean': {'dtype': 'int16', 'scale_factor': 0.001, '_FillValue': -9999}})
#wind
file = piceslocal.get_filename('wind')
aggr_url = 'https://coastwatch.pfeg.noaa.gov/erddap/griddap/erdlasFnWind10'
ds = xr.open_dataset(aggr_url).rename({'latitude':'lat','longitude':'lon'}).drop({'taux_mean','tauy_mean','curl','uv_mag_mean'})
ds=ds.sel(lat=slice(20,70),lon=slice(115,250))
ds.to_netcdf(file,encoding={'u_mean': {'dtype': 'int16', 'scale_factor': 0.001, '_FillValue': -9999},'v_mean': {'dtype': 'int16', 'scale_factor': 0.001, '_FillValue': -9999}})
file = './../data/sst.mnmean.nc'
ds=xr.open_dataset(file)
ds.close()
ds=ds.sel(lat=slice(20,70),lon=slice(115,250))
file = piceslocal.get_filename('sst')
ds.to_netcdf(file+'2',encoding={'sst': {'dtype': 'int16', 'scale_factor': 0.01, '_FillValue': -9999}})
# +
#currents aviso
file = './../data/sst.mnmean.nc'
ds_sst=xr.open_dataset(file)
ds_sst.close()
ds_sst = ds_sst.sel(lat=slice(20,70),lon=slice(115,250))
from pathlib import Path
filelist=[]
dir_data = 'F:/data/sat_data/aviso/data/'
for filename in Path(dir_data).rglob('*.nc'):
filelist.append(filename)
ds=xr.open_mfdataset(filelist,combine='nested',concat_dim='time').drop({'ugosa','vgosa','err'}).rename({'latitude':'lat','longitude':'lon'})
#ds = ds.assign_coords(lon=(((ds.lon + 180) % 360) - 180)).sortby('lon').sortby('lat')
#ds = ds.coords['lon'] = np.mod(ds['lon'], 360).sortby('lon').sortby('lat')
ds = ds.sel(lat=slice(20,70),lon=slice(115,250))
ds3 = ds.interp(lat=ds_sst.lat,lon=ds_sst.lon,method='linear') #.mean('time',keep_attrs=True)
ds3 = ds3.resample(time='1M').mean('time',keep_attrs=True)
ds3=ds3.drop({'lat_bnds','lon_bnds','crs'}).drop({'nv'})
ds3=ds3.rename({'ugos':'u','vgos':'v'})
ds3=ds3.load()
ds=ds3.drop({'sla','adt'})
ds.to_netcdf('./../data/cur.mnmean_aviso.nc',encoding={'u': {'dtype': 'int8', 'scale_factor': 0.03, '_FillValue': -128},
'v': {'dtype': 'int8', 'scale_factor': 0.03, '_FillValue': -128}})
ds=ds3.drop({'u','v','sla'})
ds.to_netcdf('./../data/adt.mnmean_aviso.nc',encoding={'adt': {'dtype': 'int8', 'scale_factor': 0.03, '_FillValue': -128}})
ds=ds3.drop({'adt','u','v'})
ds.to_netcdf('./../data/sla.mnmean_aviso.nc',encoding={'sla': {'dtype': 'int8', 'scale_factor': 0.02, '_FillValue': -128}})
# -
| utils/make_data_notebooks/.ipynb_checkpoints/Create data timeseries in PICES regions-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] hide_input=false
# ## Computer Vision Interpret
# + [markdown] hide_input=false
# [`vision.interpret`](/vision.interpret.html#vision.interpret) is the module that implements custom [`Interpretation`](/train.html#Interpretation) classes for different vision tasks by inheriting from it.
# + hide_input=true
from fastai.gen_doc.nbdoc import *
from fastai.vision import *
from fastai.vision.interpret import *
# + hide_input=true
show_doc(SegmentationInterpretation)
# + hide_input=true
show_doc(SegmentationInterpretation.top_losses)
# + hide_input=true
show_doc(SegmentationInterpretation._interp_show)
# + hide_input=true
show_doc(SegmentationInterpretation.show_xyz)
# + hide_input=true
show_doc(SegmentationInterpretation._generate_confusion)
# + hide_input=true
show_doc(SegmentationInterpretation._plot_intersect_cm)
# -
# Let's show how [`SegmentationInterpretation`](/vision.interpret.html#SegmentationInterpretation) can be used once we train a segmentation model.
# ### train
camvid = untar_data(URLs.CAMVID_TINY)
path_lbl = camvid/'labels'
path_img = camvid/'images'
codes = np.loadtxt(camvid/'codes.txt', dtype=str)
get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
data = (SegmentationItemList.from_folder(path_img)
.split_by_rand_pct()
.label_from_func(get_y_fn, classes=codes)
.transform(get_transforms(), tfm_y=True, size=128)
.databunch(bs=16, path=camvid)
.normalize(imagenet_stats))
data.show_batch(rows=2, figsize=(7,5))
learn = unet_learner(data, models.resnet18)
learn.fit_one_cycle(3,1e-2)
learn.save('mini_train')
# + hide_input=true
jekyll_warn("The following results will not make much sense with this underperforming model, but the functionality will be explained with ease")
# -
# ### interpret
interp = SegmentationInterpretation.from_learner(learn)
# Since `FlattenedLoss of CrossEntropyLoss()` is used we reshape and then take the mean of pixel losses per image. In order to do so we need to pass `sizes:tuple` to `top_losses()`
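# The reshape-and-mean step described above can be sketched as follows (an illustration of the idea only, not the fastai internals; shapes are hypothetical):

```python
import numpy as np

# Given flattened per-pixel losses for a batch of images, reshape to
# (n_images, h * w) and take the mean along the pixel axis to obtain one
# loss per image, which is what top_losses(sizes=(128, 128)) does conceptually.
n_images, h, w = 4, 128, 128
pixel_losses = np.random.rand(n_images * h * w)   # flattened pixel losses
per_image = pixel_losses.reshape(n_images, h * w).mean(axis=1)
assert per_image.shape == (n_images,)
```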
top_losses, top_idxs = interp.top_losses(sizes=(128,128))
# + hide_input=true
top_losses, top_idxs
# + hide_input=true
plt.hist(to_np(top_losses), bins=20);plt.title("Loss Distribution");
# -
# Next, we can generate a confusion matrix similar to what we usually have for classification. Two confusion matrices are generated: `mean_cm` which represents the global label performance and `single_img_cm` which represents the same thing but for each individual image in dataset.
#
# Values in the matrix are calculated as:
#
# \begin{align}
# \ CM_{ij} & = IOU(Predicted , True | True) \\
# \end{align}
#
# Or in plain English: the ratio of pixels assigned the predicted label among the pixels that carry the true label.
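# That ratio can be sketched for a tiny hypothetical 2-class case (this is an illustration of the definition, not fastai's implementation):

```python
import numpy as np

# CM[i, j] = fraction of pixels whose true label is i that were predicted as j.
true = np.array([0, 0, 0, 1, 1, 1, 1, 1])
pred = np.array([0, 0, 1, 1, 1, 1, 0, 0])

n_classes = 2
cm = np.zeros((n_classes, n_classes))
for i in range(n_classes):
    for j in range(n_classes):
        cm[i, j] = np.mean(pred[true == i] == j)

# Each row sums to 1; the diagonal holds the per-class
# "intersection given true label" ratio.
```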
learn.data.classes
mean_cm, single_img_cm = interp._generate_confusion()
# + hide_input=true
mean_cm.shape, single_img_cm.shape
# -
# `_plot_intersect_cm` first displays a dataframe showing per class score using the IOU definition we made earlier. These are the diagonal values from the confusion matrix which is displayed after.
#
# `NaN` values indicate that these labels were not present in the dataset, in this case the validation set. This can also help you construct a more representative validation set.
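# The per-class scores described above are simply the diagonal of the confusion matrix; a minimal sketch with hypothetical values (not the fastai code):

```python
import numpy as np

# Diagonal entries are the "intersection given true label" score per class;
# a class absent from the validation set would show up as NaN.
mean_cm = np.array([[0.9, 0.05, 0.05],
                    [0.2, 0.7,  0.1 ],
                    [0.3, 0.1,  0.6 ]])
per_class_score = np.diag(mean_cm)   # one score per class
```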
# + hide_input=false
df = interp._plot_intersect_cm(mean_cm, "Mean of Ratio of Intersection given True Label")
# -
# Next let's look at the single worst prediction in our dataset. It looks like this dummy model just predicts everything as `Road` :)
i = top_idxs[0]
df = interp._plot_intersect_cm(single_img_cm[i], f"Ratio of Intersection given True Label, Image:{i}")
# Finally we will visually inspect this single prediction
interp.show_xyz(i, sz=15)
# + hide_input=true
jekyll_warn("""With matplotlib colormaps the max number of unique qualitative colors is 20.
So if len(classes) > 20 then close class indexes may be plotted with the same color.
Let's fix this together :)""")
# + hide_input=true
interp.c2i
# + hide_input=true
show_doc(ObjectDetectionInterpretation)
# + hide_input=true
jekyll_warn("ObjectDetectionInterpretation is not implemented yet. Feel free to implement it :)")
# -
# ## Undocumented Methods - Methods moved below this line will intentionally be hidden
# ## New Methods - Please document or move to the undocumented section
| docs_src/vision.interpret.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: pymedphys-master
# language: python
# name: pymedphys-master
# ---
# +
import zipfile
from urllib import request
import pathlib
import collections
import warnings
import random
import copy
import numpy as np
import matplotlib.pyplot as plt
import imageio
import IPython
import tensorflow as tf
import ipywidgets
# +
# url = 'https://github.com/pymedphys/data/releases/download/mini-lung/mini-lung-medical-decathlon.zip'
# filename = url.split('/')[-1]
# +
# request.urlretrieve(url, filename)
# -
data_path = pathlib.Path('data')
# +
# with zipfile.ZipFile(filename, 'r') as zip_ref:
# zip_ref.extractall(data_path)
# +
image_paths = sorted(data_path.glob('**/*_image.png'))
mask_paths = [
path.parent.joinpath(path.name.replace('_image.png', '_mask.png'))
for path in image_paths
]
# +
image_mask_pairs = collections.defaultdict(lambda: [])
for image_path, mask_path in zip(image_paths, mask_paths):
patient_label = image_path.parent.name
image = imageio.imread(image_path)
mask = imageio.imread(mask_path)
image_mask_pairs[patient_label].append((image, mask))
# -
def get_contours_from_mask(mask, contour_level=128):
if np.max(mask) < contour_level:
return []
with warnings.catch_warnings():
warnings.simplefilter("ignore", UserWarning)
fig, ax = plt.subplots()
cs = ax.contour(range(mask.shape[1]), range(mask.shape[0]), mask, [contour_level])  # x spans columns, y spans rows
contours = [path.vertices for path in cs.collections[0].get_paths()]
plt.close(fig)
return contours
def display(patient_label, chosen_slice):
image = image_mask_pairs[patient_label][chosen_slice][0]
mask = image_mask_pairs[patient_label][chosen_slice][1]
plt.figure(figsize=(10,10))
plt.imshow(image, vmin=0, vmax=100)
contours = get_contours_from_mask(mask)
for contour in contours:
plt.plot(*contour.T, 'r', lw=3)
def view_patient(patient_label):
def view_slice(chosen_slice):
display(patient_label, chosen_slice)
number_of_slices = len(image_mask_pairs[patient_label])
ipywidgets.interact(view_slice, chosen_slice=ipywidgets.IntSlider(min=0, max=number_of_slices - 1, step=1, value=0));
patient_labels = sorted(list(image_mask_pairs.keys()))
# patient_labels
ipywidgets.interact(view_patient, patient_label=patient_labels);
has_tumour_map = collections.defaultdict(lambda: [])
for patient_label, pairs in image_mask_pairs.items():
for image, mask in pairs:
has_tumour_map[patient_label].append(np.max(mask) >= 128)
# +
tumour_to_slice_map = collections.defaultdict(lambda: collections.defaultdict(lambda: []))
for patient_label, tumour_slices in has_tumour_map.items():
for i, has_tumour in enumerate(tumour_slices):
tumour_to_slice_map[patient_label][has_tumour].append(i)
# +
training = patient_labels[0:50]
test = patient_labels[50:60]
validation = patient_labels[60:]
len(validation)
# -
len(test)
num_images_per_patient = 5
batch_size = len(training) * num_images_per_patient
batch_size
random.uniform(0, 1)
# +
# # random.shuffle?
# +
# tensor_image_mask_pairs = collections.defaultdict(lambda: [])
# for patient_label, pairs in image_mask_pairs.items():
# for image, mask in pairs:
# tensor_image_mask_pairs[patient_label].append((
# tf.convert_to_tensor(image[:,:,None], dtype=tf.float32) / 255 * 2 - 1,
# tf.convert_to_tensor(mask[:,:,None], dtype=tf.float32) / 255 * 2 - 1
# ))
# -
def random_select_from_each_patient(patient_labels, tumour_class_probability):
patient_labels_to_use = copy.copy(patient_labels)
random.shuffle(patient_labels_to_use)
images = []
masks = []
for patient_label in patient_labels_to_use:
if random.uniform(0, 1) < tumour_class_probability:
find_tumour = True
else:
find_tumour = False
slice_to_use = random.choice(tumour_to_slice_map[patient_label][find_tumour])
mask = image_mask_pairs[patient_label][slice_to_use][1]
if find_tumour:
assert np.max(mask) >= 128
else:
assert np.max(mask) < 128
images.append(image_mask_pairs[patient_label][slice_to_use][0])
masks.append(image_mask_pairs[patient_label][slice_to_use][1])
return images, masks
# +
def create_pipeline_dataset(patient_labels, batch_size, grid_size=128):
def image_mask_generator():
while True:
images, masks = random_select_from_each_patient(
patient_labels, tumour_class_probability=0.5)
for image, mask in zip(images, masks):
yield (
tf.convert_to_tensor(image[:,:,None], dtype=tf.float32) / 255 * 2 - 1,
tf.convert_to_tensor(mask[:,:,None], dtype=tf.float32) / 255 * 2 - 1
)
generator_params = (
(tf.float32, tf.float32),
(tf.TensorShape([grid_size, grid_size, 1]), tf.TensorShape([grid_size, grid_size, 1]))
)
dataset = tf.data.Dataset.from_generator(
image_mask_generator, *generator_params
)
dataset = dataset.batch(batch_size)
return dataset
training_dataset = create_pipeline_dataset(training, batch_size)
validation_dataset = create_pipeline_dataset(validation, len(validation))
# -
for image, mask in training_dataset.take(1):
print(image.shape)
print(mask.shape)
# +
# random_select_from_each_patient()
# +
# random_select_from_each_patient()
# +
def display_first_of_batch(image, mask):
plt.figure(figsize=(10,10))
plt.imshow(image[0,:,:,0], vmin=-1, vmax=1)
contours = get_contours_from_mask(mask[0,:,:,0], contour_level=0)
for contour in contours:
plt.plot(*contour.T, 'r', lw=3)
for image, mask in training_dataset.take(1):
display_first_of_batch(image, mask)
# -
# +
def encode(x, convs, filters, kernel, drop=False, pool=True, norm=True):
# Convolution
for _ in range(convs):
x = tf.keras.layers.Conv2D(
filters, kernel, padding="same", kernel_initializer="he_normal"
)(x)
if norm is True:
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
# Skips
skip = x
# Regularise and down-sample
if drop is True:
x = tf.keras.layers.Dropout(0.2)(x)
if pool is True:
x = tf.keras.layers.Conv2D(
filters,
kernel,
strides=2,
padding="same",
kernel_initializer="he_normal",
)(x)
if norm is True:
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
return x, skip
def decode(x, skip, convs, filters, kernel, drop=False, norm=False):
# Up-convolution
x = tf.keras.layers.Conv2DTranspose(
filters, kernel, strides=2, padding="same", kernel_initializer="he_normal"
)(x)
if norm is True:
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
# Concat with skip
x = tf.keras.layers.concatenate([skip, x], axis=3)
# Convolution
for _ in range(convs):
x = tf.keras.layers.Conv2D(
filters, kernel, padding="same", kernel_initializer="he_normal"
)(x)
if norm is True:
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Activation("relu")(x)
if drop is True:
x = tf.keras.layers.Dropout(0.2)(x)
return x
def create_network(grid_size=128, output_channels=1):
inputs = tf.keras.layers.Input((grid_size, grid_size, 1))
encoder_args = [
# convs, filter, kernel, drop, pool, norm
(2, 32, 3, False, True, True), # 64, 32
(2, 64, 3, False, True, True), # 32, 64
(2, 128, 3, False, True, True), # 16, 128
(2, 256, 3, False, True, True), # 8, 256
]
decoder_args = [
# convs, filter, kernel, drop, norm
(2, 128, 3, True, True), # 16, 512
(2, 64, 3, True, True), # 32, 256
(2, 32, 3, False, True), # 64, 128
(2, 16, 3, False, True), # 128, 64
]
x = inputs
skips = []
for args in encoder_args:
x, skip = encode(x, *args)
skips.append(skip)
skips.reverse()
for skip, args in zip(skips, decoder_args):
x = decode(x, skip, *args)
outputs = tf.keras.layers.Conv2D(
output_channels,
1,
activation="sigmoid",
padding="same",
kernel_initializer="he_normal",
)
x = outputs(x)
return tf.keras.Model(inputs=inputs, outputs=x)
# +
tf.keras.backend.clear_session()
model = create_network()
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.MeanAbsoluteError(),
metrics=['accuracy']
)
# +
# tf.keras.utils.plot_model(model, show_shapes=True, dpi=64)
# +
# model.summary()
# +
def show_a_prediction():
for image, mask in training_dataset.take(10):
plt.figure(figsize=(10,10))
plt.imshow(image[0,:,:,0], vmin=-1, vmax=1)
contours = get_contours_from_mask(mask[0,:,:,0], contour_level=0)
for contour in contours:
plt.plot(*contour.T, 'k--', lw=1)
predicted_mask = model.predict(image[0:1, :, :, 0:1])
predicted_contours = get_contours_from_mask(predicted_mask[0,:,:,0], contour_level=0)
for contour in predicted_contours:
plt.plot(*contour.T, 'r', lw=3)
plt.show()
show_a_prediction()
# -
class DisplayCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
IPython.display.clear_output(wait=True)
show_a_prediction()
# +
EPOCHS = 5
STEPS_PER_EPOCH = 1
VALIDATION_STEPS = 1
model_history = model.fit(
training_dataset, epochs=EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
validation_steps=VALIDATION_STEPS,
validation_data=validation_dataset,
callbacks=[
DisplayCallback(),
# tensorboard_callback
],
use_multiprocessing=True,
shuffle=False,
)
# -
| prototyping/auto-segmentation/sb/04-mini-data/050-begin-creating-unet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# Panel can be used to make a first pass at an app or dashboard in minutes, while also allowing you to fully customize the app's behavior and appearance or flexibly integrate GUI support into long-term, large-scale software projects. To make all these different ways of using Panel possible, four different APIs are available:
#
# * **Interact functions**: Auto-generates a full UI (including widgets) given a function
# * **Reactive functions**: Linking functions or methods to widgets using the ``pn.depends`` decorator, declaring that the function should be re-run when those widget values change
# * **Parameterized class**: Declare parameters and their ranges in Parameterized classes, then get GUIs (and value checking!) for free
# * **Callbacks**: Generate a UI by manually declaring callbacks that update panels or panes
#
# Each of these APIs has its own benefits and drawbacks, so this section will go through each one in turn, while working through an example app and pointing out the benefits and drawbacks along the way. For a quick overview you can also review the API gallery examples, e.g. the [stocks_hvplot](../gallery/apis/stocks_hvplot.ipynb) app.
#
# To start with let us define some imports, load the `autompg` dataset, and define a plotting function we will be reusing throughout this user guide.
# +
import hvplot.pandas
from bokeh.sampledata.autompg import autompg
def autompg_plot(x='mpg', y='hp', color='#058805'):
return autompg.hvplot.scatter(x, y, c=color, padding=0.1)
columns = list(autompg.columns[:-2])
# -
# Given values for the x and y axes and a color, this function can be used to generate an interactive plot without needing any Panel components:
autompg_plot()
# But if we want to let a user control the axes and the color with widgets rather than writing Python code, we can use one of the Panel APIs as shown in the rest of the notebook.
import panel as pn
pn.extension()
# ## Interact Functions
#
# The ``interact`` function will automatically generate a UI (including widgets) by inspecting the arguments of the function given to it or using additional hints you provide in the ``interact`` function call. If you have worked with the [``ipywidgets``](https://github.com/jupyter-widgets/ipywidgets) package you may already be familiar with this approach (in fact the implementation is modeled on that reference implementation). The basic idea is that given a function that returns some object, Panel will inspect the arguments to that function, try to infer appropriate widgets for those arguments, and then re-run that function to update the output whenever one of the widgets generates an event. For more detail on how interact creates widgets and other ways of using it, see the Panel [interact user guide](./interact.ipynb). This section instead focuses on when and why to use this API, laying out its benefits and drawbacks.
#
# The main benefit of this approach is convenience and ease of use. You start by writing some function that returns an object, be that a plot, a dataframe, or anything else that Panel can render. Then with a single call to `pn.interact()`, you can immediately get an interactive UI. Unlike ipywidgets, the ``pn.interact`` call will return a Panel, which can then be further modified by laying out the widgets and output separately or combining these components with other panes if you wish. Thus even though `pn.interact` itself is limited in flexibility, you can unpack and reconfigure the results from it to generate fairly complex GUIs in very little code.
#
# #### Pros:
#
# + Easy to use.
# + Doesn't typically require modifying existing visualization code.
#
# #### Cons:
#
# - Most of the behavior is implicit, with magic happening by introspection, making it difficult to see how to modify the appearance or functionality of the resulting object.
# - Layouts can be customized, but requires indexing into the panel returned by `interact`.
#
# In the example below, ``pn.interact`` infers the initial value for `x` and `y` from the `autompg_plot` function default arguments and their widget type and range from the `columns` list provided to `interact`. `interact` would have no way of knowing that a color picker would be useful for the `color` argument (as each color is simply a string), and so here we explicitly create a color-picker widget and pass that as the value for the color so that we can control the color as well. Finally, we unpack the result from `interact` and rearrange it in a different layout with a title, to create the final app. See the Panel [interact user guide](./interact.ipynb) for details of how to control the widgets and how to rearrange the layout.
# +
color = pn.widgets.ColorPicker(name='Color', value='#4f4fdf')
layout = pn.interact(autompg_plot, x=columns, y=columns, color=color)
pn.Row(pn.Column('## MPG Explorer', layout[0]), layout[1])
# -
# ## Reactive Functions
#
# The reactive programming API is very similar to the ``interact`` function but makes it possible to explicitly declare the inputs to the function using the ``depends`` decorator and also makes the layout of the different components more explicit. The ``pn.depends`` decorator is a powerful way to declare the parameters a function depends on. By decorating a function with `pn.depends`, we declare that when those parameters change the function should be called with the new values of those parameters. This approach makes it very explicit which parameters the function depends on and ties it directly to the objects that control it. Once a function has been annotated in this way, it can be laid out alongside the widgets.
#
# #### Pros:
#
# + Very clear mapping from the inputs to the arguments of the function.
# + Very explicit layout of each of the different components.
#
# #### Cons:
#
# - Mixes the definition of the function with the GUI elements it depends on.
#
# In this model, we declare all the widgets we will need first, then declare a function linked to those widgets using the ``pn.depends`` decorator, and finally lay out the widgets and the ``autompg_plot`` function explicitly.
# +
x = pn.widgets.Select(value='mpg', options=columns, name='x')
y = pn.widgets.Select(value='hp', options=columns, name='y')
color = pn.widgets.ColorPicker(name='Color', value='#AA0505')
@pn.depends(x.param.value, y.param.value, color.param.value)
def autompg_plot(x, y, color):
return autompg.hvplot.scatter(x, y, c=color)
pn.Row(
pn.Column('## MPG Explorer', x, y, color),
autompg_plot)
# -
# ## Parameterized Classes
#
# The [Param](http://param.pyviz.org) library allows expressing the parameters of a class (or a hierarchy of classes) completely independently of a GUI implementation. Panel and other libraries can then take those parameter declarations and turn them into a GUI to control the parameters. This approach allows the parameters controlling some computation to be captured specifically and explicitly (but as abstract parameters, not as widgets). Then thanks to the ``param.depends`` decorator, it is then possible to directly express the dependencies between the parameters and the computation defined in some method on the class, all without ever importing Panel or any other GUI library. The resulting objects can then be used in both GUI and non-GUI contexts (batch computations, scripts, servers).
#
# The parameterized approach is a powerful way to encapsulate computation in self-contained classes, taking advantage of object-oriented programming patterns. It also makes it possible to express a problem completely independently from Panel or any other GUI code, while still getting a GUI for free as a last step. For more detail on using this approach see the [Param user guide](./Parameters.ipynb).
#
# Pros:
#
# + Declarative way of expressing parameters and dependencies between parameters and computation
# + The resulting code is not tied to any particular GUI framework and can be used in other contexts as well
#
# Cons:
#
# - Requires writing classes
# - Less explicit about widgets to use for each parameter; can be harder to customize behavior than if widgets are instantiated explicitly
#
# In this model we declare a subclass of ``param.Parameterized``, declare the parameters we want at the class level, make an instance of the class, and finally lay out the parameters and plot method of the class.
# +
import param
class MPGExplorer(param.Parameterized):
x = param.Selector(objects=columns)
y = param.Selector(default='hp', objects=columns)
color = param.Color(default='#0f0f0f')
@param.depends('x', 'y', 'color') # optional in this case
def plot(self):
return autompg_plot(self.x, self.y, self.color)
explorer = MPGExplorer()
pn.Row(explorer.param, explorer.plot)
# -
# ## Callbacks
#
# The callback API in Panel is the lowest-level approach, affording the greatest amount of flexibility but also quickly growing in complexity, because each new interactive behavior requires additional callbacks that can interact in complex ways. Nonetheless, callbacks are important to know about, and they can often be used to complement the other approaches. For instance, one specific callback could be used in addition to the more reactive approaches the other APIs provide.
#
# For more details on defining callbacks see the [Links user guide](./Links.ipynb).
#
# #### Pros:
#
# + Complete and modular control over specific events
#
# #### Cons:
#
# - Complexity grows very quickly with the number of callbacks
# - Have to handle initializing the plots separately
#
# In this approach we once again define the widgets. Unlike in other approaches we then have to define the actual layout, to ensure that the callback we define has something that it can update or replace. In this case we use a single callback to update the plot, but in many cases multiple callbacks might be required.
# +
x = pn.widgets.Select(value='mpg', options=columns, name='x')
y = pn.widgets.Select(value='hp', options=columns, name='y')
color = pn.widgets.ColorPicker(name='Color', value='#880588')
layout = pn.Row(
pn.Column('## MPG Explorer', x, y, color),
autompg_plot(x.value, y.value, color.value))
def update(event):
layout[1].object = autompg_plot(x.value, y.value, color.value)
x.param.watch(update, 'value')
y.param.watch(update, 'value')
color.param.watch(update, 'value')
layout
# -
# ## Summary
#
# As we have seen, each of these four APIs allows building the same basic application. The choice of the appropriate API depends very much on the use case. To build a quick throwaway GUI the ``interact`` approach can be completely sufficient. A more explicit, flexible, and maintainable version of that approach is to define a reactive function that links directly to a set of widgets. When writing libraries or other code that might be used independently of the actual GUI, a Parameterized class can be a great way to organize the code. Finally, if you need low-level control or want to complement any of the other approaches, defining explicit callbacks can be the best approach. Nearly all of the functionality of Panel can be accessed using any of the APIs, but each makes certain things much easier than others. Choosing the API is therefore a matter of considering the tradeoffs and of course also a matter of preference.
| examples/user_guide/APIs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import tarfile
from six.moves import urllib
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml/master/"
HOUSING_PATH = "datasets/housing"
HOUSING_URL = DOWNLOAD_ROOT + HOUSING_PATH + "/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
if not os.path.isdir(housing_path):
os.makedirs(housing_path)
tgz_path = os.path.join(housing_path, "housing.tgz")
urllib.request.urlretrieve(housing_url, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
fetch_housing_data()
# +
import pandas as pd
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
# -
housing = load_housing_data()
housing.describe()
# %matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20, 15))
plt.show()
# +
import numpy as np
import hashlib
def split_train_test(data, test_ratio):
shuffled_indices = np.random.permutation(len(data))
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
def test_set_check(identifier, test_ratio, hash):
return hash(np.int64(identifier)).digest()[-1] < 256 * test_ratio
def split_train_test_by_id(data, test_ratio, id_column, hash=hashlib.md5):
ids = data[id_column]
in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio, hash))
return data.loc[~in_test_set], data.loc[in_test_set]
# -
housing_with_id = housing.reset_index()
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "index")
print("Train set size: %d; Test set size %d" % (len(train_set), len(test_set)))
housing["income_cat"] = np.ceil(housing["median_income"] / 1.5)
housing["income_cat"].where(housing["income_cat"] < 5, 5.0, inplace=True)
housing["income_cat"].hist()
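# The ceil-and-cap construction above is just binning `median_income` into five income categories; the same result comes out of `pd.cut`. A sketch on a made-up income series (the values are illustrative, not from the dataset):

```python
import numpy as np
import pandas as pd

income = pd.Series([0.5, 1.4, 2.9, 4.6, 7.2, 15.0])

# Ceil-and-cap construction, as in the notebook (non-inplace variant)
cat_a = np.ceil(income / 1.5)
cat_a = cat_a.where(cat_a < 5, 5.0)

# Equivalent explicit binning with pd.cut
cat_b = pd.cut(income,
               bins=[0.0, 1.5, 3.0, 4.5, 6.0, np.inf],
               labels=[1, 2, 3, 4, 5]).astype(float)

assert (cat_a == cat_b).all()
```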
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing["income_cat"]):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
for set_ in (strat_train_set, strat_test_set):
set_.drop("income_cat", axis=1, inplace=True)
housing = strat_train_set.copy()
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.4,
s=housing["population"]/100, label="population", figsize=(10,7),
c="median_house_value", cmap=plt.get_cmap("jet"), colorbar=True)
plt.legend()
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
# +
from pandas.plotting import scatter_matrix
interesting_attributes = ["median_house_value", "median_income", "total_rooms", "housing_median_age"]
scatter_matrix(housing[interesting_attributes], figsize=(32, 8))
plt.show()
# -
housing.plot(kind="scatter", x="median_income", y="median_house_value", alpha=0.1)
housing["rooms_per_household"] = housing["total_rooms"]/housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"]/housing["total_rooms"]
housing["population_per_household"] = housing["population"]/housing["households"]
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
housing = strat_train_set.drop("median_house_value", axis=1)
housing_labels = strat_train_set["median_house_value"].copy()
from sklearn.preprocessing import Imputer
imputer = Imputer(strategy="median")
housing_num = housing.drop("ocean_proximity", axis=1)
imputer.fit(housing_num)
X = imputer.transform(housing_num)
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
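# Note that `sklearn.preprocessing.Imputer`, used above, was removed in scikit-learn 0.22; on recent versions the equivalent is `sklearn.impute.SimpleImputer`. A sketch on toy data (the tiny DataFrame is made up for illustration):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({"a": [1.0, np.nan, 3.0],
                   "b": [4.0, 5.0, np.nan]})
imputer = SimpleImputer(strategy="median")
filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

assert filled.isna().sum().sum() == 0
print(filled.loc[1, "a"])  # 2.0, the median of [1, 3]
```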
# +
from sklearn.base import BaseEstimator, TransformerMixin
rooms_ix, bedrooms_ix, population_ix, household_ix = 3, 4, 5, 6
class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
def __init__(self, add_bedrooms_per_room=True):
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
rooms_per_household = X[:, rooms_ix] / X[:, household_ix]
population_per_household = X[:, population_ix] / X[:, household_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
# -
attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(housing.values)
# +
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
num_pipeline = Pipeline([
('imputer', Imputer(strategy="median")),
('attribs_adder', CombinedAttributesAdder()),
('std_scale', StandardScaler())
])
housing_num_tr = num_pipeline.fit_transform(housing_num)
# -
from sklearn.base import BaseEstimator, TransformerMixin
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
return X[self.attribute_names].values
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
num_pipeline = Pipeline([
('selector', DataFrameSelector(num_attribs)),
('imputer', Imputer(strategy="median")),
('attrib_adder', CombinedAttributesAdder()),
('std_scale', StandardScaler()),
])
# +
# Definition of the CategoricalEncoder class, copied from PR #9151.
# Just run this cell, or copy it to your code, do not try to understand it (yet).
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.utils import check_array
from sklearn.preprocessing import LabelEncoder
from scipy import sparse
class CategoricalEncoder(BaseEstimator, TransformerMixin):
"""Encode categorical features as a numeric array.
The input to this transformer should be a matrix of integers or strings,
denoting the values taken on by categorical (discrete) features.
The features can be encoded using a one-hot aka one-of-K scheme
(``encoding='onehot'``, the default) or converted to ordinal integers
(``encoding='ordinal'``).
This encoding is needed for feeding categorical data to many scikit-learn
estimators, notably linear models and SVMs with the standard kernels.
Read more in the :ref:`User Guide <preprocessing_categorical_features>`.
Parameters
----------
encoding : str, 'onehot', 'onehot-dense' or 'ordinal'
The type of encoding to use (default is 'onehot'):
- 'onehot': encode the features using a one-hot aka one-of-K scheme
(or also called 'dummy' encoding). This creates a binary column for
each category and returns a sparse matrix.
- 'onehot-dense': the same as 'onehot' but returns a dense array
instead of a sparse matrix.
- 'ordinal': encode the features as ordinal integers. This results in
a single column of integers (0 to n_categories - 1) per feature.
categories : 'auto' or a list of lists/arrays of values.
Categories (unique values) per feature:
- 'auto' : Determine categories automatically from the training data.
- list : ``categories[i]`` holds the categories expected in the ith
column. The passed categories are sorted before encoding the data
(used categories can be found in the ``categories_`` attribute).
dtype : number type, default np.float64
Desired dtype of output.
handle_unknown : 'error' (default) or 'ignore'
Whether to raise an error or ignore if an unknown categorical feature is
present during transform (default is to raise). When this parameter
is set to 'ignore' and an unknown category is encountered during
transform, the resulting one-hot encoded columns for this feature
will be all zeros.
Ignoring unknown categories is not supported for
``encoding='ordinal'``.
Attributes
----------
categories_ : list of arrays
The categories of each feature determined during fitting. When
categories were specified manually, this holds the sorted categories
(in order corresponding with output of `transform`).
Examples
--------
Given a dataset with three features and two samples, we let the encoder
find the maximum value per feature and transform the data to a binary
one-hot encoding.
>>> from sklearn.preprocessing import CategoricalEncoder
>>> enc = CategoricalEncoder(handle_unknown='ignore')
>>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])
... # doctest: +ELLIPSIS
CategoricalEncoder(categories='auto', dtype=<... 'numpy.float64'>,
encoding='onehot', handle_unknown='ignore')
>>> enc.transform([[0, 1, 1], [1, 0, 4]]).toarray()
array([[ 1., 0., 0., 1., 0., 0., 1., 0., 0.],
[ 0., 1., 1., 0., 0., 0., 0., 0., 0.]])
See also
--------
sklearn.preprocessing.OneHotEncoder : performs a one-hot encoding of
integer ordinal features. The ``OneHotEncoder`` assumes that input
features take on values in the range ``[0, max(feature)]`` instead of
using the unique values.
sklearn.feature_extraction.DictVectorizer : performs a one-hot encoding of
dictionary items (also handles string-valued features).
sklearn.feature_extraction.FeatureHasher : performs an approximate one-hot
encoding of dictionary items or strings.
"""
def __init__(self, encoding='onehot', categories='auto', dtype=np.float64,
handle_unknown='error'):
self.encoding = encoding
self.categories = categories
self.dtype = dtype
self.handle_unknown = handle_unknown
def fit(self, X, y=None):
"""Fit the CategoricalEncoder to X.
Parameters
----------
X : array-like, shape [n_samples, n_feature]
The data to determine the categories of each feature.
Returns
-------
self
"""
if self.encoding not in ['onehot', 'onehot-dense', 'ordinal']:
template = ("encoding should be either 'onehot', 'onehot-dense' "
"or 'ordinal', got %s")
raise ValueError(template % self.encoding)
if self.handle_unknown not in ['error', 'ignore']:
template = ("handle_unknown should be either 'error' or "
"'ignore', got %s")
raise ValueError(template % self.handle_unknown)
if self.encoding == 'ordinal' and self.handle_unknown == 'ignore':
raise ValueError("handle_unknown='ignore' is not supported for"
" encoding='ordinal'")
X = check_array(X, dtype=np.object, accept_sparse='csc', copy=True)
n_samples, n_features = X.shape
self._label_encoders_ = [LabelEncoder() for _ in range(n_features)]
for i in range(n_features):
le = self._label_encoders_[i]
Xi = X[:, i]
if self.categories == 'auto':
le.fit(Xi)
else:
valid_mask = np.in1d(Xi, self.categories[i])
if not np.all(valid_mask):
if self.handle_unknown == 'error':
diff = np.unique(Xi[~valid_mask])
msg = ("Found unknown categories {0} in column {1}"
" during fit".format(diff, i))
raise ValueError(msg)
le.classes_ = np.array(np.sort(self.categories[i]))
self.categories_ = [le.classes_ for le in self._label_encoders_]
return self
def transform(self, X):
"""Transform X using one-hot encoding.
Parameters
----------
X : array-like, shape [n_samples, n_features]
The data to encode.
Returns
-------
X_out : sparse matrix or a 2-d array
Transformed input.
"""
X = check_array(X, accept_sparse='csc', dtype=np.object, copy=True)
n_samples, n_features = X.shape
X_int = np.zeros_like(X, dtype=np.int)
X_mask = np.ones_like(X, dtype=np.bool)
for i in range(n_features):
valid_mask = np.in1d(X[:, i], self.categories_[i])
if not np.all(valid_mask):
if self.handle_unknown == 'error':
diff = np.unique(X[~valid_mask, i])
msg = ("Found unknown categories {0} in column {1}"
" during transform".format(diff, i))
raise ValueError(msg)
else:
# Set the problematic rows to an acceptable value and
# continue. The rows are marked in `X_mask` and will be
# removed later.
X_mask[:, i] = valid_mask
X[:, i][~valid_mask] = self.categories_[i][0]
X_int[:, i] = self._label_encoders_[i].transform(X[:, i])
if self.encoding == 'ordinal':
return X_int.astype(self.dtype, copy=False)
mask = X_mask.ravel()
n_values = [cats.shape[0] for cats in self.categories_]
n_values = np.array([0] + n_values)
indices = np.cumsum(n_values)
column_indices = (X_int + indices[:-1]).ravel()[mask]
row_indices = np.repeat(np.arange(n_samples, dtype=np.int32),
n_features)[mask]
data = np.ones(n_samples * n_features)[mask]
out = sparse.csc_matrix((data, (row_indices, column_indices)),
shape=(n_samples, indices[-1]),
dtype=self.dtype).tocsr()
if self.encoding == 'onehot-dense':
return out.toarray()
else:
return out
# -
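# The `CategoricalEncoder` from PR #9151 was eventually merged into scikit-learn 0.20 as `OneHotEncoder` (which gained string-category support) and `OrdinalEncoder`, so on recent versions no copy-pasted class is needed. A sketch on a hypothetical mini-column:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

X = np.array([["<1H OCEAN"], ["INLAND"], ["<1H OCEAN"]])
encoder = OneHotEncoder(handle_unknown="ignore")
onehot = encoder.fit_transform(X).toarray()  # dense, like encoding='onehot-dense'

print(onehot.shape)  # (3, 2): two categories were seen during fit
assert (onehot.sum(axis=1) == 1).all()  # exactly one hot column per row
```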
cat_pipeline = Pipeline([
('selector', DataFrameSelector(cat_attribs)),
('label_binarizer', CategoricalEncoder(encoding="onehot-dense")),
])
from sklearn.pipeline import FeatureUnion
full_pipeline = FeatureUnion(transformer_list=[
("num_pipeline", num_pipeline),
("cat_pipeline", cat_pipeline),
])
housing_prepared = full_pipeline.fit_transform(housing)
# +
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
# -
some_data = housing.iloc[:5]
some_labels = housing_labels.iloc[:5]
some_data_prepared = full_pipeline.transform(some_data)
print("Predictions:", lin_reg.predict(some_data_prepared))
print("Labels:", list(some_labels))
from sklearn.metrics import mean_squared_error
housing_predictions = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_labels, housing_predictions)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
# +
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)
housing_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(housing_labels, housing_predictions)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
# +
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
tree_rmse_scores = np.sqrt(-scores)
def display_scores(scores):
print("Scores: ", scores)
print("Mean:", scores.mean())
print("Standard deviation:", scores.std())
display_scores(tree_rmse_scores)
# +
lin_scores = cross_val_score(lin_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
lin_rmse_scores = np.sqrt(-lin_scores)
display_scores(lin_rmse_scores)
# +
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
display_scores(forest_rmse_scores)
# -
from sklearn.neural_network import MLPRegressor
nn_reg = MLPRegressor(solver="lbfgs")
nn_reg.fit(housing_prepared, housing_labels)
housing_predictions = nn_reg.predict(housing_prepared)
nn_mse = mean_squared_error(housing_labels, housing_predictions)
nn_rmse = np.sqrt(nn_mse)
print("Score on training set:", nn_rmse)
nn_scores = cross_val_score(nn_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
nn_rmse_scores = np.sqrt(-nn_scores)
display_scores(nn_rmse_scores)
from sklearn.svm import SVR
svm_reg = SVR()
svm_reg.fit(housing_prepared, housing_labels)
housing_predictions = svm_reg.predict(housing_prepared)
svm_mse = mean_squared_error(housing_labels, housing_predictions)
svm_rmse = np.sqrt(svm_mse)
print("Score on training set:", svm_rmse)
svm_scores = cross_val_score(svm_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
svm_rmse_scores = np.sqrt(-svm_scores)
display_scores(svm_rmse_scores)
from sklearn.model_selection import GridSearchCV
param_grid = [
{
'bootstrap': [False],
'n_estimators': [3, 10, 30, 40, 50, 100],
'max_features': [4, 6, 8],
},
]
grid_search = GridSearchCV(forest_reg, param_grid, cv=5, scoring='neg_mean_squared_error')
grid_search.fit(housing_prepared, housing_labels)
grid_search.best_params_
# +
final_model = grid_search.best_estimator_
X_test = strat_test_set.drop("median_house_value", axis=1)
y_test = strat_test_set["median_house_value"].copy()
X_test_prepared = full_pipeline.transform(X_test)
final_predictions = final_model.predict(X_test_prepared)
final_mse = mean_squared_error(y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
final_rmse
# -
| Housing.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .java
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Java
// language: java
// name: java
// ---
// ## This demonstrates Tribuo regression for comparison with scikit-learn regression
// %jars ../../jars/tribuo-json-4.1.0-jar-with-dependencies.jar
// %jars ../../jars/tribuo-regression-liblinear-4.1.0-jar-with-dependencies.jar
// %jars ../../jars/tribuo-regression-sgd-4.1.0-jar-with-dependencies.jar
// %jars ../../jars/tribuo-regression-xgboost-4.1.0-jar-with-dependencies.jar
// %jars ../../jars/tribuo-regression-tree-4.1.0-jar-with-dependencies.jar
// %jars ../../jars/tribuo-regression-libsvm-4.1.0-jar-with-dependencies.jar
import java.nio.file.Paths;
import java.nio.file.Files;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.tribuo.*;
import org.tribuo.data.csv.CSVLoader;
import org.tribuo.datasource.ListDataSource;
import org.tribuo.evaluation.TrainTestSplitter;
import org.tribuo.math.optimisers.*;
import org.tribuo.regression.*;
import org.tribuo.regression.evaluation.*;
import org.tribuo.regression.liblinear.LibLinearRegressionTrainer;
import org.tribuo.regression.sgd.RegressionObjective;
import org.tribuo.regression.liblinear.LinearRegressionType;
import org.tribuo.regression.liblinear.LinearRegressionType.LinearType;
import org.tribuo.regression.sgd.linear.LinearSGDTrainer;
import org.tribuo.regression.sgd.objectives.SquaredLoss;
import org.tribuo.regression.rtree.CARTRegressionTrainer;
import org.tribuo.regression.libsvm.LibSVMRegressionTrainer;
import org.tribuo.regression.libsvm.SVMRegressionType.SVMMode;
import org.tribuo.regression.xgboost.XGBoostRegressionTrainer;
import org.tribuo.util.Util;
var regressionFactory = new RegressionFactory();
var csvLoader = new CSVLoader<>(regressionFactory);
// +
var startTime = System.currentTimeMillis();
// This dataset is prepared in the notebook: scikit-learn Regressor - Data Cleanup
// unzip cleanedCars.zip
// WARNING! This dataset takes a very long time to load.
// This issue is now resolved on the main branch, which looks like it will become version 4.2
var carsSource = csvLoader.loadDataSource(Paths.get("../../data/cleanedCars.csv"), "price_usd");
var endTime = System.currentTimeMillis();
System.out.println("Loading took " + Util.formatDuration(startTime,endTime));
var startTime = System.currentTimeMillis();
var splitter = new TrainTestSplitter<>(carsSource, 0.8f, 0L);
var endTime = System.currentTimeMillis();
System.out.println("Splitting took " + Util.formatDuration(startTime,endTime));
Dataset<Regressor> trainData = new MutableDataset<>(splitter.getTrain());
Dataset<Regressor> evalData = new MutableDataset<>(splitter.getTest());
System.out.println(String.format("Training data size = %d, number of features = %d",trainData.size(),trainData.getFeatureMap().size()));
System.out.println(String.format("Testing data size = %d, number of features = %d",evalData.size(),evalData.getFeatureMap().size()));
// -
public Model<Regressor> train(String name, Trainer<Regressor> trainer, Dataset<Regressor> trainData) {
// Train the model
var startTime = System.currentTimeMillis();
Model<Regressor> model = trainer.train(trainData);
var endTime = System.currentTimeMillis();
System.out.println("Training " + name + " took " + Util.formatDuration(startTime,endTime));
// Evaluate the model on the training data
// This is a useful debugging tool to check the model actually learned something
RegressionEvaluator eval = new RegressionEvaluator();
var evaluation = eval.evaluate(model,trainData);
// We create a dimension here to aid pulling out the appropriate statistics.
// You can also produce the String directly by calling "evaluation.toString()"
var dimension = new Regressor("DIM-0",Double.NaN);
// Don't report training scores
//System.out.printf("Evaluation (train):%n RMSE %f%n MAE %f%n R^2 %f%n",
// evaluation.rmse(dimension), evaluation.mae(dimension), evaluation.r2(dimension));
return model;
}
public void evaluate(Model<Regressor> model, Dataset<Regressor> testData) {
// Evaluate the model on the test data
RegressionEvaluator eval = new RegressionEvaluator();
var evaluation = eval.evaluate(model,testData);
// We create a dimension here to aid pulling out the appropriate statistics.
// You can also produce the String directly by calling "evaluation.toString()"
var dimension = new Regressor("DIM-0",Double.NaN);
System.out.printf("Evaluation (test):%n RMSE: %f%n MAE: %f%n R^2: %f%n",
evaluation.rmse(dimension), evaluation.mae(dimension), evaluation.r2(dimension));
}
// +
var lrsgd = new LinearSGDTrainer(
new SquaredLoss(), // loss function
SGD.getLinearDecaySGD(0.01), // gradient descent algorithm
50, // number of training epochs
trainData.size()/4, // logging interval
1, // minibatch size
1L // RNG seed
);
//var lr = new LibLinearRegressionTrainer();
var lr = new LibLinearRegressionTrainer(
new LinearRegressionType(LinearType.L2R_L2LOSS_SVR),
1.0, // cost penalty
1000, // max iterations
0.1, // termination criteria
0.1 // epsilon
);
var cart = new CARTRegressionTrainer(10);
var xgb = new XGBoostRegressionTrainer(75);
// -
System.out.println(lrsgd.toString());
System.out.println(lr.toString());
System.out.println(cart.toString());
System.out.println(xgb.toString());
// +
var lrsgdModel = train("Linear Regression (SGD)", lrsgd, trainData);
// run 1
// time 10.59 s
// run 2
// time 9.96 s
// run 3
// time 9.11 s
// +
evaluate(lrsgdModel,evalData);
// run 1
// RMSE: NaN
// MAE: NaN
// R^2: NaN
// run 2
// RMSE: NaN
// MAE: NaN
// R^2: NaN
// run 3
// RMSE: NaN
// MAE: NaN
// R^2: NaN
// -
// +
var lrModel = train("Linear Regression",lr,trainData);
// run 1
// time 6.60 s
// run 2
// time 6.96 s
// run 3
// time 6.92 s
// +
evaluate(lrModel,evalData);
// run 1
// RMSE: 4125.63
// MAE: 2624.56
// R^2: 0.59
// run 2
// RMSE: 4125.63
// MAE: 2624.56
// R^2: 0.59
// run 3
// RMSE: 4125.63
// MAE: 2624.56
// R^2: 0.59
// -
// +
var cartModel = train("CART",cart,trainData);
// run 1
// time 7.41 s
// run 2
// time 7.59 s
// run 3
// time 8.07 s
// +
evaluate(cartModel,evalData);
// run 1
// RMSE: 2453.70
// MAE: 1469.94
// R^2: 0.86
// run 2
// RMSE: 2453.70
// MAE: 1469.94
// R^2: 0.86
// run 3
// RMSE: 2453.70
// MAE: 1469.94
// R^2: 0.86
// -
// +
var xgbModel = train("XGBoost", xgb, trainData);
// run 1
// time 2min 31s
// run 2
// time 2min 29s
// run 3
// time 2min 26s
// +
evaluate(xgbModel, evalData);
// run 1
// RMSE: 1883.17
// MAE: 1164.05
// R^2: 0.92
// run 2
// RMSE: 1883.17
// MAE: 1164.05
// R^2: 0.92
// run 3
// RMSE: 1883.17
// MAE: 1164.05
// R^2: 0.92
// -
// +
// Setup parameters for SVR
import com.oracle.labs.mlrg.olcut.config.Option;
import com.oracle.labs.mlrg.olcut.config.Options;
import org.tribuo.common.libsvm.KernelType;
import org.tribuo.common.libsvm.SVMParameters;
import org.tribuo.regression.libsvm.SVMRegressionType;
public class LibSVMOptions implements Options {
@Override
public String getOptionsDescription() {
return "Trains and tests a LibSVM regression model on the specified datasets.";
}
@Option(longName="coefficient",usage="Intercept in kernel function.")
public double coeff = 1.0;
@Option(charName='d',longName="degree",usage="Degree in polynomial kernel.")
public int degree = 3;
@Option(charName='g',longName="gamma",usage="Gamma value in kernel function.")
public double gamma = 0.0;
@Option(charName='k',longName="kernel",usage="Type of SVM kernel.")
public KernelType kernelType = KernelType.RBF;
@Option(charName='t',longName="type",usage="Type of SVM.")
public SVMRegressionType.SVMMode svmType = SVMMode.EPSILON_SVR;
@Option(longName="standardize",usage="Standardize the regression outputs internally to the SVM")
public boolean standardize = false;
}
// +
// setup for SVR trainer
var svmOptions = new LibSVMOptions();
var parameters = new SVMParameters<>(new SVMRegressionType(svmOptions.svmType),
svmOptions.kernelType);
parameters.setGamma(0.0);
parameters.setCoeff(1.0);
parameters.setDegree(3);
var svr = new LibSVMRegressionTrainer(parameters, false);
// -
| notebooks/regressor/Tribuo Regressor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Random forest model using a single day's observations
# ## Import libraries
# Accelerate scikit learn
from sklearnex import patch_sklearn
patch_sklearn()
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
from sklearn.ensemble import RandomForestClassifier
# ## Load data
data = pd.read_csv('../output/sim_1_zero_onset.csv')
# ## Random forest model fitted to individual days
# +
results = []
days = np.arange(0,50,1)
for day in days:
mask = data['day'] == day
day_data = data[mask]
# Remove infants who have died
mask = day_data['died'] == 0
day_data = day_data[mask]
# Use bootstrap k-folds
for k in range(30):
# Split into train and test
ids = list(set(day_data['patient_id']))
random.shuffle(ids)
count_ids = len(ids)
train_count = int(count_ids * 0.8)
train_ids = ids[0:train_count]
test_ids = ids[train_count:]
f = lambda x: x in train_ids
mask = day_data['patient_id'].map(f)
train = day_data[mask]
f = lambda x: x in test_ids
mask = day_data['patient_id'].map(f)
test = day_data[mask]
# Get X and y
X_fields = ['gi', 'pulmonary', 'brain']
X_train = train[X_fields]
X_test = test[X_fields]
y_train = train['condition']
y_test = test['condition']
# Fit model
model = RandomForestClassifier(n_jobs=-1)
model.fit(X_train,y_train)
# Get accuracy and probabilities
y_pred_test = model.predict(X_test)
y_pred_prob = []
prob = model.predict_proba(X_test)
for indx, p in enumerate(prob):
y_pred_prob.append(p[y_pred_test[indx]])
accuracy = np.mean(y_pred_test == y_test)
mean_probability = np.mean(y_pred_prob)
day_results = dict()
day_results['day'] = day
day_results['k_fold'] = k
day_results['accuracy'] = accuracy
day_results['mean_probability'] = mean_probability
results.append(day_results)
results = pd.DataFrame(results)
# Calculate mean results
cols_without_kfold = list(results); cols_without_kfold.remove('k_fold')
cols_to_average = ['day']
av_results = results[cols_without_kfold].groupby(cols_to_average).mean()
av_results = av_results.reset_index()
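# The hand-rolled shuffle of `patient_id`s above guarantees that no infant contributes rows to both train and test. scikit-learn's `GroupShuffleSplit` implements the same grouped split directly; a sketch on synthetic data (the tiny DataFrame is made up for illustration):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(10), 3),  # 10 patients, 3 rows each
    "x": np.random.rand(30),
})
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(df, groups=df["patient_id"]))

train_patients = set(df.iloc[train_idx]["patient_id"])
test_patients = set(df.iloc[test_idx]["patient_id"])
# No patient appears on both sides of the split
assert train_patients.isdisjoint(test_patients)
```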
# +
# Set up figure
fig = plt.figure(figsize=(5,5), facecolor='w')
ax = fig.add_subplot(111)
ax.plot(av_results['day'], av_results['accuracy']*100, label='Accuracy')
ax.plot(av_results['day'], av_results['mean_probability']*100,
label='Reported probability of predicted class')
ax.set_xlabel('Day after condition starts')
ax.set_ylabel('Learning machine performance (%)')
ax.set_title('Random forest using data on day')
plt.grid()
plt.legend()
plt.savefig('./rf_single_day.png', dpi=300)
plt.show()
| sim_1/ml_models/01_day_zero_rf_singleday.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Measuring Cosine Similarity between Document Vectors
import nltk
nltk.download('stopwords')
nltk.download('wordnet')
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.wordnet import WordNetLemmatizer
import pandas as pd
import re
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
# ## Building a corpus of sentences
sentences = ["We are reading about Natural Language Processing Here",
"Natural Language Processing making computers comprehend language data",
"The field of Natural Language Processing is evolving everyday"]
corpus = pd.Series(sentences)
corpus
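# Before building the full preprocessing pipeline, here is the end goal in miniature: vectorize documents and compare them with cosine similarity. This sketch uses a hypothetical three-document corpus, not the one above:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["natural language processing",
        "processing language data",
        "completely unrelated text"]
vectors = CountVectorizer().fit_transform(docs)
sims = cosine_similarity(vectors)

# Every document is maximally similar to itself,
# and overlapping documents score higher than disjoint ones
assert abs(sims[0, 0] - 1.0) < 1e-9
assert sims[0, 1] > sims[0, 2]
```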
# ## Data preprocessing pipeline
def text_clean(corpus, keep_list):
'''
Purpose : Function to keep only alphabets, digits and certain words (punctuations, qmarks, tabs etc. removed)
Input : Takes a text corpus, 'corpus' to be cleaned along with a list of words, 'keep_list', which have to be retained
even after the cleaning process
Output : Returns the cleaned text corpus
'''
cleaned_corpus = []
for row in corpus:
qs = []
for word in row.split():
if word not in keep_list:
p1 = re.sub(pattern='[^a-zA-Z0-9]', repl=' ', string=word)
p1 = p1.lower()
qs.append(p1)
else: qs.append(word)
cleaned_corpus.append(' '.join(qs))
return pd.Series(cleaned_corpus)
def lemmatize(corpus):
    lem = WordNetLemmatizer()
    corpus = [[lem.lemmatize(word, pos='v') for word in doc] for doc in corpus]
    return corpus
def stem(corpus, stem_type=None):
    if stem_type == 'snowball':
        stemmer = SnowballStemmer(language='english')
    else:
        stemmer = PorterStemmer()
    corpus = [[stemmer.stem(word) for word in doc] for doc in corpus]
    return corpus
def stopwords_removal(corpus):
    # keep the wh-words, which often carry meaning in question-style text
    wh_words = ['who', 'what', 'when', 'why', 'how', 'which', 'where', 'whom']
    stop = set(stopwords.words('english'))
    for word in wh_words:
        stop.remove(word)
    corpus = [[word for word in doc.split() if word not in stop] for doc in corpus]
    return corpus
def preprocess(corpus, keep_list, cleaning=True, stemming=False, stem_type=None, lemmatization=False, remove_stopwords=True):
    '''
    Purpose : Function to perform all pre-processing tasks (cleaning, stemming, lemmatization, stopword removal etc.)
    Input :
    'corpus' - Text corpus on which pre-processing tasks will be performed
    'keep_list' - List of words to be retained during the cleaning process
    'cleaning', 'stemming', 'lemmatization', 'remove_stopwords' - Boolean variables indicating whether a particular task should
                                                                  be performed or not
    'stem_type' - Choose between the Porter stemmer and the Snowball (Porter2) stemmer. Default is None, which corresponds to
                  the Porter stemmer; 'snowball' corresponds to the Snowball stemmer
    Note : Use either stemming or lemmatization; there is no benefit in applying both together
    Output : Returns the processed text corpus
    '''
    if cleaning:
        corpus = text_clean(corpus, keep_list)
    if remove_stopwords:
        corpus = stopwords_removal(corpus)
    else:
        corpus = [doc.split() for doc in corpus]
    if lemmatization:
        corpus = lemmatize(corpus)
    if stemming:
        corpus = stem(corpus, stem_type)
    corpus = [' '.join(doc) for doc in corpus]
    return corpus
# Preprocessing with lemmatization
preprocessed_corpus = preprocess(corpus, keep_list = [], stemming = False, stem_type = None,
lemmatization = True, remove_stopwords = True)
preprocessed_corpus
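# The ordering of the steps in the pipeline (clean, then remove stop words, then lemmatize or stem) can be sketched with a small, dependency-free toy version. Note that `toy_stem` and `TOY_STOPWORDS` are hypothetical stand-ins for NLTK's stemmers and stop-word list, used for illustration only:

```python
import re

# Toy stand-ins (NOT NLTK): a naive suffix stripper and a tiny stop-word set,
# used only to illustrate the clean -> stop-word removal -> stem ordering.
TOY_STOPWORDS = {'the', 'of', 'is', 'are', 'we', 'about'}

def toy_stem(word):
    # Strip a common suffix if enough of the word remains afterwards.
    for suffix in ('ing', 'ly', 'ed', 's'):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def toy_preprocess(sentence):
    # Same ordering as preprocess(): clean -> tokenize -> stop-word removal -> stem.
    cleaned = re.sub('[^a-zA-Z0-9]', ' ', sentence).lower()
    tokens = [w for w in cleaned.split() if w not in TOY_STOPWORDS]
    return ' '.join(toy_stem(w) for w in tokens)

print(toy_preprocess("The field of Natural Language Processing is evolving everyday"))
```

# A real pipeline would use NLTK's PorterStemmer or WordNetLemmatizer as above; the toy stemmer only shows why order matters (stemming before stop-word removal would leave stems that no longer match the stop list).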
# ## Cosine Similarity Calculation
def cosine_similarity(vector1, vector2):
vector1 = np.array(vector1)
vector2 = np.array(vector2)
return np.dot(vector1, vector2) / (np.sqrt(np.sum(vector1**2)) * np.sqrt(np.sum(vector2**2)))
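# Two quick sanity checks on the formula (the function is restated here so the snippet runs standalone): orthogonal vectors score 0, and parallel vectors score 1 regardless of magnitude, which is why cosine similarity ignores document length.

```python
import numpy as np

# Same formula as cosine_similarity above: dot product over the product of norms.
def cosine_similarity(vector1, vector2):
    vector1, vector2 = np.array(vector1), np.array(vector2)
    return np.dot(vector1, vector2) / (np.sqrt(np.sum(vector1**2)) * np.sqrt(np.sum(vector2**2)))

print(cosine_similarity([1, 0], [0, 1]))  # orthogonal -> 0
print(cosine_similarity([1, 1], [3, 3]))  # parallel, different lengths -> 1
```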
# ## CountVectorizer
vectorizer = CountVectorizer()
bow_matrix = vectorizer.fit_transform(preprocessed_corpus)
print(vectorizer.get_feature_names_out())  # get_feature_names() was removed in scikit-learn 1.2
print(bow_matrix.toarray())
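# Conceptually, CountVectorizer builds a sorted vocabulary and counts each term per document. A minimal hand-rolled sketch on a hypothetical two-document corpus (ignoring CountVectorizer's extra lowercasing and token-pattern rules):

```python
from collections import Counter

# Hypothetical toy corpus, already lowercased and tokenisable by whitespace.
docs = ["natural language processing", "language data"]
vocab = sorted({word for doc in docs for word in doc.split()})
bow = [[Counter(doc.split())[term] for term in vocab] for doc in docs]
print(vocab)  # sorted vocabulary, one column per term
print(bow)    # one count row per document
```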
# ## Cosine similarity between the document vectors built using CountVectorizer
bow_dense = bow_matrix.toarray()  # densify once instead of inside the loop
for i in range(bow_dense.shape[0]):
    for j in range(i + 1, bow_dense.shape[0]):
        print("The cosine similarity between the documents ", i, "and", j, "is: ",
              cosine_similarity(bow_dense[i], bow_dense[j]))
# ## TfidfVectorizer
vectorizer = TfidfVectorizer()
tf_idf_matrix = vectorizer.fit_transform(preprocessed_corpus)
print(vectorizer.get_feature_names_out())  # get_feature_names() was removed in scikit-learn 1.2
print(tf_idf_matrix.toarray())
print("\nThe shape of the TF-IDF matrix is: ", tf_idf_matrix.shape)
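# With its default settings, TfidfVectorizer uses the smoothed formula idf(t) = ln((1 + n_docs) / (1 + df(t))) + 1 and then l2-normalises each document row. A hand-rolled numpy sketch on a hypothetical toy count matrix:

```python
import numpy as np

# Toy bag-of-words counts for two hypothetical documents over four terms.
counts = np.array([[0, 1, 1, 1],
                   [1, 1, 0, 0]], dtype=float)
n_docs = counts.shape[0]
df = (counts > 0).sum(axis=0)              # document frequency per term
idf = np.log((1 + n_docs) / (1 + df)) + 1  # a term present in every doc gets idf exactly 1
tf_idf = counts * idf
tf_idf /= np.linalg.norm(tf_idf, axis=1, keepdims=True)  # l2-normalise each row
print(np.round(tf_idf, 3))
```

# The normalisation is why cosine similarity on TF-IDF rows reduces to a plain dot product: every row already has unit length.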
# ## Cosine similarity between the document vectors built using TfidfVectorizer
tf_idf_dense = tf_idf_matrix.toarray()  # densify once instead of inside the loop
for i in range(tf_idf_dense.shape[0]):
    for j in range(i + 1, tf_idf_dense.shape[0]):
        print("The cosine similarity between the documents ", i, "and", j, "is: ",
              cosine_similarity(tf_idf_dense[i], tf_idf_dense[j]))
| Chapter04/Cosine Similarity.ipynb |