# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MNIST
# +
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
# %matplotlib inline
print("packages loaded")
# -
# # Download and Extract MNIST dataset
print ("Download and Extract MNIST dataset")
mnist = input_data.read_data_sets('data/', one_hot=True)
print()
print(" type of 'mnist' is %s" % (type(mnist)))
print(" number of train data is %d" % (mnist.train.num_examples))
print (" number of test data is %d" % (mnist.test.num_examples))
# What does the data of MNIST look like?
print ("What does the data of MNIST look like?")
trainimg = mnist.train.images
trainlabel = mnist.train.labels
testimg = mnist.test.images
testlabel = mnist.test.labels
print()
print (" type of 'trainimg' is %s" % (type(trainimg)))
print (" type of 'trainlabel' is %s" % (type(trainlabel)))
print (" type of 'testimg' is %s" % (type(testimg)))
print (" type of 'testlabel' is %s" % (type(testlabel)))
print (" shape of 'trainimg' is %s" % (trainimg.shape,))
print (" shape of 'trainlabel' is %s" % (trainlabel.shape,))
print (" shape of 'testimg' is %s" % (testimg.shape,))
print (" shape of 'testlabel' is %s" % (testlabel.shape,))
# +
# What does the training data look like?
print("What does the training data look like?")
nsample = 5
randidx = np.random.randint(trainimg.shape[0], size=nsample)
for i in randidx:
    curr_img = np.reshape(trainimg[i, :], (28, 28))  # 28 by 28 matrix
    curr_label = np.argmax(trainlabel[i, :])  # Label
    plt.matshow(curr_img, cmap=plt.get_cmap('gray'))
    plt.title(str(i) + "th Training Data, Label is " + str(curr_label))
    print(str(i) + "th Training Data, Label is " + str(curr_label))
# -
# Batch Learning?
print ("Batch Learning? ")
batch_size = 100
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
print ("type of 'batch_xs' is %s" % (type(batch_xs)))
print ("type of 'batch_ys' is %s" % (type(batch_ys)))
print ("shape of 'batch_xs' is %s" % (batch_xs.shape,))
print ("shape of 'batch_ys' is %s" % (batch_ys.shape,))
# Get Random Batch with 'np.random.randint'
print ("5. Get Random Batch with 'np.random.randint'")
randidx = np.random.randint(trainimg.shape[0], size=batch_size)
batch_xs2 = trainimg[randidx, :]
batch_ys2 = trainlabel[randidx, :]
print ("type of 'batch_xs2' is %s" % (type(batch_xs2)))
print ("type of 'batch_ys2' is %s" % (type(batch_ys2)))
print ("shape of 'batch_xs2' is %s" % (batch_xs2.shape,))
print ("shape of 'batch_ys2' is %s" % (batch_ys2.shape,))
randidx
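The same random-batch idea works on any pair of aligned arrays. A minimal sketch with synthetic stand-ins for `trainimg`/`trainlabel` (the array shapes here are chosen purely for illustration):

```python
import numpy as np

# Synthetic stand-ins for trainimg / trainlabel (shapes are illustrative)
images = np.random.rand(1000, 784)
labels = np.eye(10)[np.random.randint(10, size=1000)]  # one-hot labels

def random_batch(xs, ys, batch_size):
    # Sample row indices with replacement, then slice both arrays identically
    idx = np.random.randint(xs.shape[0], size=batch_size)
    return xs[idx, :], ys[idx, :]

bx, by = random_batch(images, labels, 100)
print(bx.shape, by.shape)  # (100, 784) (100, 10)
```

Because both arrays are indexed with the same `idx`, each image stays paired with its label.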
# tested; Gopal
| tests/tf/basic_mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.021894, "end_time": "2021-04-15T20:36:13.806547", "exception": false, "start_time": "2021-04-15T20:36:13.784653", "status": "completed"} tags=[]
# # <center> Scraping historical tweets without a Twitter Developer Account
#
# 
# + [markdown] papermill={"duration": 0.019831, "end_time": "2021-04-15T20:36:13.846405", "exception": false, "start_time": "2021-04-15T20:36:13.826574", "status": "completed"} tags=[]
# The tool we will use:
# - snscrape
#
# What you need:
# - Python 3.8
#
# What you don't need:
# - a Twitter Developer Account
#
#
# For a research project on public discourse about the results of international large-scale assessments, I needed to scrape historical tweets going back all the way to the beginning of Twitter. That is how I discovered **snscrape**, a wonderful tool that is easy to set up and use.
#
# I didn't find snscrape right away: initially I was reading through the intricate details of the Twitter Developer Account, the application procedure, the different levels of access, rate limits, and so on. Luckily, a friend recommended snscrape, and the task of collecting tweets suddenly became extremely easy.
#
# Snscrape is a popular tool among social scientists for collecting tweets, at least as of 2021, because it bypasses several limitations of the Twitter API.
# Best of all, you don't need Twitter Developer Account credentials (as you do with <a href='https://www.tweepy.org/'>Tweepy</a>, for example).
#
#
# + [markdown] papermill={"duration": 0.021217, "end_time": "2021-04-15T20:36:13.888764", "exception": false, "start_time": "2021-04-15T20:36:13.867547", "status": "completed"} tags=[]
# ## Table of contents
#
#
# 1. [Installing snscrape](#1.-Installing-snscrape)
# 2. [How to use snscrape](#2.-How-to-use-snscrape)
# 3. [Calling snscrape CLI commands from Python Notebook](#3.-Calling-snscrape-CLI-commands-from-Python-Notebook)
# 4. [Using snscrape Python wrapper](#4.-Using-snscrape-Python-wrapper)
# 5. [Tweets meta-information gathered with snscrape](#5.-Tweets-meta-information-gathered-with-snscrape)
# 6. [Dataset manipulation: JSON, CSV and Pandas DataFrame](#6.-Dataset-manipulation:-JSON,-CSV-and-Pandas-DataFrame)
# 7. [Basic exploration of our collected dataset of tweets](#7.-Basic-exploration-of-our-collected-dataset-of-tweets)
# 8. [Bonus: Publishing your Jupyter Notebook on Medium](#8.-Bonus:-Publishing-your-Jupyter-Notebook-on-Medium)
# 9. [What next ? Sentiment analysis](#9.-What-next-?-Sentiment-analysis)
# + [markdown] papermill={"duration": 0.019919, "end_time": "2021-04-15T20:36:13.928744", "exception": false, "start_time": "2021-04-15T20:36:13.908825", "status": "completed"} tags=[]
# We begin with some standard library imports.
# + papermill={"duration": 0.028662, "end_time": "2021-04-15T20:36:13.978900", "exception": false, "start_time": "2021-04-15T20:36:13.950238", "status": "completed"} tags=[]
import os
import subprocess
import json
import csv
import uuid
from IPython.display import display_javascript, display_html, display
import pandas as pd
import numpy as np
from datetime import datetime, date, time
# + [markdown] papermill={"duration": 0.020757, "end_time": "2021-04-15T20:36:14.020124", "exception": false, "start_time": "2021-04-15T20:36:13.999367", "status": "completed"} tags=[]
# ## 1. Installing snscrape
# + [markdown] papermill={"duration": 0.021053, "end_time": "2021-04-15T20:36:14.063402", "exception": false, "start_time": "2021-04-15T20:36:14.042349", "status": "completed"} tags=[]
# Snscrape is available from its <a href='https://github.com/JustAnotherArchivist/snscrape'>official github project repository</a>.
#
# Snscrape has two versions:
# - the released version, which you can install by running **pip3 install snscrape** in a command-line terminal (on a Windows machine)
# - the **development version**, which is said to have richer functionality.
# I will use the latter.
# + [markdown] papermill={"duration": 0.019837, "end_time": "2021-04-15T20:36:14.103513", "exception": false, "start_time": "2021-04-15T20:36:14.083676", "status": "completed"} tags=[]
# First, let's check the current Python version, as snscrape documentation mentions **it requires Python 3.8**
# + papermill={"duration": 0.027976, "end_time": "2021-04-15T20:36:14.151796", "exception": false, "start_time": "2021-04-15T20:36:14.123820", "status": "completed"} tags=[]
from platform import python_version
print(python_version())
# + [markdown] papermill={"duration": 0.020357, "end_time": "2021-04-15T20:36:14.193413", "exception": false, "start_time": "2021-04-15T20:36:14.173056", "status": "completed"} tags=[]
# If you don't see 3.8.x in your case, please upgrade your Python version before you continue this tutorial, otherwise you will **not be able to install snscrape**.
# + [markdown] papermill={"duration": 0.020484, "end_time": "2021-04-15T20:36:14.234583", "exception": false, "start_time": "2021-04-15T20:36:14.214099", "status": "completed"} tags=[]
# ### Installing the development version of snscrape.
# This will not work when run in Kaggle, because Kaggle only offers Python 3.7 at the time of writing.
# + _kg_hide-output=true papermill={"duration": 9.790146, "end_time": "2021-04-15T20:36:24.045258", "exception": false, "start_time": "2021-04-15T20:36:14.255112", "status": "completed"} tags=[]
pip install git+https://github.com/JustAnotherArchivist/snscrape.git
# + _kg_hide-output=true papermill={"duration": 0.074091, "end_time": "2021-04-15T20:36:24.142883", "exception": false, "start_time": "2021-04-15T20:36:24.068792", "status": "completed"} tags=[]
import snscrape.modules.twitter as sntwitter
# + [markdown] papermill={"duration": 0.023295, "end_time": "2021-04-15T20:36:24.190543", "exception": false, "start_time": "2021-04-15T20:36:24.167248", "status": "completed"} tags=[]
# ## 2. How to use snscrape
#
# - through its command-line interface (CLI) in a terminal;
# - by using Python to run the CLI commands from, say, a Jupyter notebook (if you don't want to use the terminal directly);
# - or through the official snscrape Python wrapper. Unfortunately, the Python wrapper is not well documented.
#
# Parameters you can use:
# - --jsonl : get the data in JSONL format
# - --progress : report progress on stderr (default: False)
# - --max-results : limit the number of tweets to collect
# - --with-entity : include the entity (e.g. user, channel) as the first output item (default: False)
# - --since DATETIME : only return results newer than DATETIME (default: None)
# + _kg_hide-output=true papermill={"duration": 0.056726, "end_time": "2021-04-15T20:36:24.270969", "exception": false, "start_time": "2021-04-15T20:36:24.214243", "status": "completed"} tags=[]
#Run the snscrape help to see what options / parameters we can use
cmd = 'snscrape --help'
#This is similar to running os.system(cmd), which would show the output of running the command in the terminal
#window from which I started my Jupyter Notebook (which is what I used to develop this code).
#By using subprocess, I capture the command's output into a variable, whose contents I can then print here.
output = subprocess.check_output(cmd, shell=True)
print(output.decode("utf-8"))
# + [markdown] papermill={"duration": 0.023868, "end_time": "2021-04-15T20:36:24.319234", "exception": false, "start_time": "2021-04-15T20:36:24.295366", "status": "completed"} tags=[]
# ## 3. Calling snscrape CLI commands from Python Notebook
#
# Notice I make use of a few snscrape parameters:
# - --max-results, to limit the search
# - --jsonl, to have my results saved directly into a JSON file
# - --since yyyy-mm-dd, to collect tweets starting from this date
# - twitter-search, which tells snscrape what the actual text to search for is.
# Notice I use 'until:yyyy-mm-dd' inside the search text. This is a workaround for the fact that snscrape does not support an --until DATETIME parameter.
# So I'm using Twitter's own <strong>until</strong> search feature, i.e. a feature already built into Twitter search.
# For more <strong>search operators</strong> that you can use and pass on to snscrape as part of the text to search for, see the <a href='https://developer.twitter.com/en/docs/twitter-api/v1/rules-and-filtering/search-operators'>Twitter documentation on search operators</a>.
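Putting those pieces together, the full CLI call can be assembled as a plain string before handing it to `subprocess` (the hashtag, language filter, and dates below are just the placeholders used elsewhere in this notebook, not required values):

```python
# Assemble the snscrape CLI command; 'until:' lives inside the Twitter search
# query, while --since is a real snscrape flag.
max_results = 5000
since = '2018-12-01'
until = '2019-12-31'
query = f'#pisa2018 lang:fr until:{until}'
cmd = f'snscrape --max-results {max_results} --jsonl --progress --since {since} twitter-search "{query}"'
print(cmd)
```

The resulting string can then be executed with `subprocess.check_output(cmd, shell=True)` as shown earlier, or redirected into a JSON file with `> filename.json`.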
# + _kg_hide-output=true papermill={"duration": 0.042259, "end_time": "2021-04-15T20:36:24.385660", "exception": false, "start_time": "2021-04-15T20:36:24.343401", "status": "completed"} tags=[]
out_folder = '../working'
in_folder = '../input'
json_filename = out_folder + '/pisa2018-query-tweets.json'
#Using the OS library to call CLI commands in Python
#os.system(f'snscrape --max-results 5000 --jsonl --progress --since 2018-12-01 twitter-search "#pisa2018 lang:fr until:2019-12-31" > {json_filename}')
# + [markdown] papermill={"duration": 0.024112, "end_time": "2021-04-15T20:36:24.434259", "exception": false, "start_time": "2021-04-15T20:36:24.410147", "status": "completed"} tags=[]
# ## 4. Using snscrape Python wrapper
# + papermill={"duration": 0.032224, "end_time": "2021-04-15T20:36:24.490899", "exception": false, "start_time": "2021-04-15T20:36:24.458675", "status": "completed"} tags=[]
start = date(2016, 12, 5)
start = start.strftime('%Y-%m-%d')
stop = date(2016, 12, 14)
stop = stop.strftime('%Y-%m-%d')
keyword = 'pisa2018'
# + _kg_hide-output=true papermill={"duration": 0.047768, "end_time": "2021-04-15T20:36:24.563206", "exception": false, "start_time": "2021-04-15T20:36:24.515438", "status": "completed"} tags=[]
maxTweets = 1000
#We are going to write the data into a csv file
#filename = out_folder + '/' + keyword + start + '-' + stop + '.csv'
#csvFile = open(filename, 'a', newline='', encoding='utf8')
#We write to the csv file by using csv writer
#csvWriter = csv.writer(csvFile)
#csvWriter.writerow(['id','date','tweet'])
#I will use the following Twitter search operators:
#  since - start date for tweet collection
#  until - end date for tweet collection
#  -filter:links - from the Twitter search operators documentation
#    (https://developer.twitter.com/en/docs/twitter-api/v1/rules-and-filtering/search-operators)
#    it looks like this excludes tweets containing links from the search results
#  -filter:replies - removes @reply tweets from search results
#for i, tweet in enumerate(sntwitter.TwitterSearchScraper(keyword + ' since:' + start + ' until:' + \
#        stop + ' -filter:links -filter:replies').get_items()):
#    if i > maxTweets:
#        break
#    csvWriter.writerow([tweet.id, tweet.date, tweet.content])
#csvFile.close()
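The commented loop above writes one CSV row per tweet. Its structure can be exercised offline with stand-in tweet objects; in a real run the objects come from `sntwitter.TwitterSearchScraper(...).get_items()`, and the attribute names (`id`, `date`, `content`) match what snscrape exposes:

```python
import csv
import io
from collections import namedtuple

# Stand-in for the tweet objects snscrape yields (attribute names match its API)
Tweet = namedtuple('Tweet', ['id', 'date', 'content'])
fake_tweets = [Tweet(1, '2016-12-05', 'first'), Tweet(2, '2016-12-06', 'second')]

maxTweets = 1000
buffer = io.StringIO()  # in-memory file; use open(filename, 'a', newline='', encoding='utf8') for a real run
csvWriter = csv.writer(buffer)
csvWriter.writerow(['id', 'date', 'tweet'])
for i, tweet in enumerate(fake_tweets):
    if i > maxTweets:
        break
    csvWriter.writerow([tweet.id, tweet.date, tweet.content])
print(buffer.getvalue().splitlines()[0])
```

Swapping `fake_tweets` for the real scraper iterator is the only change needed for live collection.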
# + [markdown] papermill={"duration": 0.024569, "end_time": "2021-04-15T20:36:24.612934", "exception": false, "start_time": "2021-04-15T20:36:24.588365", "status": "completed"} tags=[]
# ## 5. Tweets meta-information gathered with snscrape
# + [markdown] papermill={"duration": 0.024366, "end_time": "2021-04-15T20:36:24.662390", "exception": false, "start_time": "2021-04-15T20:36:24.638024", "status": "completed"} tags=[]
# Let's have a look at all the information that is available for every single tweet scraped using snscrape.
#
# For this code I am using one example file that I made precisely for this, which contains a single JSON object. If you want to use a JSON file created with the steps above, you need to make some changes before you can run json.loads on it, as explained in <a href='https://stackoverflow.com/questions/21058935/python-json-loads-shows-valueerror-extra-data'>this stackoverflow discussion</a>.
#
# The solution for pretty printing JSON data inside a Jupyter Notebook comes from <a href='https://gist.github.com/nerevar/a068ee373e22391ad3a1413b3e554fb5'>this github project</a>.
#
# Click on the + icons to expand the contents of that particular item.
# + papermill={"duration": 0.050474, "end_time": "2021-04-15T20:36:24.737718", "exception": false, "start_time": "2021-04-15T20:36:24.687244", "status": "completed"} tags=[]
filename = '../input/example/example.json'
with open(filename) as json_file:
    data = json.load(json_file)

class RenderJSON(object):
    def __init__(self, json_data):
        if isinstance(json_data, (dict, list)):
            self.json_str = json.dumps(json_data)
        else:
            self.json_str = json_data
        self.uuid = str(uuid.uuid4())

    def _ipython_display_(self):
        display_html('<div id="{}" style="color: #000000; background-color: #ffffff; height: 600px; width:100%;font: 12px/18px monospace !important;"></div>'.format(self.uuid), raw=True)
        display_javascript("""
        require(["https://rawgit.com/caldwell/renderjson/master/renderjson.js"], function() {
            renderjson.set_show_to_level(2);
            document.getElementById('%s').appendChild(renderjson(%s))
        });
        """ % (self.uuid, self.json_str), raw=True)

RenderJSON([data])
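If you don't need the collapsible viewer, the standard library's `json.dumps` with `indent` gives a quick static pretty-print of the same data (the sample dict below is made up for illustration):

```python
import json

# A made-up fragment shaped like a scraped tweet record
sample = {'content': 'example tweet', 'user': {'username': 'someone'}}
pretty = json.dumps(sample, indent=2, sort_keys=True)
print(pretty)
```

This works anywhere Python runs, with no JavaScript dependency.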
# + [markdown] papermill={"duration": 0.025513, "end_time": "2021-04-15T20:36:24.789214", "exception": false, "start_time": "2021-04-15T20:36:24.763701", "status": "completed"} tags=[]
# ## 6. Dataset manipulation: JSON, CSV and Pandas DataFrame
# + [markdown] papermill={"duration": 0.025404, "end_time": "2021-04-15T20:36:24.840966", "exception": false, "start_time": "2021-04-15T20:36:24.815562", "status": "completed"} tags=[]
# ### Converting JSON to Pandas DataFrame
#
# Pandas DataFrame is **the** data structure of choice in Data Science, so we read the JSON file into a DataFrame.
#
# Then we save it as CSV, since CSV is the most common file type for Data Science small projects.
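A tiny in-memory example of the same `lines=True` read: with this flag, pandas treats each line of the input as one JSON record (the two records here are invented for illustration):

```python
import io
import pandas as pd

# Two JSONL records, one per line, as snscrape's --jsonl output produces
jsonl = '{"id": 1, "content": "first tweet"}\n{"id": 2, "content": "second tweet"}\n'
df_small = pd.read_json(io.StringIO(jsonl), lines=True)
print(df_small.shape)  # (2, 2)
```

Each line becomes one row, and the JSON keys become the DataFrame columns.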
# + papermill={"duration": 0.11607, "end_time": "2021-04-15T20:36:24.983475", "exception": false, "start_time": "2021-04-15T20:36:24.867405", "status": "completed"} tags=[]
filename = 'pisa2018-query-tweets'
file = in_folder + '/pisa2018-keyword-in-tweeter-archive/' + filename
tweets_df = pd.read_json(file +'.json', lines=True)
# + papermill={"duration": 0.034999, "end_time": "2021-04-15T20:36:25.044760", "exception": false, "start_time": "2021-04-15T20:36:25.009761", "status": "completed"} tags=[]
tweets_df.shape
# + papermill={"duration": 0.066426, "end_time": "2021-04-15T20:36:25.137435", "exception": false, "start_time": "2021-04-15T20:36:25.071009", "status": "completed"} tags=[]
tweets_df.head(3)
# + [markdown] papermill={"duration": 0.027621, "end_time": "2021-04-15T20:36:25.192258", "exception": false, "start_time": "2021-04-15T20:36:25.164637", "status": "completed"} tags=[]
# ### Saving DataFrame to CSV
# + papermill={"duration": 0.063773, "end_time": "2021-04-15T20:36:25.283132", "exception": false, "start_time": "2021-04-15T20:36:25.219359", "status": "completed"} tags=[]
tweets_df.to_csv(out_folder + '/' + filename +'.csv', index = False)
# + [markdown] papermill={"duration": 0.026915, "end_time": "2021-04-15T20:36:25.337161", "exception": false, "start_time": "2021-04-15T20:36:25.310246", "status": "completed"} tags=[]
# ## 7. Basic exploration of our collected dataset of tweets
# + [markdown] papermill={"duration": 0.026732, "end_time": "2021-04-15T20:36:25.391036", "exception": false, "start_time": "2021-04-15T20:36:25.364304", "status": "completed"} tags=[]
# ### Basic introduction to tweets
#
# Tweets are 280-character messages (hence the term 'microblogging'). Just like on other social media platforms, you create an account and then you can start participating in the tweetverse.
#
# Tweets act as short status updates. Tweets appear on timelines, which are collections of tweets sorted in chronological order. On your account's home page, you're shown a timeline where tweets from the people you follow are displayed.
#
# You can post your own brand-new tweet, retweet an existing tweet (which means you share the exact same tweet), or quote an existing tweet (similar to retweeting, but you can add your own comment to it).
#
# You can also reply to someone else's tweets or 'like' them.
#
# Tweets often contain **entities**, which are mentions of:
# - other users, which appear in the form of @other_user
# - places
# - urls
# - media that was attached to the tweet
# - hashtags, that look like #example_hashtag. Hashtags are just a way to apply a label on a tweet. If I'm tweeting something about results of PISA, the Programme for International Student Assessment, I will likely use #oecdpisa in my tweet, for example.
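As an illustration, hashtags and mentions can be pulled out of raw tweet text with a simple regular expression (snscrape also exposes them as structured fields, so this is only a sketch; the sample text is made up):

```python
import re

text = 'Results are out! #oecdpisa #pisa2018 via @OECD'
hashtags = re.findall(r'#\w+', text)   # tokens starting with '#'
mentions = re.findall(r'@\w+', text)   # tokens starting with '@'
print(hashtags, mentions)  # ['#oecdpisa', '#pisa2018'] ['@OECD']
```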
# + [markdown] papermill={"duration": 0.026781, "end_time": "2021-04-15T20:36:25.444769", "exception": false, "start_time": "2021-04-15T20:36:25.417988", "status": "completed"} tags=[]
# ### Counting the number of Tweets we scraped
#
# The following cell is overkill in this particular scenario, but imagine you just scraped 1 million tweets and you want to know how many you got. The cell below is a very efficient way to count in that case.
# + papermill={"duration": 0.037837, "end_time": "2021-04-15T20:36:25.510026", "exception": false, "start_time": "2021-04-15T20:36:25.472189", "status": "completed"} tags=[]
with open(file + '.json') as f:
    num = sum(1 for line in f)
print(num)
# + [markdown] papermill={"duration": 0.027293, "end_time": "2021-04-15T20:36:25.565073", "exception": false, "start_time": "2021-04-15T20:36:25.537780", "status": "completed"} tags=[]
# ### Check tweets for a particular text
# + papermill={"duration": 0.040892, "end_time": "2021-04-15T20:36:25.633669", "exception": false, "start_time": "2021-04-15T20:36:25.592777", "status": "completed"} tags=[]
substring = 'justesse'
count = 0
with open(file + '.json', 'r') as f:
    for i, line in enumerate(f):
        if substring in line:
            count = count + 1
            obj = json.loads(line)
            print(f'Tweet number {count}: {obj["content"]}')
print(count)
# + [markdown] papermill={"duration": 0.028312, "end_time": "2021-04-15T20:36:25.691572", "exception": false, "start_time": "2021-04-15T20:36:25.663260", "status": "completed"} tags=[]
# The actual content of the tweet is available through tweets_df['content'] or tweets_df.content.
#
# renderedContent seems to contain the same information as content.
# + papermill={"duration": 0.03796, "end_time": "2021-04-15T20:36:25.758530", "exception": false, "start_time": "2021-04-15T20:36:25.720570", "status": "completed"} tags=[]
tweets_df.iloc[0].content
# + [markdown] papermill={"duration": 0.028131, "end_time": "2021-04-15T20:36:25.814664", "exception": false, "start_time": "2021-04-15T20:36:25.786533", "status": "completed"} tags=[]
# Links mentioned in the tweet are also listed separately in the outlinks column.
# + papermill={"duration": 0.037951, "end_time": "2021-04-15T20:36:25.880984", "exception": false, "start_time": "2021-04-15T20:36:25.843033", "status": "completed"} tags=[]
tweets_df.iloc[0].outlinks
# + [markdown] papermill={"duration": 0.029288, "end_time": "2021-04-15T20:36:25.940049", "exception": false, "start_time": "2021-04-15T20:36:25.910761", "status": "completed"} tags=[]
# We can gauge the popularity of a tweet through these features:
# - replyCount
# - retweetCount
# - likeCount
# - quoteCount
# + papermill={"duration": 0.039321, "end_time": "2021-04-15T20:36:26.007776", "exception": false, "start_time": "2021-04-15T20:36:25.968455", "status": "completed"} tags=[]
popularity_columns = ['replyCount', 'retweetCount', 'likeCount', 'quoteCount']
tweets_df.iloc[0][popularity_columns]
# + [markdown] papermill={"duration": 0.029852, "end_time": "2021-04-15T20:36:26.066356", "exception": false, "start_time": "2021-04-15T20:36:26.036504", "status": "completed"} tags=[]
# Find the most retweeted tweet in our dataset.
# + papermill={"duration": 0.039931, "end_time": "2021-04-15T20:36:26.134971", "exception": false, "start_time": "2021-04-15T20:36:26.095040", "status": "completed"} tags=[]
tweets_df.iloc[tweets_df.retweetCount.idxmax()][['content','retweetCount']]
# + [markdown] papermill={"duration": 0.029013, "end_time": "2021-04-15T20:36:26.193082", "exception": false, "start_time": "2021-04-15T20:36:26.164069", "status": "completed"} tags=[]
# ## 8. Bonus: Publishing your Jupyter Notebook on Medium
# + _kg_hide-output=true papermill={"duration": 8.523554, "end_time": "2021-04-15T20:36:34.745521", "exception": false, "start_time": "2021-04-15T20:36:26.221967", "status": "completed"} tags=[]
pip install jupyter_to_medium
# + _kg_hide-output=true papermill={"duration": 0.043699, "end_time": "2021-04-15T20:36:34.822681", "exception": false, "start_time": "2021-04-15T20:36:34.778982", "status": "completed"} tags=[]
#uncomment and customize the code below in order to publish your notebook on your Medium account
'''
import jupyter_to_medium as jtm
jtm.publish('Scraping historical tweets without a Twitter Developer Account.ipynb',
integration_token='<PASSWORD>',
pub_name=None,
title='Scraping historical tweets without a Twitter Developer Account',
tags=['scraping with Python', 'Twitter archive'],
publish_status='draft',
notify_followers=False,
license='all-rights-reserved',
canonical_url=None,
chrome_path=None,
save_markdown=False,
table_conversion='chrome'
)
'''
# + [markdown] papermill={"duration": 0.033398, "end_time": "2021-04-15T20:36:34.889491", "exception": false, "start_time": "2021-04-15T20:36:34.856093", "status": "completed"} tags=[]
# And that's about it for a quick intro to scraping tweets without having to apply for a Twitter Developer Account, and with no limit on the maximum number of tweets you can get or on how far back in time you can go.
#
# ## 9. What next ? Sentiment analysis
#
# What should you do next with the tweets you just scraped? In my case, I was very interested in <a href='https://www.kaggle.com/mishki/twitter-sentiment-analysis-using-nlp-techniques'>NLP for sentiment analysis of tweets</a>; you could also try topic modelling using Latent Dirichlet Allocation (LDA), or build a network graph from this data and apply network analysis methods to it.
| third_project/scraping-historical-tweets-without-twitter-api.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Computer vision pipeline
# A computer vision pipeline is a series of steps that most computer vision applications will go through. Many vision applications start off by acquiring images and data, then processing that data, performing some analysis and recognition steps, then finally performing an action. The general pipeline and a specific example of a pipeline applied to facial expression recognition is pictured below.
# 
# ### Standardizing Data
# Pre-processing images is all about standardizing input images so that you can move further along the pipeline and analyze every image in the same way. In machine learning tasks, the pre-processing step is often one of the most important. For example, consider a traffic sign classification task:
# ![image.png](attachment:image.png)
# The algorithm counts up the number of red pixels in a given image, and if there are enough of them, it classifies the image as a stop sign. In this example, we are just extracting a color feature and skipping the step of selecting an area of interest (we look at the whole image). If the images are different sizes, or even cropped differently, then this counting tactic will likely fail. So, it's important to pre-process these images so that they are standardized before they move along the pipeline. In the example below, you can see that the images are pre-processed into a standard square size.
#
# In practice, you'll often see a classification pipeline that looks like this:
# 
# # Images as Grids of Pixels
# ### Import resources
# +
import numpy as np
import matplotlib.pyplot as plt
import cv2 # computer vision library
# %matplotlib inline
# -
# ### Read in and display the image
# +
# Read in the image
image = cv2.imread('images/waymo_car.jpg')
# Print out the image dimensions
print('Image dimensions:', image.shape)
# Change from color to grayscale (OpenCV loads images in BGR order, so use COLOR_BGR2GRAY)
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
plt.imshow(gray_image, cmap='gray')
# +
# Print specific grayscale pixel values
# What is the pixel value at x = 400 and y = 300 (on the body of the car)?
x = 400
y = 300
print(gray_image[y,x])
# +
#Find the maximum and minimum grayscale values in this image
max_val = np.amax(gray_image)
min_val = np.amin(gray_image)
print('Max: ', max_val)
print('Min: ', min_val)
# +
# Display a specific part of the image
sub_image = gray_image[50:100, 200:300]
print(sub_image)
plt.imshow(sub_image, cmap='gray')
# +
# Create a 5x5 image using just grayscale, numerical values
tiny_image = np.array([[0, 20, 30, 150, 120],
[200, 200, 250, 70, 3],
[50, 180, 85, 40, 90],
[240, 100, 50, 255, 10],
[30, 0, 75, 190, 220]])
plt.imshow(tiny_image, cmap='gray')
# +
# Code to display a grayscale gradient
gradient_size = 5
gradient = np.zeros((gradient_size, gradient_size))
scale = 255 / (gradient.size-1)
for x in range(gradient.shape[0]):
    for y in range(gradient.shape[1]):
        gradient[y, x] = (x * 5 + y) * scale
print(gradient)
plt.imshow(gradient, cmap='gray')
| Notebooks/1. CV. Images as Numerical Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
# # Adult Income: Exploratory Analysis and Prediction
#
# This notebook has been created to help you go through the steps of a Machine Learning project life-cycle, from business understanding to presenting the final result to the business.
#
# ## 1. Business Understanding
# ## 2. Data Acquisition
# Automatic data acquisition
# Convert data into a Pandas Data Frame
#
# ## 3- Data Munging
# Treating missing values
# Working with outliers
#
# ## 4- Exploratory Data Analysis
# Univariate Analysis
# Bivariate analysis
#
# ## 5- Feature Engineering
# Derived Features
# Categorical Feature encoding
#
# ## 6- Preparation, Models and Evaluation
# Preparation
# Models and Evaluation
#
# ## 7- Next Step
#
#
# + [markdown] _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
# ## 1- Business Understanding
# Our data contains an individual's annual income results based on various factors (Education level, Occupation,Gender, Age, etc.).
# Given a new individual, our goal is to predict if that person makes more or less than 50K.
# -
# ## 2- Data Acquisition
# We are going to acquire our dataset into **text** format, after downloading it from the **[UCI Machine Learning](https://archive.ics.uci.edu/ml/datasets/adult)** website. Here are the following libraries that we will be using to acquire the dataset and perform all the preprocessing and analysis.
import requests
import os
# This function will be used to acquire the data from the UCI website
def aquire_data(path_to_data, data_urls):
    if not os.path.exists(path_to_data):
        os.mkdir(path_to_data)
    for url in data_urls:
        data = requests.get(url).content
        filename = os.path.join(path_to_data, os.path.basename(url))
        with open(filename, 'wb') as file:
            file.write(data)
# +
data_urls = ["https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.names",
"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test"]
aquire_data('data', data_urls)
# -
# Check the success of accessing the data
print('Output n° {}\n'.format(1))
# ! find data
# We can notice that all our data has been acquired from the UCI website. Here we have:
# * **adult.names**: the different column names
# * **adult.data**: all the observations in the training data
# * **adult.test**: all the observations in the test data
#
column_names = ["Age", "Workclass", "fnlwgt", "Education", "Education-Num",
"Martial Status", "Occupation", "Relationship", "Race", "Sex",
"Capital-Gain", "Capital-Loss", "Hours-per-week", "Country", "Income"]
# ### Convert Data into a Pandas Data Frame
import pandas as pd
import numpy as np
# Here we are going to acquire the training and the test datasets.
# The corresponding column names have been specified in the previous **column_names** variable. We use the regular expression **' \*, \*'** as the separator to trim any whitespace around commas in the datasets. All missing values are marked with **?**, so **na_values** is used to take them into account during loading. Finally, we specify **engine='python'** to avoid the warning that comes with using a regular-expression separator.
train = pd.read_csv('data/adult.data', names=column_names, sep=' *, *', na_values='?',
engine='python')
test = pd.read_csv('data/adult.test', names=column_names, sep=' *, *', skiprows=1,
engine='python', na_values='?')
test.Income.unique()
train.Income.unique()
# We need to transform the **Income** column value for test data, in order to remove the **"."** at the end
test.Income = np.where(test.Income == '<=50K.', '<=50K', '>50K')
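# An alternative that does not enumerate the categories is to strip the trailing period directly; this keeps working even if the label set changes. A sketch on a toy Series standing in for the test split:

```python
import pandas as pd

# Hypothetical miniature frame standing in for the test split
test = pd.DataFrame({'Income': ['<=50K.', '>50K.', '<=50K.']})

# Strip the trailing '.' from every label instead of hard-coding both values
test['Income'] = test['Income'].str.rstrip('.')
print(test['Income'].tolist())  # ['<=50K', '>50K', '<=50K']
```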
# Concatenate train and test. We will split it before the training phase
df = pd.concat((train, test), axis=0)
df.Income.unique()
# +
print('Output n° {}\n'.format(2))
'''
First 5 observations
'''
df.head()
# +
print('Output n° {}\n'.format(3))
'''
Last 5 observations
'''
df.tail()
# +
print('Output n° {}\n'.format(4))
print('Our data contains {} observations and {} columns.'.format(df.shape[0],
df.shape[1]))
# -
# ## 3- Data Munging
# In this step, we will perform two main tasks.
# * **Dealing with missing values**
# During data collection, it is very common to face missing data, which can occur for many reasons (confidentiality, errors, etc.). It is important to understand these gaps in order to fill them with appropriate techniques before applying any machine learning algorithm.
#
#
# * **Dealing with outliers**
# Outliers are values that lie far away from the normal values observed in the rest of the data. They can introduce high bias into the final model's performance, and can even lead us to wrong conclusions during the analysis step.
#
# #### A- Treating missing values
# We will use pandas **isnull()** function to look at all the missing values for each column.
print('Output n° {}\n'.format(5))
print(df.isnull().sum())
# To the left, we have the name of the features and the number of missing values to the right. We can see that:
# * **Workclass** has 1836 missing values
# * **Occupation** has 1843 missing values
# * **Country** has 583 missing values
#
# To deal with the missing data, we could simply remove all the records (rows/observations) containing missing values. But that is not a good choice here, because we would lose a lot of data. Instead, we will use the following strategy:
# * Replace missing values in categorical columns with the mode (most frequent category) of that column.
# * Replace missing values in numerical columns with the median of that column. We could use the mean instead of the median, but the mean is very sensitive to outliers (extreme values).
#
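# For completeness, the numerical branch of that strategy could look like the sketch below. This is hypothetical: the dataset turns out to have no missing numeric values, so the function is demonstrated on a toy frame.

```python
import pandas as pd

def fill_numeric_missing(data, column):
    """Replace missing values in a numeric column with that column's median."""
    data.loc[data[column].isnull(), column] = data[column].median()

# Toy example: the median of [10, 30, 50] is 30
toy = pd.DataFrame({'Age': [10.0, None, 30.0, 50.0]})
fill_numeric_missing(toy, 'Age')
print(toy['Age'].tolist())  # [10.0, 30.0, 30.0, 50.0]
```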
# To identify which column has which type, we can use the pandas **dtypes** attribute.
#
#
print('Output n° {}\n'.format(6))
print(df.dtypes)
# To the left, we have the column names, and their corresponding types to the right. We can see that the columns with missing values (discussed previously) all hold categorical data (object).
# Next, we can look at all the distinct (unique) values in each of those columns with the pandas **unique()** function.
# Workclass
print('Output n° {}\n'.format(7))
print('Number of unique values: {}'.format(len(df['Workclass'].unique())))
print(df['Workclass'].unique())
# Workclass has 9 unique values including **nan** (missing value)
# Occupation
print('Output n° {}\n'.format(8))
print('Number of unique values: {}'.format(len(df['Occupation'].unique())))
print(df['Occupation'].unique())
# The Occupation column has 15 unique values, including **nan**
# Country
print('Output n° {}\n'.format(9))
print('Number of unique values: {}'.format(len(df['Country'].unique())))
print(df['Country'].unique())
# The Country column has 42 unique values, including **nan**
# We know all the columns with missing values, and their type. We also have an idea of the unique values of each of those columns, now, we can perform the missing values replacement process.
#
# To do so, we will create a helper function that performs this task for any column, using Python's built-in **statistics** module.
import statistics as stat
def fill_categorical_missing(data, column):
data.loc[data[column].isnull(), column] = stat.mode(data[column])
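# A pandas-only equivalent is also possible: `fillna` with the column's `mode()` avoids the extra import. A sketch on a toy frame:

```python
import pandas as pd

df_demo = pd.DataFrame({'Workclass': ['Private', 'Private', None, 'State-gov']})
# mode() returns a Series of the most frequent values; take the first one
df_demo['Workclass'] = df_demo['Workclass'].fillna(df_demo['Workclass'].mode()[0])
print(df_demo['Workclass'].tolist())  # ['Private', 'Private', 'Private', 'State-gov']
```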
# +
cols_to_fill = ['Workclass', 'Occupation', 'Country']
for col in cols_to_fill:
fill_categorical_missing(df, col)
print('Output n° {}\n'.format(10))
# Check the data for any remaining missing values
print(df.isnull().sum())
# -
# We can see that all the values to the right are equal to zero, which means that we have no missing values in our dataset.
# ### B- Dealing with outliers
# To identify outliers in our dataset, we will draw a **seaborn** **boxplot** for each of our numerical columns, and display the result with **matplotlib**'s **show()** function.
# With the help of **Output n°6 (i.e. print(df.dtypes))**, we can see all our numerical columns; but a better way to look at them is to apply the pandas **describe()** function, which gives richer statistical information about all the numerical columns.
#
# In this part, we will use a copy of our dataset for the outlier analysis, then create a helper function that will finally be applied to the original data for outlier removal.
df_cp = df.copy()
df_cp.head()
df_cp.describe()
# We have 6 numerical columns (Age to Hours-per-week). For each of them, the table gives several statistics:
# * **count**: the total number of observations in the column.
# * **mean**: the mean value of the column.
# * **std**: the standard deviation.
# * **25%**, **50%** and **75%**: the quartiles.
#
# With the quartiles, min and max, the dataset can be split into 4 buckets:
# * Bucket 1: below the 25% mark. For the **Age** column, for example, 25% of people are under **28 years old**.
# * Bucket 2: between 25% and 50%: 25% of them (50%-25%) are between **28 and 37 years old**.
# * Bucket 3: between 50% and 75%: 25% of them are between **37 and 48 years old**.
# * Bucket 4: above 75%: 25% of them are over **48 years old**.
#
# **All the values lying beyond 1.5×IQR from the quartiles are considered outliers.**
# IQR = Interquartile Range = 75th percentile - 25th percentile.
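# That 1.5×IQR rule can be computed directly from the quartiles. A sketch on a toy column with a planted outlier; on the real data, replace `values` with e.g. `df_cp['Age']`:

```python
import numpy as np

values = np.array([25, 28, 30, 32, 35, 37, 40, 45, 48, 120])  # 120 is a planted outlier
q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = values[(values < lower) | (values > upper)]
print(outliers)  # [120]
```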
#
# This image gives a better understanding of a boxplot.
# 
#
# We will then create a helper function that removes all the outliers from our dataset. But before that, let's have a look at the boxplots.
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
# Age
sns.boxplot(y='Age', data=df_cp)
plt.show()
# Let's calculate the 0th to 100th percentiles (in steps of 10) to find a suitable percentile cutoff for outlier removal
def ten_to_ten_percentiles(data, column):
    # sort once, then read off each decile
    var = np.sort(data[column].values, axis=None)
    for i in range(0, 100, 10):
        print('{} percentile value is {}'.format(i, var[int(len(var) * (float(i)/100))]))
    print('100 percentile value is {}'.format(var[-1]))
ten_to_ten_percentiles(df_cp, 'Age')
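# As an aside, the same decile table can be obtained in a single call with `np.percentile`, which accepts a sequence of percentile ranks. A sketch on toy values:

```python
import numpy as np

def decile_table(values):
    """Return the 0th-100th percentiles in steps of 10 as a list."""
    return list(np.percentile(values, list(range(0, 101, 10))))

# On the real data this would be decile_table(df_cp['Age'])
print(decile_table([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))
```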
# The boxplot of Age shows no extreme values, and the percentile check confirms this observation.
# Calculating column values at each percentile from 90 to 100
def percentiles_from_90(data, column):
    var = np.sort(data[column].values, axis=None)
    for i in range(90, 100):
        print('{} percentile value is {}'.format(i, var[int(len(var) * (float(i)/100))]))
    print('100 percentile value is {}'.format(var[-1]))
# Going deeper into the percentile values gives more information. Here is a function that prints the percentile values from the 99.0th to the 100th percentile.
# Calculating column values at each percentile 99.0, 99.1, ..., 99.9, 100
def percentiles_from_99(data, column):
    var = np.sort(data[column].values, axis=None)
    for i in np.arange(0.0, 1.0, 0.1):
        print('{:.1f} percentile value is {}'.format(99 + i, var[int(len(var) * (float(99 + i)/100))]))
    print('100 percentile value is {}'.format(var[-1]))
# Education-Num
sns.boxplot(y='Education-Num', data=df_cp)
plt.show()
ten_to_ten_percentiles(df_cp, 'Education-Num')
# There are no anomalies in the education numbers.
# Capital-Gain
sns.boxplot(y='Capital-Gain', data=df_cp)
plt.show()
ten_to_ten_percentiles(df_cp, 'Capital-Gain')
percentiles_from_90(df_cp, 'Capital-Gain')
percentiles_from_99(df_cp, 'Capital-Gain')
# Removing the outliers based on 99.5th percentile of Capital-Gain
df_cp = df_cp[df_cp['Capital-Gain']<=34095]
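# Rather than hard-coding 34095, the cutoff could also be computed from the data itself; on the real column this should reproduce the 99.5th-percentile value used above. A sketch on a toy stand-in Series:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the real column; on the real data use df_cp['Capital-Gain']
capital_gain = pd.Series(list(range(0, 1000)) + [99999])
threshold = np.percentile(capital_gain, 99.5)
trimmed = capital_gain[capital_gain <= threshold]
print(len(capital_gain) - len(trimmed), 'rows removed')  # 5 rows removed
```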
# Capital-Gain
sns.boxplot(y='Capital-Gain', data=df_cp)
plt.show()
# Capital-Loss
sns.boxplot(y='Capital-Loss', data=df_cp)
plt.show()
ten_to_ten_percentiles(df_cp, 'Capital-Loss')
percentiles_from_90(df_cp, 'Capital-Loss')
percentiles_from_99(df_cp, 'Capital-Loss')
# No extreme values stand out here the way they did for Capital-Gain.
# Hours-per-week
sns.boxplot(y='Hours-per-week', data=df_cp)
plt.show()
ten_to_ten_percentiles(df_cp, 'Hours-per-week')
# There is no special extreme value here.
# Now we will create a helper function to remove all the outliers, based on our previous univariate analysis.
def remove_outliers(data):
    a = data.shape[0]
    print('Number of salary records = {}'.format(a))
    # Keep only the rows at or below the 99.5th percentile of Capital-Gain
    data = data[data['Capital-Gain'] <= 34095]
    b = data.shape[0]
    print('Number of outliers from the Capital-Gain column = {}'.format(a - b))
    print('Total outliers removed = {}'.format(a - b))
    print('-----'*10)
    return data
# +
print('Removing all the outliers from the data')
print('-----'*10)
df_no_outliers = remove_outliers(df)
proportion_remaining_data = float(len(df_no_outliers)) / len(df)
print('Proportion of observations that remain after removing outliers = {}'.format(proportion_remaining_data))
# -
# After removing the outliers, 99.49% of the dataset still remains.
# ## 4- Exploratory Data Analysis
# First things first!
# Let's take a look at the number of people who make more than 50K and those who don't.
df_no_outliers.Income.unique()
palette = {"<=50K":"r", ">50K":"g"}
sns.countplot(x="Income", data=df_no_outliers, hue="Income", palette=palette)
# We can see that 24720 adults make less than 50K dollars while only 7841 make more than 50K dollars. So only about 24% of adults make more than 50K dollars.
# #### A- Numerical Data
# For this part, we compute centrality measures (mean, median) and dispersion measures (range, percentiles, variance, standard deviation).
# All those information can be found with pandas **describe()** function.
df_no_outliers.describe()
# From this result, we can see that our features are on different scales; that information will be useful in the feature engineering step. For a simple visualization, we can plot the probability density of each of those features.
# ##### A.1- Univariate Analysis
# Age
df_no_outliers.Age.plot(kind='kde', title='Density plot for Age', color='c')
# Here, the Age feature has a positively skewed distribution.
# Capital-Gain
df_no_outliers['Capital-Gain'].plot(kind='kde', title='Density plot for Capital-Gain', color='c')
# Capital-Loss
df_no_outliers['Capital-Loss'].plot(kind='kde', title='Density plot for Capital-Loss', color='c')
# Hours-per-week
df_no_outliers['Hours-per-week'].plot(kind='kde', title='Density plot for Hours-per-week', color='c')
# We need to deal with the problem of distribution for all our numerical data values in the feature engineering part.
# ##### A.2- Bivariate analysis
# We will try to determine the correlation between some numerical data.
# Capital-Gain and Education-Num
# use scatter plot for bi-variate distribution
df_no_outliers.plot.scatter(x='Education-Num', y='Capital-Gain', color='c', title='scatter plot : Education-Num vs Capital-Gain');
# There is a positive relationship between the number of years of education and capital gain: the more educated you are, the more capital you are likely to have.
# Hours-per-week and Education-Num
# use scatter plot for bi-variate distribution
df_no_outliers.plot.scatter(x='Education-Num', y='Hours-per-week', color='c', title='scatter plot : Education-Num vs Hours-per-week');
# There is no interesting pattern.
# Capital-Gain and Hours-per-week
# use scatter plot for bi-variate distribution
df_no_outliers.plot.scatter(x='Hours-per-week', y='Capital-Gain', color='c', title='scatter plot : Hours-per-week vs Capital-Gain');
# We can not identify any interesting pattern from this visualization.
# Capital-Gain and Capital-Loss
# use scatter plot for bi-variate distribution
df_no_outliers.plot.scatter(x='Capital-Gain', y='Capital-Loss', color='c', title='scatter plot : Capital-Loss vs Capital-Gain');
# People without any capital gain are also the ones losing money, which makes sense: without capital gains, one may need to borrow at interest and then keep **"surviving"**.
numerical_cols = ['int64']
plt.figure(figsize=(10, 10))
sns.heatmap(
df_no_outliers.select_dtypes(include=numerical_cols).corr(),
cmap=plt.cm.RdBu,
vmax=1.0,
linewidths=0.1,
linecolor='white',
square=True,
annot=True
)
# From the correlation matrix, we can see that the level of relationship is very low between the numerical features.
#
# #### B- Categorical Data
# There are many explorations we can do in order to have a better understanding of the data.
# Here are some possibilities we could have:
# * B.1- Income VS Occupation for countries in each continent
# * B.2- Income VS Workclass for countries in each continent
# * B.3- Income VS Marital Status for countries in each continent
# * B.4- Mean Capital Gain VS Marital Status for each continent
#
df_no_outliers.head()
# We have many countries from different continents. For better visualization, it is helpful to create a new column, **Continent**, so that information can easily be grouped by continent and its corresponding countries.
df_no_outliers['Country'].unique()
# There is a country named **South**, which is clearly an error. It might have been meant as a **continent**, in which case we could map it to the right one; but both **South-America** and **South-Asia** are plausible candidates. To avoid introducing more errors into our data, it is better to remove the corresponding observations, as long as that does not cost too much data.
# +
south_df = df_no_outliers[df_no_outliers['Country']=='South']
a = south_df.shape[0]
b = df_no_outliers.shape[0]
print('{} rows correspond to South, which represents {}% of the data'.format(a, (1.0*a/b)*100))
# -
# We can remove all the rows with **Country == South**, because they correspond to only 0.244% of the original dataset.
south_index = south_df.index
df_no_outliers.drop(south_index, inplace=True)
# We are going to perform the following preprocessing:
# * Outlying-US(Guam-USVI-etc) ==> Outlying-US
# * Trinadad&Tobago ==> Trinadad-Tobago
# * Hong ==> Hong-Kong
# Changing the corresponding values.
df_no_outliers.loc[df_no_outliers['Country']=='Outlying-US(Guam-USVI-etc)', 'Country'] = 'Outlying-US'
df_no_outliers.loc[df_no_outliers['Country']=='Trinadad&Tobago', 'Country'] = 'Trinadad-Tobago'
df_no_outliers.loc[df_no_outliers['Country']=='Hong', 'Country'] = 'Hong-Kong'
# Check if the process worked
df_no_outliers['Country'].unique()
# We can clearly see that the changes have been made.
# +
asia = ['India', 'Iran', 'Philippines', 'Cambodia', 'Thailand', 'Laos', 'Taiwan',
'China', 'Japan', 'Vietnam', 'Hong-Kong']
america = ['United-States', 'Cuba', 'Jamaica', 'Mexico', 'Puerto-Rico', 'Honduras',
'Canada', 'Columbia', 'Ecuador', 'Haiti', 'Dominican-Republic',
'El-Salvador', 'Guatemala', 'Peru', 'Outlying-US', 'Trinadad-Tobago',
'Nicaragua', '']
europe = ['England', 'Germany', 'Italy', 'Poland', 'Portugal', 'France', 'Yugoslavia',
'Scotland', 'Greece', 'Ireland', 'Hungary', 'Holand-Netherlands']
# -
# Now, create a dictionary to map each country to its corresponding continent.
continents = {country: 'Asia' for country in asia}
continents.update({country: 'America' for country in america})
continents.update({country: 'Europe' for country in europe})
# Then use Pandas map function to map continents to countries
df_no_outliers['Continent'] = df_no_outliers['Country'].map(continents)
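# A quick sanity check is worthwhile here: any country missing from the dictionary would map to NaN. A sketch on toy data; on the real frame, the same pattern applied to `df_no_outliers` should return an empty list.

```python
import pandas as pd

continents = {'India': 'Asia', 'France': 'Europe'}
countries = pd.Series(['India', 'France', 'Atlantis'])  # 'Atlantis' is deliberately unmapped
mapped = countries.map(continents)
print(countries[mapped.isnull()].tolist())  # ['Atlantis']
```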
# Here, we have the continents corresponding to all the countries present in our dataset.
df_no_outliers['Continent'].unique()
# ## B.1- Income VS Occupation for countries in each continent
# We create a helper function that produces the plots for every country of a continent in one shot.
def Occupation_VS_Income(continent):
choice = df_no_outliers[df_no_outliers['Continent']==continent]
countries = list(choice['Country'].unique())
for country in countries:
pd.crosstab(choice[choice['Country']==country].Occupation, choice[choice['Country']==country].Income).plot(kind='bar',
title='Income VS Occupation in {}'.format(country))
# ### B.1.1- For Asia
Occupation_VS_Income('Asia')
# ### B.1.2- For America
Occupation_VS_Income('America')
# ### B.1.3- For Europe
Occupation_VS_Income('Europe')
# ## B.2- Income VS Workclass for countries in each continent
def Workclass_VS_Income(continent):
choice = df_no_outliers[df_no_outliers['Continent']==continent]
countries = list(choice['Country'].unique())
for country in countries:
pd.crosstab(choice[choice['Country']==country].Workclass, choice[choice['Country']==country].Income).plot(kind='bar',
title='Income VS Workclass in {}'.format(country))
# ### B.2.1- For Asia
Workclass_VS_Income('Asia')
# ### B.2.2- For America
Workclass_VS_Income('America')
# ### B.2.3- For Europe
Workclass_VS_Income('Europe')
# ## B.3- Income VS Marital Status for countries in each continent
def MaritalStatus_VS_Income(continent):
    choice = df_no_outliers[df_no_outliers['Continent']==continent]
    countries = list(choice['Country'].unique())
    for country in countries:
        pd.crosstab(choice[choice['Country']==country]['Martial Status'], choice[choice['Country']==country].Income).plot(kind='bar',
                                                title='Income VS Marital Status in {}'.format(country))
# ### B.3.1- For Asia
MaritalStatus_VS_Income('Asia')
# ## B.4- Mean Capital Gain VS Marital Status for each continent
# To accomplish this task, we will create a new dataframe by grouping on Continent, Country and Marital Status, and taking the **mean value of Capital-Gain**.
# reset_index(): to convert to aggregation result to a pandas dataframe.
agg_df = df_no_outliers.groupby(['Continent','Country', 'Martial Status'])['Capital-Gain'].mean().reset_index()
agg_df = agg_df.rename(columns={'Capital-Gain': 'Mean_Capital_Gain'})
agg_df.head()
import seaborn as sns
def Mean_TotCapital_VS_Marital_Status(continent):
choice = agg_df[agg_df['Continent']==continent]
countries = list(choice['Country'].unique())
for country in countries:
df_c = choice[choice['Country']==country]
ax = sns.catplot(x='Martial Status', y='Mean_Capital_Gain',
kind='bar', data=df_c)
ax.fig.suptitle('Country: {}'.format(country))
ax.fig.autofmt_xdate()
# ### B.4.1- For Asia
Mean_TotCapital_VS_Marital_Status('Asia')
# ### B.4.2- For America
Mean_TotCapital_VS_Marital_Status('America')
# ### B.4.3- For Europe
Mean_TotCapital_VS_Marital_Status('Europe')
# ## 5- Feature Engineering
# This is one of the most crucial steps of a data science project: transforming the raw data into more representative
# features in order to build better predictive models.
#
# #### A- Derived Features
# Sometimes, it is important to perform some transformations on the features/columns in order to reduce the number of original data columns.
# Let's start looking at our columns.
# ##### A.1- Education and Education-Num
edu = df_no_outliers.Education.unique()
eduNum = df_no_outliers['Education-Num'].unique()
print('Education: \nTotal category:{}\nValues: {}\n'.format(len(edu),list(edu)))
print('Education Num: \nTotal Education-Num:{}\nValues: {}'.format(len(eduNum),
list(eduNum)))
# **Education-Num** appears to be the numerical encoding of **Education**: both columns have the same total number of categories (16). So we need only one of them, not both.
# Let's verify this hypothesis by checking that there is a one-to-one correspondence between **Education-Num** and **Education**.
# We can simply visualize the two columns against each other:
ax = sns.catplot(x='Education', y='Education-Num', kind='bar', data=df_no_outliers)
ax.fig.suptitle('Numerical Representation of Educations')
ax.fig.autofmt_xdate()
# From the previous plot, we can see that
# * Bachelor <==> 13
# * HS-grad <==> 9
# * 7th-8th <==> 4
# * 9th <==> 5
# * Preschool <==> 1
# * etc.
# Based on that information, we need only one column to represent the **level of education**; in our case,
# we will keep **Education-Num** (and remove the **Education** column), since it is the numerical representation.
# Finally remove the Education column
df_no_outliers.drop('Education', axis=1, inplace=True)
# ##### A.2- Capital-Loss and Capital-Gain
# From those two features, we can create a new column called **Capital-State** that will be the difference between Capital-Gain and Capital-Loss.
# Then we will remove those two features.
df_no_outliers['Capital-State'] = df_no_outliers['Capital-Gain'] - df_no_outliers['Capital-Loss']
# Then remove Capital-Gain and Capital-Loss.
df_no_outliers.drop(['Capital-Gain', 'Capital-Loss'], axis=1, inplace=True)
'''
Let's not forget to drop the 'Continent' column we added for
visualization purposes.
'''
df_no_outliers.drop('Continent', axis=1, inplace=True)
df_no_outliers.head(3)
# ##### A.3- Age State (Adult or Child)
# A person aged 18 or older is an adult; otherwise he/she is a child.
# AgeState based on Age
df_no_outliers['AgeState'] = np.where(df_no_outliers['Age'] >= 18, 'Adult', 'Child')
# AgeState Counts
df_no_outliers['AgeState'].value_counts()
sns.countplot(x='AgeState', data=df_no_outliers)
# The **fnlwgt** (final sampling weight) column is not an informative feature for this task, so we drop it.
df_no_outliers.drop('fnlwgt', axis=1, inplace=True)
df_no_outliers.head()
# Information about our data
df_no_outliers.info()
# #### B- Categorical Feature encoding
# Machine learning models only work with numerical features, so we need to encode all our categorical features. Those features show up as **object** in the **info** output above.
# We will apply **One-Hot Encoding** to all the categorical features using pandas' **get_dummies()** function.
# The **Income** column is left out, because it is the target we are trying to predict.
# Columns: Workclass, Martial Status, Occupation, Relationship, Race, Sex, Country, AgeState
df_no_outliers = pd.get_dummies(df_no_outliers, columns=['Workclass', 'Martial Status', 'Occupation',
'Relationship', 'Race', 'Sex', 'Country', 'AgeState'])
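# A side note, not part of the original analysis: the full set of dummies for a k-category feature is linearly dependent (the columns sum to 1), so `drop_first=True` is sometimes used to drop one dummy per feature, which can help linear models such as the logistic regression below. A sketch:

```python
import pandas as pd

demo = pd.DataFrame({'Sex': ['Male', 'Female', 'Male']})
full = pd.get_dummies(demo, columns=['Sex'])
reduced = pd.get_dummies(demo, columns=['Sex'], drop_first=True)
print(list(full.columns))     # ['Sex_Female', 'Sex_Male']
print(list(reduced.columns))  # ['Sex_Male']
```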
df_no_outliers['Income'].unique()
'''
1: For those who make more than 50K
0: For those who don't
'''
df_no_outliers['Income'] = np.where(df_no_outliers['Income'] =='>50K', 1, 0)
# Reorder columns : In order to have 'Income' as last feature.
columns = [column for column in df_no_outliers.columns if column != 'Income']
columns = columns + ['Income']
df = df_no_outliers[columns]
# Information about our data
df.info()
# ## 6- Preparation, Models and Evaluation
# #### 6.1- Data Preparation
# We need to split our dataset for training and testing data.
# 80% of the data will be used for training and 20% for testing.
y = df.Income.ravel()
X = df.drop('Income', axis=1).to_numpy().astype('float')  # as_matrix() was removed in pandas 1.0
print('X shape: {} | y shape: {}'.format(X.shape, y.shape))
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
print('X train shape: {} | y shape: {}'.format(X_train.shape, y_train.shape))
print('X test shape: {} | y shape: {}'.format(X_test.shape, y_test.shape))
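# Because only about a quarter of the labels are positive, a stratified split keeps that class ratio identical in both halves. A sketch on toy data; `stratify=y` is the only change from the call above:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X_demo = np.arange(20).reshape(10, 2)
y_demo = np.array([0] * 8 + [1] * 2)  # imbalanced toy labels
Xtr, Xte, ytr, yte = train_test_split(X_demo, y_demo, test_size=0.5,
                                      random_state=0, stratify=y_demo)
print(ytr.sum(), yte.sum())  # 1 1  -> one positive lands in each half
```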
# #### 6.2- Models & Evaluation
# Before building any machine learning model, it is important to build a baseline model first, in order to judge the performance of the upcoming models.
# ##### Baseline Model
from sklearn.dummy import DummyClassifier
dummy_clf = DummyClassifier(strategy='most_frequent', random_state=0)
# Train the model
dummy_clf.fit(X_train, y_train)
print('Score of baseline model : {0:.2f}'.format(dummy_clf.score(X_test, y_test)))
# ##### Logistic Regression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
lr_clf = LogisticRegression(random_state=0, solver='liblinear')  # liblinear supports both 'l1' and 'l2' penalties
parameters = {'C': [1.0, 10.0, 50.0, 100.0, 1000.0], 'penalty': ['l1', 'l2']}
lr_clf = GridSearchCV(lr_clf, param_grid=parameters, cv=3)
lr_clf.fit(X_train, y_train)
lr_clf.best_params_
print('Best score : {0:.2f}'.format(lr_clf.best_score_))
print('Score for logistic regression - on test : {0:.2f}'.format(lr_clf.score(X_test, y_test)))
# ## 7- Next Step
# * Feature Normalization and Standardization
# * Feature selection
# * Use different models: ensemble techniques
#
| eda-feature-engineering-machine-learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial
# This Python notebook demonstrates how OASIS can be used to efficiently evaluate a classifier, based on an example dataset from the entity resolution domain.
# We begin by loading the required packages (including OASIS) and setting the random seeds for reproducibility.
import sys
sys.path.append('D:\\PycharmProjects\\magellan\\oasis')
# +
import numpy as np
import random
import oasis
import matplotlib.pyplot as plt
# %matplotlib inline
np.random.seed(319158)
random.seed(319158)
# -
# ## Example dataset
# The dataset we shall use for this tutorial is derived from the `Amazon-GoogleProducts` dataset available from [here](http://dbs.uni-leipzig.de/en/research/projects/object_matching/fever/benchmark_datasets_for_entity_resolution). It is described in the following publication:
#
# > <NAME>, <NAME>, and <NAME>. "Evaluation of entity resolution approaches on real-world match problems." *Proceedings of the VLDB Endowment* 3.1-2 (2010): 484-493.
#
# The dataset consists of product listings from two e-commerce websites: *Amazon* and *Google Products* (which no longer exists as of 2017). Our goal is to train a classifier to identify pairs of records across the two data sources which refer to the same products. This involves forming the cross join of the two data sources and classifying each pair of records as a "match" or "non-match". Since the focus of this notebook is evaluation, we shall not demonstrate how to build the classifier here. Instead, we shall load the data from a classifier we prepared earlier.
# ### Loading the data
# Using our pre-trained classifier, we calculated predictions and scores on a test set containing 676,267 record pairs. The data is stored in HDF5 format and is available in the GitHub repository.
#
# Below, we make use of the ``Data`` class in the OASIS package to read the HDF file into memory.
data = oasis.Data()
data.read_h5('Amazon-GoogleProducts-test.h5')
data.calc_true_performance() #: calculate true precision, recall, F1-score
# ## Evaluating the classifier
# Our goal is to estimate the F1-score of the classifier by sequentially labelling items in the test set. This example is somewhat contrived since we already know the ground truth labels (they are included with the test set). However, we can simulate the labelling by defining an oracle which looks up the labels as follows:
def oracle(idx):
return data.labels[idx]
# In the following experiments, we shall adopt the parameter settings below:
alpha = 0.5 #: corresponds to F1-score
n_labels = 5000 #: stop sampling after querying this number of labels
max_iter = 1e6 #: maximum no. of iterations that can be stored
# ### OASIS
# Here we use the OASIS method to estimate the F1-score. The first step is to initialise the sampler.
smplr = oasis.OASISSampler(alpha, data.preds, data.scores, oracle, max_iter=max_iter)
# Next we query ``n_labels`` sequentially.
smplr.sample_distinct(n_labels)
# Finally, we plot the history of estimates to check for convergence. Since we already know the true value of the F1-score for this example (because we were given all of the labels in advance), we have indicated it on the plot using a red line.
# +
def plt_estimates(smplr, true_value):
plt.plot(smplr.estimate_[smplr.queried_oracle_])
plt.axhline(y=true_value, color='r')
plt.xlabel("Label budget")
plt.ylabel("Estimate of F1-score")
plt.ylim(0,1)
plt.show()
plt_estimates(smplr, data.F1_measure)
# -
# ### Other samplers
# For comparison, we repeat the evaluation using two alternative sampling methods available in the OASIS package.
# First, we test the basic passive sampling method. It performs poorly due to the extreme class imbalance. Of the 5000 labels queried, none of them correspond to a true positive, yielding an incorrect estimate for the F1-score of 0.0.
pass_smplr = oasis.PassiveSampler(alpha, data.preds, oracle, max_iter=max_iter)
pass_smplr.sample_distinct(n_labels)
plt_estimates(pass_smplr, data.F1_measure)
# The non-adaptive importance sampling method fares better, yielding a decent estimate after consuming 5000 labels. However, it takes longer to converge than OASIS.
is_smplr = oasis.ImportanceSampler(alpha, data.preds, data.scores, oracle, max_iter=max_iter)
is_smplr.sample_distinct(n_labels)
plt_estimates(is_smplr, data.F1_measure)
| docs/tutorial/tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/take_all.png" height="300" width="1200">
#
# # <center> Data Science <br> <br> Collecting data in Python </center>
#
# ## Agenda
#
# * The basics of all basics
# * What to do when the server gets angry
# * What an API is
# * What Selenium is
# * Tricks
# # 1. The basics of all basics
#
# To master the basics of all basics, read [this Habr article.](https://habr.com/ru/company/ods/blog/346632/) One of your instructors is among the co-authors, which hints that the content is solid.
# ## Why collect data automatically?
#
# <br>
#
# <br>
#
# <center>
# <img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/aaaaaa.png" width="500">
# ## What is HTML?
#
# **HTML (HyperText Markup Language)** is a markup language, just like Markdown or LaTeX. It is the standard language for writing websites. Commands in this language are called **tags**. Open absolutely any site, right-click, then click `View page source`, and the HTML skeleton of that site will appear before you.
#
#
# An HTML page is nothing more than a set of nested tags. You may notice, for example, the following tags:
#
# - `<title>` – the page title
# - `<h1>…<h6>` – headings of different levels
# - `<p>` – a paragraph
# - `<div>` – marks out a fragment of the document in order to style its content
# - `<table>` – draws a table
# - `<tr>` – delimiter for table rows
# - `<td>` – delimiter for table columns
# - `<b>` – sets a bold font
#
# Usually the command `<...>` opens a tag and `</...>` closes it. Everything between these two commands obeys the rule that the tag dictates. For example, everything between `<p>` and `</p>` is a separate paragraph.
#
# Tags form a kind of tree, rooted at the `<html>` tag, that splits the page into logical pieces. Each tag has its descendants (children), the tags nested inside it, and its parents.
#
# For example, the HTML tree of a page might look like this:
#
#
# ````
# <html>
# <head> Title </head>
# <body>
# <div>
# First chunk of text with its own properties
# </div>
# <div>
# Second chunk of text
# <b>
# Third chunk, in bold
# </b>
# </div>
# Fourth chunk of text
# </body>
# </html>
# ````
#
# You can work with this HTML as plain text, or as a tree. Traversing that tree is what parsing a web page is. We will simply locate the nodes we need among all this variety and pull the information out of them!
#
# <center>
# <img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/tree.png" width="450">
# ## Scraping book prices
#
# * We want to collect [book prices](http://books.toscrape.com)
# * Doing it by hand takes too long, so we will write some Python code
#
# The requests module provides access to web pages. Let's import it. If you don't have this module installed, you will have to make the effort and install it: `pip install requests`.
# +
import requests
url = 'http://books.toscrape.com/catalogue/page-1.html'
response = requests.get(url)
response
# -
# The blessed 200 response: the connection was established and the data received — all is well! If you try to open a page that doesn't exist, you can get, for example, the famous 404 error.
requests.get('http://books.toscrape.com/big_scholarship')
# Inside `response` lies the HTML markup of the page we are parsing.
response.content[:1000]
# Looks indigestible. How about we cook something prettier out of it? A beautiful soup, say.
#
# <img align="center" src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/alisa.jpg" height="200" width="200">
#
# The **[`bs4`](https://www.crummy.com/software/BeautifulSoup/)** package, a.k.a. **BeautifulSoup**, was named after the poem about beautiful soup from Alice in Wonderland. It is a truly magical library that turns the raw, unprocessed HTML (or XML) code of a page into a structured collection of data, in which it is very convenient to search for the tags, classes, attributes, text and other elements of web pages that you need.
#
# > The package called `BeautifulSoup` is most likely not the one you want. That is the third version (*Beautiful Soup 3*), while we will use the fourth, so the package we need is `beautifulsoup4`. To make things extra fun, at import time you have to use yet another name, `bs4`, and import a function called `BeautifulSoup`. In short, it's easy to get confused at first, but these difficulties only need to be overcome once — after that it gets easier.
# +
from bs4 import BeautifulSoup
# parse the page into a tree
tree = BeautifulSoup(response.content, 'html.parser')
# -
# The variable `tree` now holds a tree of tags that we can wander around quite freely.
tree.html.head.title
# We can pull the text out of wherever we've wandered to via the `text` attribute.
tree.html.head.title.text
# The text can be handled with the usual Python string methods. For example, we can strip the extra whitespace.
tree.html.head.title.text.strip()
# Better still, if we know an element's address, we can find it directly. For example, this is how we can find in the page source where each book's main information lives. It sits inside an `article` tag with the class `product_pod` (roughly speaking, in HTML a class defines the styling of the corresponding piece of the page).
#
# Let's pull the book info out of this tag.
books = tree.find_all('article', {'class' : 'product_pod'})
books[0]
# The object returned by the search is itself a bs4 structure, so we can keep searching for the objects we need inside it.
type(books[0])
books[0].find('p', {'class': 'price_color'}).text
# Note that there are at least two search methods: `find` and `find_all`. If several elements on the page share the given address, `find` returns only the first one. To find all the elements at that address, use `find_all`; it returns a list.
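# To make the difference concrete, here is a tiny self-contained sketch (the HTML string below is made up for illustration):

```python
from bs4 import BeautifulSoup

html = """
<div>
  <p class="price_color">£10.00</p>
  <p class="price_color">£20.00</p>
</div>
"""
soup = BeautifulSoup(html, 'html.parser')

first = soup.find('p', {'class': 'price_color'})           # only the first match
all_prices = soup.find_all('p', {'class': 'price_color'})  # a list of every match

print(first.text)
print(len(all_prices))
```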
#
# Besides their contents, tags often have attributes. For example, the book title element has the attributes `title` and `href`:
books[0].h3
# These can be pulled out too.
books[0].h3.a.get('href')
books[0].h3.a.get('title')
# And we can also use these attributes to search for the pieces of the page we care about.
tree.find_all('a', {'title': 'A Light in the Attic'})
# That's really all there is to it.
#
# Note that on this site the books are spread across several pages. If you click through them, you'll notice that the `page` attribute changes in the URL. So, to collect every book, we have to generate a bunch of URLs with different values of `page` inside a loop. When you scrape more complex sites, the URL often carries a huge number of attributes that control what gets returned.
#
# Let's wrap all of the book-collecting code into a function. It will take the number of the page to download as input.
def get_page(p):
    # build the URL
    url = 'http://books.toscrape.com/catalogue/page-{}.html'.format(p)
    # fetch it
    response = requests.get(url)
    # parse it into a tree
    tree = BeautifulSoup(response.content, 'html.parser')
    # find everything of interest in it
    books = tree.find_all('article', {'class': 'product_pod'})
    infa = []
    for book in books:
        infa.append({'price': book.find('p', {'class': 'price_color'}).text,
                     'href': book.h3.a.get('href'),
                     'title': book.h3.a.get('title')})
    return infa
# All that's left is to loop over the pages from page-1 to page-50, and the data is in our pocket.
# +
infa = []
for p in range(1,51):
infa.extend(get_page(p))
# +
import pandas as pd
df = pd.DataFrame(infa)
print(df.shape)
df.head()
# -
# By the way, if you follow a book's link, there is a pile of extra information about it on its own page. You can walk through all the links and download that extra information for yourself.
# # 2. What to do when the server gets angry
#
# * You decided to collect a bit of data for yourself
# * The server is not thrilled about being carpet-bombed with automated requests
# * Error 403, 404, 504, $\ldots$
# * Captchas, demands to register
# * Caring messages that suspicious traffic has been detected coming from your device
#
# <center>
# <img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/doge.jpg" width="450">
# ## a) Be patient
#
# * Requests that come too often irritate the server
# * Put time delays between them
import time
time.sleep(3) # and let the whole world wait for 3 seconds
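# A fixed pause works, but identical delays are easy for a server to fingerprint; adding a little random jitter looks more human. A small sketch:

```python
import random
import time

# sleep somewhere between 1 and 3 seconds instead of a fixed interval
delay = 1 + 2 * random.random()
time.sleep(delay)
```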
# ## b) Look like a human
#
#
# A normal person's request through a browser looks like this:
#
# <center>
# <img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/browser_get.png" width="600">
#
# A pile of information reaches the server along with it! A request from Python looks like this:
#
#
# <center>
# <img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/python_get.jpg" width="250">
#
# Notice the difference? Obviously, our humble request is no match for the abundance of metadata sent with a request from an ordinary browser. Luckily, nothing stops us from pretending to be human and throwing dust in the server's eyes by generating a fake user agent. There are many, many libraries that handle this task; personally I like [fake-useragent](https://pypi.org/project/fake-useragent/) best. On each call it assembles a random combination of operating system, specifications and browser version out of its various pieces, which you can then pass along with your request:
# !pip install fake_useragent
from fake_useragent import UserAgent
UserAgent().chrome
# For example, https://knowyourmeme.com/ will not want to let Python in and will return a 403 error. A server sends it when it is up and able to process requests but, for reasons of its own, refuses to do so.
# +
url = 'https://knowyourmeme.com/'
response = requests.get(url)
response
# -
# But if we generate a User-Agent, the server will have no questions for us.
response = requests.get(url, headers={'User-Agent': UserAgent().chrome})
response
# __Another example:__ if you try to scrape CIAN (a real-estate listings site), it will start serving you captchas. One way around that is to change your IP via Tor. However, CIAN will show a captcha for practically every request coming out of Tor, whereas if you add a `User-Agent` to the request, captchas come up far less often.
# ## c) Work through intermediaries
#
# <center>
# <img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/proxy.jpeg" width="400">
# Let's look at our own IP address without a proxy.
r = requests.get('https://httpbin.org/ip')
print(r.json())
# Now let's see what happens when we connect through a proxy.
# +
proxies = {
'http': '192.168.3.11:47592',
'https': '192.168.3.11:47592'
}
r = requests.get('https://httpbin.org/ip', proxies=proxies)
print(r.json())
# -
# The request took a bit longer, and the IP address changed. Most of the proxies you will find work poorly. Sometimes a request drags on for a very long time and it pays to drop it and try another proxy. This is configured with the `timeout` option. For example, with the line below, if the server does not respond within a second, the code raises an error.
import requests
requests.get('http://www.google.com', timeout=1)
# The requests library has quite a few other interesting extras. You can look at them in [the advanced guide from the documentation.](https://requests.readthedocs.io/en/master/user/advanced/)
#
#
# __Where you can try to get lists of proxies:__
#
# * https://qna.habr.com/q/591069
# * https://getfreeproxylists.blogspot.com/
# * Most free proxies usually don't work. Write a parser that collects lists of proxies and tries to use them one by one.
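# A minimal sketch of that idea: walk a list of candidate proxies and return the first one that actually answers. The function name and the "treat any error as a dead proxy" policy are my own choices here, not a library API:

```python
import requests

def fetch_via_proxies(url, proxy_list, timeout=1):
    """Try each proxy in turn; return the first successful response, else None."""
    for proxy in proxy_list:
        try:
            r = requests.get(url,
                             proxies={'http': proxy, 'https': proxy},
                             timeout=timeout)
            r.raise_for_status()
            return r
        except requests.RequestException:
            continue  # this proxy is dead or too slow — move on to the next one
    return None
```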
# ## d) Go deeper
#
# <center>
# <img src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/tor.jpg" width="600">
#
# You can try to get around angry servers via Tor. There are several ways to do it, but we won't talk about them here. It's better to read the details in the article on Habr. The link to it is at the end of this notebook. It was at the very beginning too. And there's [certainly one somewhere in the middle.](https://habr.com/ru/company/ods/blog/346632/)
# ## Combine everything?
#
# 1. Start small
# 2. If you keep getting banned, add more tricks
# 3. Every extra trick costs you speed
# 4. [Various extras for requests](http://docs.python-requests.org/en/v0.10.6/user/advanced/)
# # 3. API
#
# An **API (Application Programming Interface)** is a ready-made interface that you can plug straight into your own code. Many services, including Google and VK, provide such ready-made entry points for your projects.
#
# Examples:
#
# * [VK API](https://vk.com/dev/methods)
# * [Twitter API](https://developer.twitter.com/en/docs.html)
# * [YouTube API](https://developers.google.com/youtube/v3/)
# * [Google Maps API](https://developers.google.com/maps/documentation/)
# * [Aviasales](https://www.aviasales.ru/API)
# * [Yandex Translate](https://yandex.ru/dev/translate/)
#
# There's one almost everywhere! In this session we will look at two examples: the VK API and Google Maps.
# ## 3.1 The VK API
#
# Why you might want access to the VK API hardly needs explaining. A social network is tons of useful information of all kinds that you can put to work for your own purposes. [The documentation](https://vk.com/dev/manuals) describes in great detail how to work with the VK API and what it gets you.
#
# But first we need to obtain access to the API. For that we'll have to go through a couple of bureaucratic procedures (oh my, those two sentences were worded so bureaucratically that I felt like standing in a queue).
#
# The first such procedure is creating your own application. Go to [this link](http://vk.com/editapp?act=create) and walk through the required steps:
#
# <img align="center" src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/app_creation_1.png" width="500">
#
# After confirming your identity by phone number, you land on the page of your freshly created application:
#
# <img align="center" src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/app_creation_2.png" width="500">
#
# On the left there is a settings tab; opening it, we see all the parameters we need for working with the application:
# <img align="center" src="https://raw.githubusercontent.com/hse-econ-data-science/eds_spring_2020/master/sem05_parsing/image/app_creation_3.png" width="500">
#
# From here you can take the service access key to use as your token. For a fair share of the API methods that is quite enough (such a method usually carries a corresponding note in its header). Sometimes extra permissions are needed. To get them, you have to perform another couple of odd manipulations:
#
# Go to a URL of the following form (the asterisks should be replaced with the ID of the application you created):
#
# > https://oauth.vk.com/authorize?client_id=**********&scope=8198&redirect_uri=https://oauth.vk.com/blank.html&display=page&v=5.16&response_type=token
#
# As a result, this request produces a link of the following form:
# > https://oauth.vk.com/blank.html#access_token=<PASSWORD>&expires_in=<PASSWORD>&user_id=*******
#
# The first group of characters is the `access_token`, i.e. the access token. The second value (`expires_in=`) is the token's lifetime in seconds (one day). Once the day is over, you will have to obtain a new access token. The last value (`user_id=`) is your VK ID. Later on we will need the access token. For convenience, save it in a separate file or export it into the global scope. For the safety of your data, don't flash your tokens anywhere, and especially don't publish them: __that's a good way to accidentally lose your account.__ Guard your token from an early age.
#
# Note the strange parameter `scope=8198` inside the URL we used to request the token. This is us asking for access to specific sections. You can study the one-to-one correspondence between numbers and permissions [in the documentation.](https://vk.com/dev/permissions) For example, if we want access to friends, photos and walls, we plug the number 2 + 4 + 8192 = 8198 into `scope`.
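# The arithmetic behind that number, as a sanity check (the bit values are the ones quoted above from the permissions table):

```python
FRIENDS, PHOTOS, WALL = 2, 4, 8192  # permission bits from the VK docs

scope = FRIENDS + PHOTOS + WALL
print(scope)
```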
# +
# my VK page ID
myid = ''  # paste your page ID here
# version of the API in use
version = '5.103'
# load the token from a file on disk
with open('secret_token.txt') as f:
token = f.read()
# -
# To download something from VK, we build a URL and fetch it with the `requests` package. The URL includes a method (what we are asking VK for) and parameters (how much of it and in what form). We will simply swap these two parts in and out to download different things.
# +
method = 'users.get'
parameters = 'user_ids='
url = 'https://api.vk.com/method/' + method + '?' + parameters + '&v=' + version + '&access_token=' + token
response = requests.get(url)
response.json()
# -
# In response to our request, VK throws back a JSON with the information. JSON looks very much like Python dictionaries; the meaning of the square and curly brackets is the same. There are differences, though: for example, in Python single and double quotes are interchangeable, while JSON allows only double quotes.
#
# We can see that the JSON we received is a dictionary whose values are strings or numbers, as well as lists or dictionaries whose values may in turn also be strings, numbers, lists, dictionaries and so on. The result is a fairly elaborate data structure, from which we can extract everything that interests us.
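# The standard `json` module does the same conversion by hand, if you ever need it. A toy payload (the names below are made up, not real API output):

```python
import json

payload = '{"response": [{"id": 1, "first_name": "Pavel"}]}'  # note: double quotes only
data = json.loads(payload)  # JSON text -> Python dict

print(data['response'][0]['first_name'])
```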
response.json()['response'][0]['first_name']
# [The documentation](https://vk.com/dev/manuals) describes in great detail which methods exist and which parameters they take. Let's wrap the code above into a function and try to download something.
def vk_download(method, parameters):
url = 'https://api.vk.com/method/' + method + '?' + parameters + '&access_token=' + token + '&v=' + version
response = requests.get(url)
infa = response.json()
return infa
# For example, all the likes from the [higher school of memes public page.](https://vk.com/hsemem)
group_id = '-51126445'  # taken from the group's URL
wall = vk_download('wall.get', 'owner_id={}&count=100'.format(group_id))
wall = wall['response']
wall['items'][0]
wall['items'][0]['likes']['count']
likes = [item['likes']['count'] for item in wall['items']]
likes[:10]
# A single request downloaded a mere $100$ posts with their likes. The public page has this many in total:
wall['count']
# [The documentation](https://vk.com/dev/manuals) says there is an `offset` parameter that specifies which of the group's posts to download. For example, if we set `offset = 100`, the second hundred is downloaded. All that's left for us is to write a loop.
# +
import time
likes = []  # the likes will be collected here
for offset in range(0, 4800, 100):
    time.sleep(0.4)  # VK agrees to work 3 times per second,
                     # so python sleeps 0.4 seconds between requests
wall = vk_download('wall.get', 'owner_id={}&count=100&offset={}'.format(group_id, offset))
likes.extend([item['likes']['count'] for item in wall['response']['items']])
# -
# The likes are in our hands. We can even look at their distribution and try to do something with them.
len(likes)
import matplotlib.pyplot as plt
plt.hist(likes);
# In principle, you can download anything in much the same way. Note that VK has a special [`execute`](https://vk.com/dev/execute) method that can sometimes speed the download up by a factor of $25$. [This very old tutorial](https://github.com/DmitrySerg/OpenData/blob/master/RussianElections2018/Part_1_Parsing_VK.ipynb) even has an example of its use.
# +
# In-class exercise
import requests
url = 'https://www.imdb.com/search/title/?count=100&groups=top_1000&sort=user_rating'  # save the URL
page = requests.get(url)  # download the page at this URL
page.text
from bs4 import BeautifulSoup
soup = BeautifulSoup(page.text, 'lxml')
# inspecting the element shows that each film title sits inside an <a> within an <h3>
# grab the list of all the films, dropping the last element, since the last <h3> holds something unrelated
[i.find_all('a')[0].get_text() for i in soup.find_all('h3')[:-1]]
# extract the actors
s = soup.find_all('p', {'class': ''})[0].text
import re
directors = re.split(r'Stars:', soup.find_all('p', {'class': ''})[0].text)[0]
actors = re.split(r'Stars:', soup.find_all('p', {'class': ''})[0].text)[1].replace('\n', ' ')
# the rating
soup.find_all('div', {'class':"inline-block ratings-imdb-rating"})[0].find('strong').text
# the number of votes
soup.find_all('span', {'name':"nv"})[0]
#print(soup.prettify())
# -
# ## 3.2 The Google Maps API
#
# An API for maps can come in handy for all sorts of semi-geographical research. For example, suppose we want to test the hypothesis that good coffee raises the price of an apartment, with the number of coffee shops in the neighbourhood as one of the regressors. That count of coffee shops has to come from somewhere. Google Maps to the rescue!
#
# Again, everything starts with [getting a key.](https://developers.google.com/maps/documentation/directions/start) Here it is much simpler: follow the link, press Get started, agree to everything except the billing. Get the access key and save it into a file next to this notebook.
# load the token
with open('google_token.txt') as f:
google_token = f.read()
# We build the request URL following the precepts of [the documentation](https://developers.google.com/maps/documentation) and get the answer back as JSON.
# +
mainpage = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json?'
location = '55.86,37.54'
radius = '3000'
keyword = 'кофейня'
parameters = 'location='+location+'&radius='+radius+'&keyword='+keyword+'&language=ru-Ru'+'&key='+ google_token
itog_url = mainpage + parameters
itog_url
# +
response = requests.get(itog_url)
response.json()['results'][0]
# -
# We pull the most interesting parts out of the JSON by the corresponding keys. For example, the names of the establishments:
[item['name'] for item in response.json()['results']]
# [This old, unfinished guide](https://nbviewer.jupyter.org/github/FUlyankin/Parsers/blob/master/Parsers%20/Google_maps_API.ipynb) has a couple more examples of working with Google Maps.
# # 4. Selenium
#
# Selenium is a tool for driving a browser robotically. For it to work correctly you need to download a driver: [for Chrome](https://sites.google.com/a/chromium.org/chromedriver/downloads) or [for Firefox.](https://github.com/mozilla/geckodriver/releases)
# !pip install webdriver-manager
# +
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
driver = webdriver.Chrome(ChromeDriverManager().install())
# -
# After running the block above, another browser window opens. Let's go to Google's start page in it.
ref = 'http://google.com'
driver.get(ref)
# Find the query input box in the HTML code and click on it.
stroka = driver.find_element_by_name("q")
stroka.click()
# Type something into it.
stroka.send_keys('Вконтакте')
# Find the search button and press it.
# find the button for googling and press it
button = driver.find_element_by_name('btnK')
button.click()
# The page now holds the search results. Let's hand them over to bs4 and find all the sites.
# +
from bs4 import BeautifulSoup
bs = BeautifulSoup(driver.page_source, 'html.parser')
# every search result lives in a div with class 'g'
dirty_hrefs = bs.find_all('div', attrs={'class': 'g'})
clean_hrefs = [div.find('a').get('href') for div in dirty_hrefs]
clean_hrefs
# -
# Close the browser.
driver.close()
# Selenium was invented for testers, not for scraping, and for parsers it only makes sense as a last resort: it is very slow. If you really, really, really cannot fool the server through requests, or you run into some specific anti-bot protection, selenium can help. Also, with selenium it is __important__ not to forget time delays so the page has time to load. Alternatively, you can write proper code that waits for the page to load and only then clicks the buttons and so on.
#
# There is a [Russian translation of the documentation on Habr.](https://habr.com/ru/post/248559/)
#
# In my practice it has been useful a couple of times:
#
# * I had to download a lot of information about search queries from [Google Trends,](https://trends.google.ru/trends/?geo=RU) and the API limited me badly.
# * I had to use a search engine to find the tax ID (INN) of various organizations from their names (it only worked for large companies)
# # 5. Tricks
#
# ### Trick 1: Don't be shy about using `try-except`
#
# This construct lets Python do something else when an error occurs, or simply ignore it. For example, say we want to take the logarithm of every number in a list:
# +
from math import log
a = [1,2,3,-1,-5,10,3]
for item in a:
print(log(item))
# -
# It doesn't work, because the logarithm of a negative number is undefined. To keep the code from crashing when the error occurs, we can change it slightly:
# +
from math import log
a = [1,2,3,-1,-5,10,3]
for item in a:
try:
        print(log(item))  # try to take the logarithm
    except:
        print("couldn't do it")  # if it failed, own up and keep working
# -
# __How is this useful for scraping?__ The internet is made by people, and many people have very crooked hands. Suppose we left a parser downloading prices overnight; it ran for an hour and then crashed because on some single page the tags were laid out crookedly, or a rare field popped up, or there were artifacts of an old version of the site that our parser didn't account for. It is far better for the code to ignore that one error and keep working.
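# A sketch of that pattern for a scraping loop (the "pages" here are fake strings standing in for downloaded HTML):

```python
pages = ['<p>£10.0</p>', '<broken page', '<p>£7.5</p>']

prices, failed = [], []
for i, page in enumerate(pages):
    try:
        # the fragile parsing step that occasionally blows up
        price = float(page.split('£')[1].split('<')[0])
        prices.append(price)
    except (IndexError, ValueError):
        failed.append(i)  # remember what broke, but keep going

print(prices)
print(failed)
```

After the run you can inspect `failed` and re-scrape just the problem pages.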
# ### Trick 2: pd.read_html
#
# If a table is hiding among the `<tr>` and `<td>` tags of a page you scraped, you can most often grab it without writing a loop over all the columns and rows. `pd.read_html` will help. For example, this is how you can grab [a table from the Central Bank's site](https://cbr.ru/currency_base/daily/)
# +
import pandas as pd
df = pd.read_html('https://cbr.ru/currency_base/daily/', header=None)[0]
df.head()
# -
# The command tries to collect every table on the web page into an array. If you like, you can first locate the table you need via bs4, and only then parse it:
# +
import requests
resp = requests.get('https://cbr.ru/currency_base/daily/')
tree = BeautifulSoup(resp.content, 'html.parser')
# find the table
table = tree.find_all('table', {'class' : 'data'})[0]
# parse it
df = pd.read_html(str(table), header=None)[0]
df.head()
# -
# ### Trick 3: use the tqdm package
#
# > The code has been running for an hour. I have no idea when it will finish. It would be great to know how much longer there is to wait...
#
# If that thought has crossed your mind, the `tqdm` package is your best friend. Install it: ```pip install tqdm```
# !pip install tqdm
# +
from tqdm import tqdm_notebook
import time
a = list(range(30))
# we will sleep for one second, 30 times
for i in tqdm_notebook(a):
time.sleep(1)
# -
# We wrapped the vector the loop iterates over in `tqdm_notebook`. That gives us a nice green bar showing how far through the code we have got. Wrap your biggest, longest loops in `tqdm_notebook` and always know how much is left until the end.
# ### Trick 4: parallelization
#
# If the server is not too keen on banning you, you can parallelize your requests to it. The simplest way to do that is the `joblib` library.
# +
from joblib import Parallel, delayed
from tqdm import tqdm_notebook
def simple_function(x):
return x**2
result = []
for i in range(10000000):
result.append(simple_function(i))
nj = -1  # parallelize across all cores
resultP = Parallel(n_jobs=nj)(
    delayed(simple_function)(item)               # which function to apply
    for item in tqdm_notebook(range(10000000)))  # and which objects to apply it to
# the tqdm_notebook in the last line draws a green progress bar
#resultP
# -
# Actually, this is not the most efficient way to parallelize in Python: it eats a lot of memory and runs slower than [the standard multiprocessing module.](https://docs.python.org/3/library/multiprocessing.html) But it's two lines, CARL! Two lines!
# ### Trick 5: selenium without a browser
#
# Selenium can be configured so that no browser window physically opens.
# +
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
options = Options()
options.headless = True
driver = webdriver.Firefox(options=options)
ref = 'http://google.com'
driver.get(ref)
driver.close()
# -
# ### More tricks:
#
# * __Save what you scrape as you go!__ Put the code that saves the file right inside the loop!
# * When the code crashes in the middle of the download list, you don't have to restart it from the very beginning. Just save the chunk that has already been downloaded and relaunch the code from the point of failure.
# * Putting the link-crawling loop inside a function is not the best idea. Suppose we need to crawl $100$ links, and the function is supposed to return the objects downloaded from all of them. It goes and crashes on object $50$. Naturally, the function does not return what it had already downloaded: everything you fetched is lost and you have to start again. Why? Because a function has its own namespace. If you had done it with a plain loop instead, you could have saved the first $50$ objects already sitting in the list and then resumed the download.
# * You can navigate around an HTML page with `xpath`. It is designed for quickly locating elements inside an HTML page. [You can read more about it here.](https://devhints.io/xpath)
# * Don't be lazy about leafing through the documentation. You can learn a lot of useful things from it.
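# The first two tips can be sketched like this: append each batch to a csv as soon as it is scraped, so a crash costs you nothing already saved. The file name and columns here are just an example:

```python
import csv
import os

def save_batch(rows, path='books.csv'):
    """Append one page's worth of scraped rows to a csv, writing the header once."""
    write_header = not os.path.exists(path)
    with open(path, 'a', newline='', encoding='utf-8') as f:
        writer = csv.DictWriter(f, fieldnames=['title', 'price', 'href'])
        if write_header:
            writer.writeheader()
        writer.writerows(rows)
```

Inside the scraping loop you would call `save_batch` right after each page, and on restart count the saved rows to work out which page to resume from.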
# # 6. Further reading
#
# * [Scraping memes in Python](https://habr.com/ru/company/ods/blog/346632/) - a detailed Habr article from which you can learn ... to scrape (SURPRISE)
# * [Notebooks by <NAME>](https://github.com/ischurov/pythonhse) on Python for data analysis. [Lecture 9](https://nbviewer.jupyter.org/github/ischurov/pythonhse/blob/master/Lecture%209.ipynb) and [lecture 10](https://nbviewer.jupyter.org/github/ischurov/pythonhse/blob/master/Lecture%2010.ipynb) cover parsers.
# * [A book on web scraping](https://github.com/FUlyankin/Parsers/blob/master/Ryan_Mitchell_Web_Scraping_with_Python-_Collecting_Data_from_the_Modern_Web_2015.pdf) in case you're really bored and want to read something long and in English
# * [Advanced usage of requests](https://2.python-requests.org/en/master/user/advanced/)
# * [Russian translation of the selenium documentation on Habr](https://habr.com/ru/post/248559/)
#
#
# * [The parsers page of one of the seminar instructors,](https://fulyankin.github.io/Parsers/) much of it unfinished and crooked, but there are interesting bits:
#     * [More detail on selenium](https://nbviewer.jupyter.org/github/FUlyankin/Parsers/blob/master/sems/3_Selenium_and_Tor/4.1%20Selenium%20.ipynb)
#     * [A slightly outdated guide to scraping VK](https://nbviewer.jupyter.org/github/FUlyankin/ekanam_grand_research/blob/master/0.%20vk_parser_tutorial.ipynb)
#     * [A slightly outdated guide to Google Maps](https://nbviewer.jupyter.org/github/FUlyankin/Parsers/blob/master/Parsers%20/Google_maps_API.ipynb)
#
#
#
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Packages
import re
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
# +
# Load in movie metadata
data = pd.read_csv("movie_metadata.csv")
# Use only rows with no data missing
data = data.dropna()
# Use only numeric data
data = data.select_dtypes(include=[np.number])
# Separate gross from data
labels = data['gross']
data = data.drop(columns=['gross'])
data = data.values
# +
# Constants
minLoss = 1e-5   # minimum loss
bestCLF = None   # best model
bestI = 0        # best iteration

# Lists
allCLFs = []  # all the fitted models

def run_reg(regularization):
    """Runs ridge regression for each regularization strength in the array.
    Instantiates the model, splits the data, fits the regression, and
    collects the R2 score (coefficient of determination) on the test split.
    Also records every fitted model in allCLFs.

    Inputs:
        regularization: array of n alpha values
    Outputs:
        r2: list of the n R2 scores
    """
    r2 = []  # local list, so repeated calls don't accumulate old scores
    for alpha in regularization:
        # instantiate the model at each alpha; use SVD instead of auto
        # (avoids ill-conditioned warnings)
        reg = linear_model.Ridge(alpha=alpha, solver='svd')
        # split the training and testing data up
        xtrain, xtest, ytrain, ytest = train_test_split(data, labels, test_size=0.1)
        reg.fit(xtrain, ytrain)
        r2.append(reg.score(xtest, ytest))
        allCLFs.append(reg)
    return r2
# -
# Plot of R^2 vs alpha with 9 different alphas
regularization = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
r2 = run_reg(regularization)
plt.plot(regularization, r2)
plt.xlabel('Regularization alpha')
plt.ylabel('R^2')
plt.title('Ridge Regression R-Squared')
# Plot of R^2 vs alpha with 100 different alphas
regularization = np.linspace(0, 1, num=100)
r2 = run_reg(regularization)
plt.plot(regularization, r2)
plt.xlabel('Regularization alpha')
plt.ylabel('R^2')
plt.title('Ridge Regression R-Squared')
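# Because each call re-splits the data at random, the R^2 curve above is noisy from run to run; cross-validation averages that noise out. A self-contained sketch on synthetic data (the toy X, y below stand in for the movie features, so the sketch runs on its own):

```python
import numpy as np
from sklearn import linear_model
from sklearn.model_selection import cross_val_score

# toy stand-in data with a strong linear signal and a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)

# five-fold cross-validated R^2 for one regularization strength
scores = cross_val_score(linear_model.Ridge(alpha=0.5, solver='svd'),
                         X, y, cv=5, scoring='r2')
print(scores.mean())
```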
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: torch
# language: python
# name: torch
# ---
# +
import numpy as np
import pandas as pd
# viz
import matplotlib.pyplot as plt
# notebook settings
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
pd.set_option('display.max_columns', 1000)
# -
# ## Sample Prep
samples = pd.read_csv('../data/TCGA/rna-seq_pan/meta/gdc_sample_sheet.2019-12-12.tsv', sep="\t")
# get file type
samples['data'] = samples['File Name'].str.split(".").str[1]
samples.head()
# Samples with RNAseq adjacent normal tissue
samples[samples['Sample Type']=='Solid Tissue Normal']['data'].value_counts()
samples['project'] = samples['Project ID'].str.split("-").str[1]
samples['project'].value_counts()
# all cases with adjacent normal tissue
cases = samples[samples['Sample Type']=='Solid Tissue Normal']['Case ID']
# disparity in cases
samples[(samples['Case ID'].isin(cases)) & (samples['Sample Type']=='Primary Tumor') & (samples['data']=='FPKM')]['Case ID'].nunique()
samples[(samples['Case ID'].isin(cases)) & (samples['Sample Type']=='Solid Tissue Normal') & (samples['data']=='FPKM')]['Case ID'].nunique()
# divide, join, subset
case_tumor = samples[(samples['Case ID'].isin(cases)) & (samples['Sample Type']=='Primary Tumor') &
(samples['data']=='FPKM')]
case_norm = samples[(samples['Case ID'].isin(cases)) & (samples['Sample Type']=='Solid Tissue Normal') &
(samples['data']=='FPKM')]
cases = case_norm[case_norm['Case ID'].isin(case_tumor['Case ID'])]['Case ID']
cases.shape
case_tumor = case_tumor[case_tumor['Case ID'].isin(cases)]
case_norm = case_norm[case_norm['Case ID'].isin(cases)]
cases = pd.concat([case_tumor, case_norm])
case_tumor.shape
case_norm.shape
cases.shape
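The divide/join/subset logic above keeps only cases that have both a tumor and a normal sample. The same pattern on a toy frame (sample values are invented) makes the intersection explicit:

```python
import pandas as pd

# Toy sample sheet: case B has no normal sample, so it must be dropped.
samples = pd.DataFrame({
    'Case ID': ['A', 'A', 'B', 'C', 'C'],
    'Sample Type': ['Primary Tumor', 'Solid Tissue Normal',
                    'Primary Tumor', 'Solid Tissue Normal', 'Primary Tumor'],
})
tumor = samples[samples['Sample Type'] == 'Primary Tumor']
norm = samples[samples['Sample Type'] == 'Solid Tissue Normal']
# Keep only cases present in both groups, then subset both sides.
paired = norm[norm['Case ID'].isin(tumor['Case ID'])]['Case ID']
tumor = tumor[tumor['Case ID'].isin(paired)]
norm = norm[norm['Case ID'].isin(paired)]
cases = pd.concat([tumor, norm])
print(sorted(cases['Case ID'].unique()))  # ['A', 'C']
```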
# ## Dataset Prep
# +
from sklearn.model_selection import train_test_split
target = 'Sample Type'
cases[target] = cases[target].astype('category')
train, test = train_test_split(cases)
train[target].value_counts()
test[target].value_counts()
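Note that the plain `train_test_split` call above does not stratify, so the tumor/normal proportions can drift between train and test; passing `stratify=` preserves them exactly. A toy demonstration with made-up label counts:

```python
from sklearn.model_selection import train_test_split

# Toy labels with an 8:4 class imbalance.
labels = ['Primary Tumor'] * 8 + ['Solid Tissue Normal'] * 4
train, test = train_test_split(labels, test_size=0.25,
                               stratify=labels, random_state=0)
# Stratification keeps the 2:1 ratio in both splits.
print(train.count('Primary Tumor'), train.count('Solid Tissue Normal'))  # 6 3
```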
# +
import torch
from torch.optim import lr_scheduler
import torch.optim as optim
from torch.autograd import Variable
from trainer import fit
import visualization as vis
import numpy as np
cuda = torch.cuda.is_available()
print("Cuda is available: {}".format(cuda))
classes = train[target].cat.categories.values
# +
from tcga_datasets import TCGA, SiameseTCGA
root_dir = "../data/TCGA/rna-seq_pan/"
batch_size = 1
train_dataset = TCGA(root_dir, samples=train, train=True, target=target)
test_dataset = TCGA(root_dir, samples=test, train=False, target=target)
kwargs = {'num_workers': 10, 'pin_memory': True} if cuda else {'num_workers': 10}
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False, **kwargs)
# -
# ## Siamese Network
# +
# Step 1 set up dataloader
root_dir = "../data/TCGA"
siamese_train_dataset = SiameseTCGA(train_dataset) # Returns pairs of images and target same/different
siamese_test_dataset = SiameseTCGA(test_dataset)
batch_size = 2
kwargs = {'num_workers': 10, 'pin_memory': True} if cuda else {}
siamese_train_loader = torch.utils.data.DataLoader(siamese_train_dataset, batch_size=batch_size, shuffle=True, **kwargs)
siamese_test_loader = torch.utils.data.DataLoader(siamese_test_dataset, batch_size=batch_size, shuffle=False, **kwargs)
# Set up the network and training parameters
from tcga_networks import EmbeddingNet, SiameseNet
from losses import ContrastiveLoss
from metrics import AccumulatedAccuracyMetric
# Step 2
embedding_net = EmbeddingNet()
# Step 3
model = SiameseNet(embedding_net)
if cuda:
model.cuda()
# Step 4
margin = 1.
loss_fn = ContrastiveLoss(margin)
lr = 1e-3
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.StepLR(optimizer, 8, gamma=0.1, last_epoch=-1)
n_epochs = 3
# print training metrics every log_interval * batch_size
log_interval = 50
# -
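`ContrastiveLoss` is defined in the repo's `losses.py`, which is not shown here. The standard formulation it presumably implements (Hadsell et al.) can be sketched in NumPy; the `0.5` factor and per-pair reduction are assumptions:

```python
import numpy as np

def contrastive_loss(emb1, emb2, same, margin=1.0):
    """Contrastive loss for one pair: pull same-class embeddings together,
    push different-class embeddings at least `margin` apart."""
    d = np.linalg.norm(emb1 - emb2)
    if same:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

# Identical embeddings of the same class incur zero loss.
print(contrastive_loss(np.array([1.0, 0.0]), np.array([1.0, 0.0]), same=True))  # 0.0
# A different-class pair closer than the margin is penalized.
print(contrastive_loss(np.array([1.0, 0.0]), np.array([1.0, 0.5]), same=False))  # 0.125
```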
train_loss, val_loss = fit(siamese_train_loader, siamese_test_loader, model, loss_fn, optimizer, scheduler,
n_epochs, cuda, log_interval)
plt.plot(range(0, n_epochs), train_loss, 'rx-')
plt.plot(range(0, n_epochs), val_loss, 'bx-')
train_embeddings_cl, train_labels_cl = vis.extract_embeddings(train_loader, model)
vis.plot_embeddings(train_embeddings_cl, train_labels_cl, classes)
val_embeddings_baseline, val_labels_baseline = vis.extract_embeddings(test_loader, model)
vis.plot_embeddings(val_embeddings_baseline, val_labels_baseline, classes)
# ## Saliency
(ex1, ex2), target = siamese_test_dataset[0]
foo_test_loader = torch.utils.data.DataLoader(siamese_test_dataset, batch_size=1, shuffle=False, **kwargs)
len(foo_test_loader)
model.parameters()
def saliency_eval(val_loader, model, loss_fn, cuda):
model.eval()
val_loss = 0
saliency_pairs = (torch.zeros(len(val_loader), 60483), torch.zeros(len(val_loader), 60483), torch.zeros(len(val_loader)))
for batch_idx, (data, target) in enumerate(val_loader):
model.zero_grad()
target = target if len(target) > 0 else None
if not type(data) in (tuple, list):
data = (data,)
if cuda:
data = tuple(d.cuda() for d in data)
if target is not None:
target = target.cuda()
# need to instantiate input with require_grad!
data = (Variable(data[0], requires_grad=True), Variable(data[1], requires_grad=True))
outputs = model(*data)
if type(outputs) not in (tuple, list):
outputs = (outputs,)
loss_inputs = outputs
if target is not None:
target = (target,)
loss_inputs += target
loss_outputs = loss_fn(*loss_inputs)
loss = loss_outputs[0] if type(loss_outputs) in (tuple, list) else loss_outputs
loss.backward()
saliency_pairs[0][batch_idx] = data[0].grad
saliency_pairs[1][batch_idx] = data[1].grad
saliency_pairs[2][batch_idx] = loss.item()
val_loss += loss.item()
return val_loss, saliency_pairs
val_loss, saliency_pairs = saliency_eval(foo_test_loader, model, loss_fn, cuda)
saliency_pairs[0].shape
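The tensors stored in `saliency_pairs` hold d(loss)/d(input) for each pair. The same idea can be sanity-checked without a GPU by computing a central-difference gradient on a toy loss; the anchor vector and loss function below are made up for illustration:

```python
import numpy as np

def toy_loss(x):
    # A simple differentiable "model": squared distance to a fixed anchor.
    anchor = np.array([1.0, -2.0, 0.5])
    return float(np.sum((x - anchor) ** 2))

def numerical_saliency(f, x, eps=1e-6):
    """Central-difference gradient of f at x, i.e. what .grad would hold."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

x = np.array([0.0, 0.0, 0.0])
# Analytic gradient is 2 * (x - anchor) = [-2., 4., -1.]
print(numerical_saliency(toy_loss, x))
```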
| .ipynb_checkpoints/2019.12.18_saliency_rna-seq_pan_tumor-v-normal-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="cTX19_zosgjz" colab_type="code" colab={}
"""The file needed to run this notebook can be accessed from the following folder using a UTS email account:
https://drive.google.com/drive/folders/1y6e1Z2SbLDKkmvK3-tyQ6INO5rrzT3jp
"""
# + [markdown] id="mamiXm5pltDZ" colab_type="text"
# # Object Detection Using RFCN
#
# ## Tutorial:
# 1. Image annotation using LabelImg
# 2. Conversion of annotation & images into tfrecords
# 3. Configuration of the RFCN model config file
# 4. Training the model
# 5. Using trained model for inference
#
# ## Tasks for this week:
#
# 1. Installation of the Google Object Detection API and required packages
# 2. Conversion of images and xml into tfrecord. i.e. train tfrecord, test tfrecord
# 3. Training: Transfer learning from already trained models
# 4. Freezing a trained model and exporting it for inference
#
# + [markdown] id="4IG-E4uMo8al" colab_type="text"
# ## Task-1: Installation of the Google Object Detection API and required packages
#
#
#
# + [markdown] id="CrgipiyWnlOF" colab_type="text"
# ### Step 1: Import packages
#
#
#
#
#
# + id="ADZ2V-GaMdo4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} executionInfo={"status": "ok", "timestamp": 1592382207863, "user_tz": -600, "elapsed": 3801, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="1fa84f6b-a8ca-470e-b0a2-62e25ffe1687"
# %tensorflow_version 1.x
# !pip install numpy==1.17.4
# + id="cZzYq6qGXOUw" colab_type="code" colab={}
import os
import re
import tensorflow as tf
# + id="-XoB3GJU4Jv7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1592382209088, "user_tz": -600, "elapsed": 4997, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="61dbabf5-566b-4a86-9b5c-db76caa2ae48"
print(tf.__version__)
# + id="Nmo5y3VQ6lax" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 156} executionInfo={"status": "ok", "timestamp": 1592382212603, "user_tz": -600, "elapsed": 8495, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="2c564f3e-30eb-474f-c1f1-8811611b68de"
# !pip install --upgrade tf_slim
# + [markdown] id="hgqx7vZnw4_W" colab_type="text"
# ### Step 2: Initial configuration: select the model config file and other hyperparameters
# + id="Gz2sswLyw9hZ" colab_type="code" colab={}
# If you forked the repository, you can replace the link.
repo_url = 'https://github.com/Tony607/object_detection_demo'
# Number of training steps.
num_steps = 7000
# Number of evaluation steps.
num_eval_steps = 100
MODELS_CONFIG = {
'rfcn_resnet101': {
'model_name': 'rfcn_resnet101_coco_2018_01_28',
'pipeline_file': 'rfcn_resnet101_pets.config',
'batch_size': 8
}
}
# Pick the model you want to use
selected_model = 'rfcn_resnet101'
# Name of the object detection model to use.
MODEL = MODELS_CONFIG[selected_model]['model_name']
# Name of the pipeline file in the TensorFlow object detection API.
pipeline_file = MODELS_CONFIG[selected_model]['pipeline_file']
# Training batch size that fits in Colab's Tesla K80 GPU memory for the selected model.
batch_size = MODELS_CONFIG[selected_model]['batch_size']
# + id="yXwYO33qxB2Y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} executionInfo={"status": "ok", "timestamp": 1592382218420, "user_tz": -600, "elapsed": 12968, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="544c3030-6ed0-4380-f074-b0b9dbc1f504"
# %cd /content
repo_dir_path = os.path.abspath(os.path.join('.', os.path.basename(repo_url)))
# !git clone {repo_url}
# %cd {repo_dir_path}
# !git pull
# + [markdown] id="5xTGEynaxUfL" colab_type="text"
# ### Step 3: Download Google Object Detection API and other dependencies
# + id="OerFd75RoIWz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1592382258925, "user_tz": -600, "elapsed": 52243, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="84aba041-55f6-4ac0-a9d1-5ee94ed7bd7a"
# %cd /content
# #!git clone --quiet https://github.com/tensorflow/models.git
# !git clone --branch r1.13.0 --depth 1 https://github.com/tensorflow/models.git
# !apt-get install -qq protobuf-compiler python-pil python-lxml python-tk
# !pip install -q Cython contextlib2 pillow lxml matplotlib
# !pip install -q pycocotools
# %cd /content/models/research
# !protoc object_detection/protos/*.proto --python_out=.
import os
os.environ['PYTHONPATH'] += ':/content/models/research/:/content/models/research/slim/'
# !python object_detection/builders/model_builder_test.py
# + [markdown] id="Xl2Xn4Mh0bqw" colab_type="text"
#
# + id="IsXk9v7fRYFM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} executionInfo={"status": "ok", "timestamp": 1592382294755, "user_tz": -600, "elapsed": 86812, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="21953bc8-adad-4103-bd96-979ff9d16001"
from google.colab import drive
drive.mount('/content/gdrive')
# + [markdown] id="uBOCW--vh6Wn" colab_type="text"
# ## Task-2: Conversion of XML annotations and images into tfrecords for training and testing datasets
# + [markdown] id="u-k7uGThXlny" colab_type="text"
# ### Step 4: Prepare `tfrecord` files
#
# Use the following scripts to generate the `tfrecord` files.
# ```bash
# # Convert train folder annotation xml files to a single csv file,
# # generate the `label_map.pbtxt` file to `data/` directory as well.
# python xml_to_csv.py -i data/images/train -o data/annotations/train_labels.csv -l data/annotations
#
# # Convert test folder annotation xml files to a single csv.
# python xml_to_csv.py -i data/images/test -o data/annotations/test_labels.csv
#
# # Generate `train.record`
# python generate_tfrecord.py --csv_input=data/annotations/train_labels.csv --output_path=data/annotations/train.record --img_path=data/images/train --label_map data/annotations/label_map.pbtxt
#
# # Generate `test.record`
# python generate_tfrecord.py --csv_input=data/annotations/test_labels.csv --output_path=data/annotations/test.record --img_path=data/images/test --label_map data/annotations/label_map.pbtxt
# ```
# + id="kTqzzSJAj2MB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1592248802205, "user_tz": -420, "elapsed": 57146, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15349769042236436170"}} outputId="b96d200e-c25b-4f76-e2b4-4a7ee2611858"
# create the annotation directory
# %cd /content/object_detection_demo/data
annotation_dir = 'annotations/'
os.makedirs(annotation_dir, exist_ok=True)
# + id="HfYP0Rd_tLYE" colab_type="code" colab={}
"""Need to manually upload the label_pbtxt file and the train_labels.csv and test_labels.csv
into the annotation folder using the link here
https://drive.google.com/drive/folders/1NqKz2tC8I5eL5Qo4YzZiEph8W-dtI44d
"""
# + id="QmpbbeIKiHjr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 241} executionInfo={"status": "ok", "timestamp": 1592227029995, "user_tz": -600, "elapsed": 6386110, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="74221794-2080-4ae6-9e1a-5d8fc8353ac4"
# %cd /content/gdrive/My Drive/A3 test
# Generate `train.record`
# !python generate_tfrecord.py --csv_input=train_csv/train_labels.csv --output_path=annotations/train.record --img_path=train --label_map annotations/label_map.pbtxt
# + id="0tLBgcWTuslH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 241} executionInfo={"status": "ok", "timestamp": 1592228672072, "user_tz": -600, "elapsed": 7269179, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="bc702358-43ef-4f1f-83ef-461b3f3e5fd4"
# %cd /content/gdrive/My Drive/A3 test
# Generate `test.record`
# !python generate_tfrecord.py --csv_input=test_csv/test_labels.csv --output_path=annotations/test.record --img_path=test --label_map annotations/label_map.pbtxt
# + id="c24vw09_5jNI" colab_type="code" colab={}
test_record_fname = '/content/gdrive/My Drive/A3 test/annotations/test.record'
train_record_fname = '/content/gdrive/My Drive/A3 test/annotations/train.record'
label_map_pbtxt_fname = '/content/gdrive/My Drive/A3 test/annotations/label_map.pbtxt'
# + [markdown] id="GGZqihrPiBjs" colab_type="text"
# ### Step 5. Download the base model for transfer learning
# + id="jj7_kqv6yR7D" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1592382335375, "user_tz": -600, "elapsed": 9520, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="3354a4cb-7ae5-474b-dd28-b38dd00c3fe0"
# %cd /content/models/research
import os
import shutil
import glob
import urllib.request
import tarfile
MODEL_FILE = MODEL + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
DEST_DIR = '/content/models/research/pretrained_model'
if not (os.path.exists(MODEL_FILE)):
urllib.request.urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar = tarfile.open(MODEL_FILE)
tar.extractall()
tar.close()
os.remove(MODEL_FILE)
if (os.path.exists(DEST_DIR)):
shutil.rmtree(DEST_DIR)
os.rename(MODEL, DEST_DIR)
# + id="zhsMMfNN25hY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} executionInfo={"status": "ok", "timestamp": 1592382346082, "user_tz": -600, "elapsed": 3867, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="96894265-18d8-4baa-ae59-4093269e9960"
# !echo {DEST_DIR}
# !ls -alh {DEST_DIR}
# + id="gs0047Qx4LIf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1592382361746, "user_tz": -600, "elapsed": 734, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="f0606b0c-e7bf-4a1c-c955-5c543e5c87d0"
fine_tune_checkpoint = os.path.join(DEST_DIR, "model.ckpt")
fine_tune_checkpoint
# + [markdown] id="GTJvAARCiQbm" colab_type="text"
# ## Task-3: Training: Transfer learning from already trained models
#
#
# + [markdown] id="-BWSpqW6NYJE" colab_type="text"
# ### Step 6: Configuring a training pipeline
# + id="551Z4gAC4uzO" colab_type="code" colab={}
import os
pipeline_fname = os.path.join('/content/models/research/object_detection/samples/configs/', pipeline_file)
assert os.path.isfile(pipeline_fname), '`{}` not exist'.format(pipeline_fname)
# + id="DZYoq1tq40Ak" colab_type="code" colab={}
def get_num_classes(pbtxt_fname):
from object_detection.utils import label_map_util
label_map = label_map_util.load_labelmap(pbtxt_fname)
categories = label_map_util.convert_label_map_to_categories(
label_map, max_num_classes=90, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
return len(category_index.keys())
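`get_num_classes` relies on the Object Detection API's label-map parser. When that import is unavailable, a rough dependency-free count of `item` ids in a `label_map.pbtxt` can be done with a regex; this assumes one `id:` field per item, which the real parser does not require:

```python
import re

# Minimal stand-in label map (invented classes, pbtxt-style syntax).
pbtxt = """
item { id: 1 name: 'car' }
item { id: 2 name: 'truck' }
"""
num_classes = len(re.findall(r'\bid:\s*\d+', pbtxt))
print(num_classes)  # 2
```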
# + id="098IARSk5CJk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} executionInfo={"status": "ok", "timestamp": 1592382382428, "user_tz": -600, "elapsed": 1257, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="72317f13-12e6-48fd-f3c2-a8d137b23cbb"
num_classes = get_num_classes(label_map_pbtxt_fname)
with open(pipeline_fname) as f:
s = f.read()
with open(pipeline_fname, 'w') as f:
# fine_tune_checkpoint
s = re.sub('fine_tune_checkpoint: ".*?"',
'fine_tune_checkpoint: "{}"'.format(fine_tune_checkpoint), s)
# tfrecord files train and test.
s = re.sub(
'(input_path: ".*?)(train.record)(.*?")', 'input_path: "{}"'.format(train_record_fname), s)
s = re.sub(
'(input_path: ".*?)(val.record)(.*?")', 'input_path: "{}"'.format(test_record_fname), s)
# label_map_path
s = re.sub(
'label_map_path: ".*?"', 'label_map_path: "{}"'.format(label_map_pbtxt_fname), s)
# Set training batch_size.
s = re.sub('batch_size: [0-9]+',
'batch_size: {}'.format(batch_size), s)
# Set training steps, num_steps
s = re.sub('num_steps: [0-9]+',
'num_steps: {}'.format(num_steps), s)
# Set number of classes num_classes.
s = re.sub('num_classes: [0-9]+',
'num_classes: {}'.format(num_classes), s)
f.write(s)
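The same `re.sub` rewrites can be verified on a toy config string before touching the real pipeline file:

```python
import re

# Toy pipeline config fragment (not the real pets config).
config = ('num_classes: 90\n'
          'batch_size: 24\n'
          'fine_tune_checkpoint: "old/model.ckpt"')
config = re.sub(r'num_classes: [0-9]+', 'num_classes: 3', config)
config = re.sub(r'batch_size: [0-9]+', 'batch_size: 8', config)
config = re.sub(r'fine_tune_checkpoint: ".*?"',
                'fine_tune_checkpoint: "pretrained_model/model.ckpt"', config)
print(config)
```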
# + id="LW5OvToX5v3Q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1592382392461, "user_tz": -600, "elapsed": 2083, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="d93b85ec-94ae-493a-ccaa-1dc3d8e9c3f2"
# !cat {pipeline_fname}
# + id="wOzfAVox6DxP" colab_type="code" colab={}
model_dir = 'training/'
# Optionally remove content in output model directory to fresh start.
# !rm -rf {model_dir}
os.makedirs(model_dir, exist_ok=True)
# + [markdown] id="KeXwrBbnibPp" colab_type="text"
# ### Step 7. Install TensorBoard to visualize the progress of the training process
# + id="JY2IwBMvyiLq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 258} executionInfo={"status": "ok", "timestamp": 1592384345448, "user_tz": -600, "elapsed": 9505, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="70dfb412-cbd7-46f8-8433-809aa671c759"
# !wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
# !unzip -o ngrok-stable-linux-amd64.zip
# + id="gPSCe4VW6is7" colab_type="code" colab={}
LOG_DIR = model_dir
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
.format(LOG_DIR)
)
# + id="9NcQjIax6oSu" colab_type="code" colab={}
get_ipython().system_raw('./ngrok http 6006 &')
# + [markdown] id="LfI4vNaj6zgs" colab_type="text"
# ### Step 8: Get the TensorBoard link
# + id="-26O0PrC6x2O" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1592384356643, "user_tz": -600, "elapsed": 5387, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="e844c269-eec0-4e35-836d-71f9d0f45634"
# ! curl -s http://localhost:4040/api/tunnels | python3 -c \
# "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
# + [markdown] id="STof3OBrdWUy" colab_type="text"
# ### Step 9. Training the model
# + id="PZSdysPLcfOA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1592393309317, "user_tz": -600, "elapsed": 6162647, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="1f8ef8af-4721-40d3-d03d-2de44eb2d06b"
# !python /content/models/research/object_detection/model_main.py \
# --pipeline_config_path={pipeline_fname} \
# --model_dir={model_dir} \
# --alsologtostderr \
# --num_train_steps={num_steps} \
# --num_eval_steps={num_eval_steps}
# + id="MQY54srv7NSL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 358} executionInfo={"status": "ok", "timestamp": 1592393311226, "user_tz": -600, "elapsed": 168233, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="8b36b0e0-b83f-4197-87d5-14e3e8cbaace"
# !ls {model_dir}
# + [markdown] id="4y32QymeiqDB" colab_type="text"
# ## Task-4: Freezing a trained model and exporting it for inference
#
# + [markdown] id="7b1cB69J7-Qh" colab_type="text"
# ### Step 10: Exporting a Trained Inference Graph
# + id="ZhQu9lWK71fS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1592394861916, "user_tz": -600, "elapsed": 27128, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="4b5e9a90-49cc-4590-a8d1-4877245408a2"
import re
import numpy as np
output_directory = './fine_tuned_model'
lst = os.listdir(model_dir)
lst = [l for l in lst if 'model.ckpt-' in l and '.meta' in l]
steps = np.array([int(re.findall(r'\d+', l)[0]) for l in lst])
last_model = lst[steps.argmax()].replace('.meta', '')
last_model_path = os.path.join(model_dir, last_model)
print(last_model_path)
# !python /content/models/research/object_detection/export_inference_graph.py \
# --input_type=image_tensor \
# --pipeline_config_path={pipeline_fname} \
# --output_directory={output_directory} \
# --trained_checkpoint_prefix={last_model_path}
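The checkpoint-picking logic above can be exercised on its own with a fake file listing:

```python
import re

# Invented checkpoint file names mimicking a training run.
ckpt_files = ['model.ckpt-1000.meta', 'model.ckpt-7000.meta', 'model.ckpt-3500.meta']
# Extract the step number from each name and keep the latest checkpoint.
steps = [int(re.findall(r'\d+', f)[0]) for f in ckpt_files]
last_model = ckpt_files[steps.index(max(steps))].replace('.meta', '')
print(last_model)  # model.ckpt-7000
```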
# + id="hnHLjMbXDZ5G" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} executionInfo={"status": "ok", "timestamp": 1592395555917, "user_tz": -600, "elapsed": 2422, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="536e8929-df5c-4be3-f04c-3827d87a10fe"
# !ls {output_directory}
# + [markdown] id="Rfwp6BrFC2qQ" colab_type="text"
# ### Step 11: Use the frozen model for inference.
# + id="4pdDFjp9DS2l" colab_type="code" colab={}
import os
pb_fname = os.path.join(os.path.abspath(output_directory), "frozen_inference_graph.pb")
assert os.path.isfile(pb_fname), '`{}` not exist'.format(pb_fname)
# + id="w2JfeX0QSmiV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} executionInfo={"status": "ok", "timestamp": 1592395558670, "user_tz": -600, "elapsed": 2827, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="63e9a041-5bec-49fa-9e57-792070390483"
# !ls -alh {pb_fname}
# + id="-Maem5Yv_FCn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} executionInfo={"status": "ok", "timestamp": 1592395558671, "user_tz": -600, "elapsed": 2528, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="4506c4c4-4c43-4e42-f248-e7d64e4d5021"
import os
import glob
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = pb_fname
# List of strings used to add the correct label to each box.
PATH_TO_LABELS = label_map_pbtxt_fname
# If you want to test the code with your own images, just add image files to the PATH_TO_TEST_IMAGES_DIR.
PATH_TO_TEST_IMAGES_DIR = '/content/gdrive/My Drive/A3 test/img'
assert os.path.isfile(pb_fname)
assert os.path.isfile(PATH_TO_LABELS)
TEST_IMAGE_PATHS = glob.glob(os.path.join(PATH_TO_TEST_IMAGES_DIR, "*.*"))
assert len(TEST_IMAGE_PATHS) > 0, 'No image found in `{}`.'.format(PATH_TO_TEST_IMAGES_DIR)
print(TEST_IMAGE_PATHS)
# + id="xEHFCsWxFCRz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1592395569939, "user_tz": -600, "elapsed": 10280, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "00710599022801911658"}} outputId="c8240c33-f08d-44f0-b532-ab304ea47ee1"
# %cd /content/models/research/object_detection
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops
# This is needed to display the images.
# %matplotlib inline
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
num_classes = 1
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(
label_map, max_num_classes=num_classes, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
def run_inference_for_single_image(image, graph):
with graph.as_default():
with tf.Session() as sess:
# Get handles to input and output tensors
ops = tf.get_default_graph().get_operations()
all_tensor_names = {
output.name for op in ops for output in op.outputs}
tensor_dict = {}
for key in [
'num_detections', 'detection_boxes', 'detection_scores',
'detection_classes', 'detection_masks'
]:
tensor_name = key + ':0'
if tensor_name in all_tensor_names:
tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
tensor_name)
if 'detection_masks' in tensor_dict:
# The following processing is only for single image
detection_boxes = tf.squeeze(
tensor_dict['detection_boxes'], [0])
detection_masks = tf.squeeze(
tensor_dict['detection_masks'], [0])
# Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
real_num_detection = tf.cast(
tensor_dict['num_detections'][0], tf.int32)
detection_boxes = tf.slice(detection_boxes, [0, 0], [
real_num_detection, -1])
detection_masks = tf.slice(detection_masks, [0, 0, 0], [
real_num_detection, -1, -1])
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes, image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(
tf.greater(detection_masks_reframed, 0.5), tf.uint8)
# Follow the convention by adding back the batch dimension
tensor_dict['detection_masks'] = tf.expand_dims(
detection_masks_reframed, 0)
image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
# Run inference
output_dict = sess.run(tensor_dict,
feed_dict={image_tensor: np.expand_dims(image, 0)})
# all outputs are float32 numpy arrays, so convert types as appropriate
output_dict['num_detections'] = int(
output_dict['num_detections'][0])
output_dict['detection_classes'] = output_dict[
'detection_classes'][0].astype(np.uint8)
output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
output_dict['detection_scores'] = output_dict['detection_scores'][0]
if 'detection_masks' in output_dict:
output_dict['detection_masks'] = output_dict['detection_masks'][0]
return output_dict
for image_path in TEST_IMAGE_PATHS:
#load images
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
output_dict = run_inference_for_single_image(image_np, detection_graph)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks'),
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
# + id="GioPUIRjFDYX" colab_type="code" colab={}
| Models/RFCN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Enumerating BiCliques to Find Frequent Patterns
# #### KDD 2019 Workshop
#
# #### Authors
# - <NAME> (Microsoft)
# - <NAME> (NVIDIA)
# - <NAME> (Microsoft)
#
# #### Problem overview
# From time to time PCs running Microsoft Windows fail: a program might crash or hang, or you experience a kernel crash leading to the famous blue screen (we do love those 'Something went wrong' messages as well...;)).
#
# <img src="images/Windows_SomethingWentWrong.png" alt="Windows problems" width=380px class="center"/>
#
# Well, when this happens it's not a good experience and we are truly interested in quickly finding out what might have gone wrong and/or at least what is common among the PCs that have failed.
# ## Import necessary modules
import cudf
import numpy
import azureml.core as aml
import time
# ## Load the data
#
# The data prepared for this workshop will be available to download after the conference. We will share the link in the final notebook that will be available on the RAPIDS GitHub account.
#
# ### Data
# The data we will be using in this workshop has been synthetically generated to showcase the types of scenarios we encounter in our work.
#
# While running certain workloads, PCs might fail for one reason or another. We collect the information from both types of scenarios and enrich the observations with the metadata about each PC (hardware, software, failure logs etc.). This forms a dataset where each row represents a PC and the features column contains a list of all the metadata we want to mine to find frequent patterns about the population that has failed.
#
# In this tutorial we will represent this data as a bi-partite graph: a graph whose vertices can be divided into two disjoint sets such that no edge connects two vertices within the same set; every edge runs from one set to the other. See the example below.
#
# <img src="images/BiPartiteGraph_Example.png" alt="Bi-Partite graph example" width=200px class="center"/>
#
# In order to operate on this type of data we convert the list-of-features per row to a COO (Coordinate list) format: each row represents an edge connection, the first column contains the source vertex, the second one contains the destination vertex, and the third column contains the failure flag (0 = success, 1 = failure).
# !head -n 3 ../../../../data/fpm_graph/coo_fpm.csv
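Flattening list-of-features rows into COO edges is a one-liner. A toy version, with made-up PC ids and feature names, of the transformation that produced `coo_fpm.csv`:

```python
# Toy rows: (pc_id, [features], failure_flag), flattened into COO edges
# of the form (src, dst, flag).
rows = [
    (0, ['cpu_x', 'gpu_y'], 1),
    (1, ['cpu_x'], 0),
]
coo = [(pc, feat, flag) for pc, feats, flag in rows for feat in feats]
print(coo)  # [(0, 'cpu_x', 1), (0, 'gpu_y', 1), (1, 'cpu_x', 0)]
```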
# Now we can load the data into a RAPIDS `cudf` DataFrame. ***NOTE: This will take longer than if you were running this on your local machine since the data-store is separate from this running VM. Normally it would be almost instant.***
# %%time
fpm_df = cudf.read_csv('../../../../data/fpm_graph/coo_fpm.csv', names=['src', 'dst', 'flag'])
import pandas as pd
# %%time
fpm_pdf = pd.read_csv('../../../../data/fpm_graph/coo_fpm.csv', names=['src', 'dst', 'flag'])
# Now that we have the data loaded, let's check how big our data is.
# %%time
shp = fpm_df.shape
print('Row cnt: {0}, col cnt: {1}'.format(*shp))
# So, we have >41M records in our DataFrame. Let's see what it looks like:
print(fpm_df.head(10))
# ## Understand the data
#
# Now that we have the data, let's explore it a bit.
# ### Overall failure rate
# First, let's find out what the overall failure rate is. In general, we do not want to extract any patterns that are below the overall failure rate, since they would not help us understand anything about the phenomenon we're dealing with, nor would they help us pinpoint the actual problems.
# %%time
print(fpm_df['flag'].sum() / float(fpm_df['flag'].count()))
# So, the overall failure rate is 16.7%. Note that running the `sum` and `count` reducers on 41M records took only ~5-10ms.
# ### Device count
#
# I didn't tell you how many devices we included in the dataset. Let's figure it out. Since the `src` column contains multiple edges per PC, we need to count only the unique ids in this column.
# %%time
print(fpm_df['src'].unique().count())
# So, we have 755k devices in the dataset and it took only 1s to find this out!!!
# ### Distinct features count
# Let's now check how many distinct metadata features we included in the dataset.
# %%time
print(fpm_df['dst'].unique().count())
# Now you can see it is a synthetic dataset ;) We have a universe of 15k distinct metadata features each PC can be comprised of.
# ### Degree distribution
# Different PCs have different numbers of features: some have two CPUs or 4 GPUs (lucky...). Below we quickly find how many features each PC has.
# %%time
degrees = fpm_df.groupby('src').agg({'dst': 'count'})
print(degrees)
print(
    'On average PCs have {0:.2f} components. The one with the max number has {1}.'
    .format(degrees['dst'].mean(), degrees['dst'].max())
)
# ### Inspecting the distribution of degrees
#
# We can very quickly calculate the deciles of degrees.
# %%time
quantiles = degrees.quantile(q=[float(e) / 100 for e in range(0, 100, 10)])
print(quantiles.to_pandas())
# Let's see what the distribution looks like.
# %%time
buckets = degrees['dst'].value_counts().reset_index().to_pandas()
buckets.columns = ['Bucket', 'Count']
# +
import matplotlib.pyplot as plt
# %matplotlib inline
plt.bar(buckets['Bucket'], buckets['Count'])
# -
# ## Mine the data and find the bi-cliques
# In this part of the tutorial we will show you a prototype implementation of the iMBEA algorithm proposed by Zhang, Y. et al. in their 2014 paper _On finding bicliques in bipartite graphs: A novel algorithm and its application to the integration of diverse biological data types_, published in BMC Bioinformatics 15 (110). URL: https://www.researchgate.net/profile/Michael_Langston/publication/261732723_On_finding_bicliques_in_bipartite_graphs_A_novel_algorithm_and_its_application_to_the_integration_of_diverse_biological_data_types/links/00b7d53a300726c5b3000000/On-finding-bicliques-in-bipartite-graphs-A-novel-algorithm-and-its-application-to-the-integration-of-diverse-biological-data-types.pdf
# ### Setup
# First, we do some setting up.
# +
from collections import OrderedDict
import numpy as np
# must be factor of 10
PART_SIZE = int(1000)
# -
# ### Data partitioning
# We partition the DataFrame into multiple parts to aid computations.
def _partition_data_by_feature(_df):
    # compute the number of partitions
    m = int((_df['dst'].max() / PART_SIZE) + 1)
    _ui = [None] * (m + 1)
    # partition the data into a number of smaller DataFrames
    s = 0
    e = s + PART_SIZE
    for i in range(m):
        _ui[i] = _df.query('dst >= @s and dst < @e')
        s = e
        e = e + PART_SIZE
    return _ui, m
# ### Enumerating features
# One of the key components of the iMBEA algorithm is the order in which it scans the graph: in our case, from the most popular feature to the least popular one. The `_count_features(...)` method below achieves exactly that and produces a list of features ranked by their popularity.
def _count_features(_gdf, sort=True):
    aggs = OrderedDict()
    aggs['dst'] = 'count'
    # note: aggregate over the DataFrame passed in, not the global fpm_df
    c = _gdf.groupby(['dst'], as_index=False).agg(aggs)
    c = c.rename(columns={'dst': 'count'})
    c = c.reset_index()
    if sort:
        c = c.sort_values(by='count', ascending=False)
    return c
print(_count_features(fpm_df))
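# The ranking that `_count_features` performs on the GPU can be sketched with plain pandas on a toy edge list (the ids below are invented for the example):

```python
import pandas as pd

# Toy edge list: feature 10 is used by three machines, 11 by two, 12 by one.
edges = pd.DataFrame({'src': [0, 0, 1, 2, 2, 2],
                      'dst': [10, 11, 10, 10, 11, 12]})

# Count how many machines use each feature and rank by popularity,
# mirroring what _count_features does with cudf.
ranked = (edges.groupby('dst').size()
               .rename('count')
               .reset_index()
               .sort_values('count', ascending=False)
               .reset_index(drop=True))
print(ranked)
```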
# ### Fundamental methods
#
# Below are some fundamental methods used iteratively by the final algorithm
#
# #### `get_src_from_dst`
# This method returns a DataFrame of all the source vertices that have the destination vertex `id` in their list of features.
# get all src vertices for a given dst
def get_src_from_dst(_gdf, id):
    _src_list = _gdf.query('dst == @id')
    _src_list.drop_column('dst')
    return _src_list
# #### `get_all_feature`
# This method returns all the features that are connected to the vertices found in `src_list_df`.
# get all the items used by the specified users
def get_all_feature(_gdf, src_list_df, N):
    c = [None] * N
    for i in range(N):
        c[i] = src_list_df.merge(_gdf[i], on='src', how="inner")
    return cudf.concat(c)
# #### `is_same_as_last`
# This method checks whether the bi-clique has already been enumerated.
def is_same_as_last(_old, _new):
    status = False
    if len(_old) == len(_new):
        m = _old.merge(_new, on='src', how="left")
        if m['src'].null_count == 0:
            status = True
    return status
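# The idea behind the check: two equally sized vertex sets are identical exactly when a left merge of one onto the other leaves no unmatched rows (assuming unique vertex ids, as in this dataset). A pandas sketch of the same idea, using the `indicator` flag in place of cudf's `null_count`:

```python
import pandas as pd

def same_vertex_set(old, new):
    """Pandas analogue of is_same_as_last: two machine sets match when they
    have the same size and every old vertex finds a partner in the new set."""
    if len(old) != len(new):
        return False
    m = old.merge(new, on='src', how='left', indicator=True)
    return bool((m['_merge'] == 'both').all())

a = pd.DataFrame({'src': [1, 2, 3]})
b = pd.DataFrame({'src': [3, 2, 1]})   # same set, different order
c = pd.DataFrame({'src': [1, 2, 4]})   # same size, different set
print(same_vertex_set(a, b), same_vertex_set(a, c))
```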
# #### `update_results`
# This is a utility method that helps to (1) maintain a DataFrame of enumerated bi-cliques, holding their `src` and `dst` vertices, and (2) keep some basic statistics about them.
def update_results(m, f, key, b, s):
    """
    Input
    -----
    * m = machines
    * f = features
    * key = cluster ID
    * b = biclique answer
    * s = stats answer

    Returns
    -------
    B : cudf.DataFrame
        A dataframe containing the list of machines and features. This is not the full
        edge list, to save space. Since it is a biclique, it is easy to recreate the edges.
        B['id']   - a cluster ID (this is a one-up number - up to k)
        B['vert'] - the vertex ID
        B['type'] - 0 == machine, 1 == feature
    S : cudf.DataFrame
        A dataframe of statistics on the returned info.
        This dataframe is (relatively) small, of size k.
        S['id']        - the cluster ID
        S['total']     - total machine count
        S['machines']  - number of machine vertices
        S['features']  - number of feature vertices
        S['bad_ratio'] - the ratio of bad machines / total machines
    """
    B = cudf.DataFrame()
    S = cudf.DataFrame()
    m_df = cudf.DataFrame()
    m_df['vert'] = m['src'].astype(np.int32)
    m_df['id'] = int(key)
    m_df['type'] = int(0)
    f_df = cudf.DataFrame()
    f_df['vert'] = f['dst'].astype(np.int32)
    f_df['id'] = int(key)
    f_df['type'] = int(1)
    if len(b) == 0:
        B = cudf.concat([m_df, f_df])
    else:
        B = cudf.concat([b, m_df, f_df])
    # now update the stats
    num_m = len(m_df)
    num_f = len(f_df)
    total = num_m  # machines only, so that bad_ratio is bad machines / total machines
    num_bad = len(m.query('flag == 1'))
    ratio = num_bad / total
    s_tmp = cudf.DataFrame()
    s_tmp['id'] = key
    s_tmp['total'] = total
    s_tmp['machines'] = num_m
    s_tmp['bad_machines'] = num_bad
    s_tmp['features'] = num_f
    s_tmp['bad_ratio'] = ratio
    if len(s) == 0:
        S = s_tmp
    else:
        S = cudf.concat([s, s_tmp])
    del m_df
    del f_df
    return B, S
# #### `ms_find_maximal_bicliques`
# This is the main loop for the algorithm. It iteratively scans the list of features and enumerates the bi-cliques.
def ms_find_maximal_bicliques(df, k,
                              offset=0,
                              max_iter=-1,
                              support=1.0,
                              min_features=1,
                              min_machines=10):
    """
    Find the top k maximal bicliques

    Parameters
    ----------
    df : cudf.DataFrame
        A dataframe containing the bipartite graph edge list.
        Columns must be called 'src', 'dst', and 'flag'
    k : int
        The max number of bicliques to return
        -1 means all
    offset : int
        Amount subtracted from the 'dst' ids before processing
        (and added back before returning)

    Returns
    -------
    B : cudf.DataFrame
        A dataframe containing the list of machines and features. This is not the full
        edge list, to save space. Since it is a biclique, it is easy to recreate the edges.
        B['id']   - a cluster ID (this is a one-up number - up to k)
        B['vert'] - the vertex ID
        B['type'] - 0 == machine, 1 == feature
    S : cudf.DataFrame
        A dataframe of statistics on the returned info.
        This dataframe is (relatively) small, of size k.
        S['id']        - the cluster ID
        S['total']     - total vertex count
        S['machines']  - number of machine vertices
        S['features']  - number of feature vertices
        S['bad_ratio'] - the ratio of bad machines / total machines
    """
    x = [col for col in df.columns]
    if 'src' not in x:
        raise NameError('src column not found')
    if 'dst' not in x:
        raise NameError('dst column not found')
    if 'flag' not in x:
        raise NameError('flag column not found')
    if support > 1.0 or support < 0.1:
        raise ValueError('support must be between 0.1 and 1.0')
    # this removes a prep step that offsets the values for CUDA processing
    if offset > 0:
        df['dst'] = df['dst'] - offset
    # break the data into chunks to improve join/search performance
    src_by_dst, num_parts = _partition_data_by_feature(df)
    # get a list of all the dst (features) sorted by degree
    f_list = _count_features(df, True)
    # create dataframes for the answers
    bicliques = cudf.DataFrame()
    stats = cudf.DataFrame()
    # create a dataframe to help prevent duplication of work
    machine_old = cudf.DataFrame()
    answer_id = 0
    iter_max = len(f_list)
    if max_iter != -1:
        iter_max = max_iter
    # loop over all the features (dst) or until k is reached
    for i in range(iter_max):
        # pop the next feature to process
        feature = f_list['dst'][i]
        degree = f_list['count'][i]
        # compute the index of this item (i.e. which dataframe chunk it is in)
        idx = int(feature / PART_SIZE)
        # get all machines that have this feature
        machines = get_src_from_dst(src_by_dst[idx], feature)
        # if this set of machines is the same as the last, skip this feature
        if not is_same_as_last(machine_old, machines):
            # now from those machines, hop out to the list of all the features
            feature_list = get_all_feature(src_by_dst, machines, num_parts)
            # summarize occurrences
            ic = _count_features(feature_list, True)
            goal = int(degree * support)
            # only keep dst nodes with a high enough degree
            c = ic.query('count >= @goal')
            # need more than min_features features to make a biclique
            if len(c) > min_features:
                if len(machines) >= min_machines:
                    bicliques, stats = update_results(machines, c, answer_id, bicliques, stats)
                    answer_id = answer_id + 1
        # end - if same
        machine_old = machines
        if k > -1:
            if answer_id == k:
                break
    # end for loop
    # all done, reset data
    if offset > 0:
        df['dst'] = df['dst'] + offset
    return bicliques, stats
# ### Finding bi-cliques
# Now that we have a fundamental understanding of how this works -- let's put it into action.
# %%time
bicliques, stats = ms_find_maximal_bicliques(
    df=fpm_df,
    k=10,
    offset=1000000,
    max_iter=100,
    support=1.0,
    min_features=3,
    min_machines=100
)
# It takes somewhere between <font size="10">10 and 15 seconds</font> to analyze <font size="10">>41M</font> edges and output the top 10 most important bicliques.
#
# Let's see what we got. We enumerated 10 bicliques. The worst of them had a failure rate of over 97%.
print(stats)
# Let's look at one of the worst ones, the one that affected the most machines: over 57k.
bicliques.query('id == 1 and type == 1')['vert'].sort_values().to_pandas()
# If we change the `type` to `0` we can retrieve a sample list of PCs that fit this particular pattern/bi-clique: this is useful and sometimes helps us narrow a problem down further by scanning the logs from those PCs.
# Source notebook: the_archive/archived_rapids_event_notebooks/KDD_2019/graph_pattern_mining/MiningFrequentPatternsFromGraphs.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] outputHidden=false inputHidden=false
# # Whose was best?
# Finding the best model for each galaxy
# + [markdown] outputHidden=false inputHidden=false
# ### Problem:
#
# *Given a set of GZ:B classifications for a galaxy, determine which classification produced the best residuals*
# + [markdown] outputHidden=false inputHidden=false
# First, say the jupyter magic words 🧙
# + outputHidden=false inputHidden=false
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# -
# Import the plethora of useful modules we'll need (including some we probably don't)
# + outputHidden=false inputHidden=false
import pandas as pd
import numpy as np
import json
from copy import copy, deepcopy
import matplotlib
import matplotlib.pyplot as plt
from shapely.geometry import LineString, Point
from shapely.affinity import rotate, scale, translate
from descartes import PolygonPatch
from pprint import pprint
import lib.python_model_renderer.parse_annotation as pa
import lib.python_model_renderer.render_galaxy as rg
import lib.galaxy_utilities as gu
# + outputHidden=false inputHidden=false
font = {'family' : 'DejaVu Sans',
'size' : 16}
matplotlib.rc('font', **font)
# -
# Which subject should we work on? (parametrised to allow batch running at a later date)
# + outputHidden=false inputHidden=false tags=["parameters"]
subject_id = 20902040
# + [markdown] outputHidden=false inputHidden=false
# Grab the galaxy data (and classification data) for this galaxy
# + outputHidden=false inputHidden=false
classifications_for_subject = gu.classifications[gu.classifications['subject_ids'] == subject_id]
annotations_for_subject = [json.loads(a) for a in classifications_for_subject['annotations'].values]
print('Found {} classifications for subject_id {}'.format(len(annotations_for_subject),
subject_id))
# + outputHidden=false inputHidden=false
print('Getting galaxy data')
gal, angle = gu.get_galaxy_and_angle(subject_id)
pic_array, deprojected_image = gu.get_image(gal, subject_id, angle)
# + [markdown] outputHidden=false inputHidden=false
# Get the PSF and raw FITS data for this galaxy cutout (bundled with the Zooniverse subject)
# + outputHidden=false inputHidden=false
psf = gu.get_psf(subject_id)
diff_data = gu.get_image_data(subject_id)
galaxy_data = np.array(diff_data['imageData'])[::-1]
size_diff = diff_data['width'] / diff_data['imageWidth']
# + [markdown] outputHidden=false inputHidden=false
# Define some useful goodies for plotting later (to transform from image coordinates to arcseconds from centre of galaxy)
# + outputHidden=false inputHidden=false
pix_size = pic_array.shape[0] / (gal['PETRO_THETA'].iloc[0] * 4) # arcseconds per pixel for zooniverse image
pix_size2 = galaxy_data.shape[0] / (gal['PETRO_THETA'].iloc[0] * 4) # arcseconds per pixel for galaxy data
def transform_coords(c):
    return (c - galaxy_data.shape[0] / 2) / pix_size2

def transform_patch(p):
    return scale(
        translate(p, xoff=-pic_array.shape[0]/2, yoff=-pic_array.shape[1]/2),
        xfact=1/pix_size,
        yfact=1/pix_size,
        origin=(0, 0),
    )
imshow_kwargs = {
'cmap': 'gray_r', 'origin': 'lower',
'extent': (
-pic_array.shape[0]/2 / pix_size, # left of image in arcseconds from centre
pic_array.shape[0]/2 / pix_size, # right...
-pic_array.shape[1]/2 / pix_size, # bottom...
pic_array.shape[1]/2 / pix_size # top...
),
}
# + [markdown] outputHidden=false inputHidden=false
# For each classification, calculate the drawn model and accompanying residual and save the two
# + outputHidden=false inputHidden=false
parsed_annotation = pa.parse_annotation(
annotations_for_subject[12], size_diff=size_diff
)
print(parsed_annotation['disk'])
model = rg.calculate_model(parsed_annotation, diff_data['width'])
difference_data = rg.compare_to_galaxy(model, psf, galaxy_data)
plt.imshow(difference_data)
plt.colorbar()
# + outputHidden=false inputHidden=false
import time
residuals = np.zeros(len(annotations_for_subject))
print('Calculating residuals...')
try:
    from ipywidgets import FloatProgress
    from IPython.display import display
    f = FloatProgress(min=0, max=100)
    display(f)
except ImportError:
    f = False
model_array = np.zeros((len(annotations_for_subject), diff_data['width'], diff_data['width']))
t0 = time.time()
for i, annotation in enumerate(annotations_for_subject):
if f is not False:
f.value = i
parsed_annotation = pa.parse_annotation(annotation, size_diff=size_diff)
model = rg.calculate_model(parsed_annotation, diff_data['width'])
model_array[i] = model
difference_data = rg.compare_to_galaxy(model, psf, galaxy_data)
residuals[i] = np.sum(difference_data**2)/np.multiply.reduce(galaxy_data.shape)
print('done in ', time.time() - t0)
# + [markdown] outputHidden=false inputHidden=false
# Let's have a look at these residual scores.
#
# Note these are not the 0-100 scores shown to users; to get from these to those, use
#
# $$
# S = 100 \exp\left(\frac{-300}{N}\sum_{i=0}^N\frac{\text{arcsinh}^2\left(\,|\text{y}_i - M_i|\ /\ 0.6\right)}{\text{arcsinh}\,0.6 }\right)
# $$
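# A minimal NumPy sketch of that mapping, assuming the formula exactly as displayed above (the function name `user_score` is ours, not part of the project code):

```python
import numpy as np

def user_score(galaxy, model):
    """Map per-pixel differences between an image and a rendered model to the
    0-100 score shown to volunteers, following the displayed formula."""
    d = np.arcsinh(np.abs(galaxy - model) / 0.6)**2 / np.arcsinh(0.6)
    return 100 * np.exp(-300 * np.mean(d))

img = np.random.default_rng(0).random((10, 10))
print(user_score(img, img))                      # a perfect model scores 100
print(user_score(img, np.zeros_like(img)))       # a poor model decays towards 0
```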
# + outputHidden=false inputHidden=false
plt.figure(figsize=(10, 5))
plt.plot(residuals)
plt.ylabel('Mean error per pixel')
plt.xlabel('Annotation Index')
plt.hlines(np.sum(galaxy_data)/np.multiply.reduce(galaxy_data.shape), 0, len(residuals)-1, 'C1', label='Empty model')
plt.legend()
plt.yscale('log')
# + [markdown] outputHidden=false inputHidden=false
# Which index was the best?
# + outputHidden=false inputHidden=false
print('Best index:', np.argmin(residuals))
# + [markdown] outputHidden=false inputHidden=false
# Let's have a look at the two most successful models
# + outputHidden=false inputHidden=false
indices = np.argsort(residuals)[:2]
for i, model in enumerate(model_array[indices]):
    rg.plot_model(model, psf, galaxy_data, imshow_kwargs=imshow_kwargs, title='Mean pixel error: {:.2e}'.format(
        np.array(residuals)[indices][i]
    ))
# + [markdown] outputHidden=false inputHidden=false
# And the two least...
# + outputHidden=false inputHidden=false
indices = np.argsort(residuals)[-2:]
for i, model in enumerate(model_array[indices]):
    rg.plot_model(model, psf, galaxy_data, imshow_kwargs=imshow_kwargs, title='Mean pixel error: {:.2e}'.format(
        np.array(residuals)[indices][i]
    ))
# + [markdown] outputHidden=false inputHidden=false
# ## And the winner is...
# + outputHidden=false inputHidden=false
best_annotation = annotations_for_subject[np.argmin(residuals)]
best_annotation_parsed = pa.parse_annotation(best_annotation, size_diff=size_diff)
# + outputHidden=false inputHidden=false
model = rg.calculate_model(
best_annotation_parsed,
diff_data['width'],
)
# + outputHidden=false inputHidden=false
difference_data = rg.plot_model(model, psf, galaxy_data, imshow_kwargs=imshow_kwargs)
# + [markdown] outputHidden=false inputHidden=false
# <NAME>
#
# Let's see what the drawn shapes looked like:
# + outputHidden=false inputHidden=false
geoms = []
for func, comp in (
    (gu.ellipse_geom_from_zoo, best_annotation[0]),
    (gu.ellipse_geom_from_zoo, best_annotation[1]),
    (gu.bar_geom_from_zoo, best_annotation[2]),
):
    try:
        geoms.append(func(comp['value'][0]['value'][0]))
    except IndexError:
        pass
plt.figure(figsize=(8, 8))
plt.imshow(galaxy_data, **imshow_kwargs)
for i, geom in enumerate(geoms):
    plt.gca().add_patch(
        PolygonPatch(transform_patch(geom), fc='C{}'.format(i), ec='k', alpha=0.2, zorder=3)
    )
for points, params in best_annotation_parsed['spiral']:
    plt.plot(*transform_coords(points).T)
# + [markdown] outputHidden=false inputHidden=false
# And there we have it! Let's see if we can't improve their slider values a bit... (in another notebook)
# + outputHidden=false inputHidden=false
with open('example-annotation.json', 'w') as f:
    json.dump(pa.make_json(best_annotation_parsed), f)
# Source notebook: model_scoring/find_best_model.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Leaders
# An element is a leader if it is greater than all the elements to its right. The rightmost element is always a leader.
# For example, in the array [16, 17, 4, 3, 5, 2], the leaders are 17, 5 and 2.
def getLeaders(arr):
    big = arr[len(arr) - 1]
    yield big
    for i in range(len(arr) - 2, -1, -1):
        if arr[i] > big:
            big = arr[i]
            yield big
arr = [16, 17, 4, 3, 5, 2]
arr
list(getLeaders(arr))
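# Note that, because the generator walks the array from right to left, the leaders come out in reverse; reversing the result restores the original array order (the function is repeated here so the snippet is self-contained):

```python
def getLeaders(arr):
    # scan right-to-left, yielding each new running maximum
    big = arr[len(arr) - 1]
    yield big
    for i in range(len(arr) - 2, -1, -1):
        if arr[i] > big:
            big = arr[i]
            yield big

arr = [16, 17, 4, 3, 5, 2]
right_to_left = list(getLeaders(arr))   # leaders in reverse order
left_to_right = right_to_left[::-1]     # leaders in original array order
print(left_to_right)
```

# The scan is a single O(n) pass, since each element is compared once against the running maximum.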
# Source notebook: array/leaders.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": false}
import pandas as pd
import seaborn as sns
from scipy import stats
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.style as style
style.use('seaborn-poster') #sets the size of the charts
style.use('ggplot')
from mpl_toolkits import mplot3d
# + pycharm={"name": "#%%\n", "is_executing": false}
df = pd.read_excel('preprocessed.xlsx')
df=df.iloc[:,1:]
df = df.drop(['PAC','PAC Dev','Actual Duration','project_name'],axis=1)
# df.describe()
df.head(5)
# + pycharm={"name": "#%%\n", "is_executing": false}
def remove_outliers(df, columns):
    for column in columns:
        df['z-score ' + column] = stats.zscore(df[column])
    for column in columns:
        df = df.loc[df['z-score ' + column].abs() <= 3]
        df = df.drop('z-score ' + column, axis=1)
    return df
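# A small self-contained check of the |z| <= 3 rule on synthetic data with one planted outlier (the column name and numbers are invented for the example):

```python
import numpy as np
import pandas as pd
from scipy import stats

# 99 well-behaved values plus one planted outlier far from the mean.
rng = np.random.default_rng(42)
data = pd.DataFrame({'Actual Cost': np.append(rng.normal(100, 5, 99), 500.0)})

# keep only rows whose z-score magnitude is at most 3
z = np.abs(stats.zscore(data['Actual Cost']))
cleaned = data.loc[z <= 3]

print(len(data), '->', len(cleaned))
```

# Only the planted outlier is dropped; the well-behaved values all survive the cut.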
# + pycharm={"name": "#%%\n", "is_executing": false}
#removing outliers using z-score
# removing outliers for selected Features
# df = remove_outliers(df,['Duration','Total Cost','Actual Cost'])
# Removing outliers for all features
df = remove_outliers(df,df.columns)
df.shape
# + pycharm={"name": "#%%\n", "is_executing": false}
fig = plt.figure(figsize=(16,10))
mask = np.triu(np.ones_like(df.corr(), dtype=bool))
sns.heatmap(df.corr(),mask=mask,vmin=-1, vmax=1, center=0,cmap=sns.diverging_palette(20, 220, n=200),annot=True)
plt.savefig('heatmap.png')
# + pycharm={"name": "#%%\n", "is_executing": false}
df = df[df['Actual Cost']!=df['Total Cost']]
df.to_excel('test.xlsx')
# + pycharm={"name": "#%%\n"}
# + [markdown] pycharm={"name": "#%% md\n", "is_executing": false}
# 3188 rows remain in the final dataset after removing outliers for all features
# + pycharm={"name": "#%%\n", "is_executing": false}
df.to_excel('final_data.xlsx')
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Creating Visualizations after outlier removal
#
# + pycharm={"name": "#%%\n", "is_executing": false}
name = 'Resource Cost'
resource_cost = df[df[name] > 0][name]
df_resource = pd.DataFrame(resource_cost)
for i in range(3):
    df_resource = remove_outliers(df_resource, [name])
# + pycharm={"name": "#%%\n", "is_executing": false}
plt.title(name + ' BoxPlot')
sns.boxplot(df_resource[name], color='green')
plt.xticks(np.linspace(min(df_resource[name]), max(df_resource[name]) + 1, 10, dtype=int))
plt.savefig('boxplot-' + name + '.png')
# + pycharm={"name": "#%%\n", "is_executing": false}
plt.title(name+' Histogram')
plt.hist(df_resouce[name])
plt.savefig('histogram-'+name+'.png')
# + pycharm={"name": "#%%\n", "is_executing": false}
plt.title('Actual Cost BoxPlot')
sns.boxplot(df['Actual Cost'],color='green')
plt.xticks(np.linspace(min(df['Actual Cost']),max(df['Actual Cost']) +1 ,10,dtype=int))
plt.savefig('boxplot-Actual Cost.png')
# + pycharm={"name": "#%%\n", "is_executing": false}
fig = plt.figure(figsize = (30,10))
plt.title('Actual Cost Histogram')
n,bins,patches = plt.hist(df['Actual Cost'],bins=20)
plt.xticks(list(bins))
plt.savefig('histogram-Actual Cost.png')
# + pycharm={"name": "#%%\n", "is_executing": false}
max(df['Total Cost'])
# + pycharm={"name": "#%%\n", "is_executing": false}
# After removing all outliers
# note: remove_outliers already drops the z-score columns, so this clean-up
# is only needed if they were recreated separately
columns = ['Actual Cost', 'Total Cost', 'Duration']
for column in columns:
    df = df.drop(columns=['z-score ' + column])
sns.pairplot(df)
# + pycharm={"name": "#%%\n", "is_executing": false}
plt.figure(figsize=(20,10))
sns.pairplot(df)
plt.savefig('pariplot-after-removing-outliers.png')
# + pycharm={"name": "#%%\n", "is_executing": false}
df.to_excel('data-2.xlsx')
#df.to_excel('data-completely-cleaned.xlsx')
# + pycharm={"name": "#%%\n", "is_executing": false}
# Source notebook: 4-outliers/outlier.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from vpython import *
ball = sphere(pos=vector(-5,0,0), radius=0.5,color=color.cyan)
wallR = box(pos=vector(6,0,0), size=vector(0.2,12,12), color=color.green)
ball.velocity = vector(25,0,0)
deltat = 0.005
t = 0
ball.pos = ball.pos + ball.velocity*deltat
while t < 3:
    rate(100)
    if ball.pos.x > wallR.pos.x:
        ball.velocity.x = -ball.velocity.x
    ball.pos = ball.pos + ball.velocity*deltat
    t = t + deltat
# -
# [Image1]: ./Images/Spring-Pendulum.png "Problem diagram"
#
# # Spring - Pendulum
#
# ![Problem diagram][Image1]
#
#
# ### Lagrangian Function
#
# Taking the roof as the zero of gravitational potential ($V_{g} = 0$), we find that the gravitational potential energy $V_{g}$ takes the form:
#
# $$ V_{g} = - m g (L_{0} + L) \cos{\theta} $$
#
# Where $L$ is the spring's elongation and $L_{0}$ is the spring's own length. Furthermore, the elastic energy $V_{k}$ associated with the spring's elongation takes the form:
#
# $$ V_{k} = \frac{1}{2} k L^{2} $$
#
# Moreover, the kinetic energy $T$ is:
#
# $$ T = \frac{1}{2} m \left(\dot{r}^{2} + r^{2} \dot{\theta}^{2} \right) = \frac{1}{2} m \; \left( \dot{L}^{2} + (L_{0} + L)^{2} \dot{\theta}^{2} \right)$$
#
# Where we have considered that $r = L_{0} + L$, so $\dot{r} = \dot{L} $. Then, the Lagrangian finally takes the form:
#
# $$ \mathscr{L} = T - V = T - V_{g} - V_{k} = \frac{1}{2} m \; \left( \dot{L}^{2} + (L_{0} + L)^{2} \dot{\theta}^{2} \right) + m g (L_{0} + L) \cos{\theta} - \frac{1}{2} k L^{2} $$
#
# ### Equations of motion
#
# Then the equations of motion are:
#
# $$\frac{d}{dt} \left( \frac{\partial \mathscr{L}}{\partial \dot{L}} \right) - \frac{\partial \mathscr{L}}{\partial L} = 0 \quad \implies \quad \ddot{L} = (L_{0} + L) \dot{\theta}^{2} + g \cos{\theta} - \frac{k}{m} L$$
#
# $$\frac{d}{dt} \left( \frac{\partial \mathscr{L}}{\partial \dot{\theta}} \right) - \frac{\partial \mathscr{L}}{\partial \theta} = 0 \quad \implies \quad \ddot{\theta} = - \frac{1}{(L_{0} + L)} \left[ \; g \sin{\theta} + 2 \dot{L} \dot{\theta} \; \right]$$
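# The two equations of motion above can be verified symbolically; a sketch with SymPy, applying the Euler-Lagrange equations to the Lagrangian defined earlier:

```python
import sympy as sp

t = sp.symbols('t')
m, g, k, L0 = sp.symbols('m g k L_0', positive=True)
L = sp.Function('L')(t)
th = sp.Function('theta')(t)

# Lagrangian from the section above: T - V_g - V_k
T = sp.Rational(1, 2) * m * (L.diff(t)**2 + (L0 + L)**2 * th.diff(t)**2)
V = -m * g * (L0 + L) * sp.cos(th) + sp.Rational(1, 2) * k * L**2
Lag = T - V

# Euler-Lagrange equation for L, solved for the second derivative
eom_L = sp.diff(Lag.diff(L.diff(t)), t) - Lag.diff(L)
ddL = sp.solve(sp.Eq(eom_L, 0), L.diff(t, 2))[0]
check_L = sp.simplify(ddL - ((L0 + L) * th.diff(t)**2 + g * sp.cos(th) - k / m * L))
print(check_L)

# Euler-Lagrange equation for theta
eom_th = sp.diff(Lag.diff(th.diff(t)), t) - Lag.diff(th)
ddth = sp.solve(sp.Eq(eom_th, 0), th.diff(t, 2))[0]
check_th = sp.simplify(ddth + (g * sp.sin(th) + 2 * L.diff(t) * th.diff(t)) / (L0 + L))
print(check_th)
```

# Both residuals simplify to zero, confirming the two equations of motion stated above.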
# ### Solve the system numerically
#
# As a first approach, we will try to solve the system numerically. To do this, we have to reduce the order of the differential equations, going from two second-order differential equations to four first-order ones.
#
# Let:
#
# $$ v = \dot{L} $$
#
# $$ \omega = \dot{\theta} $$
#
# So:
#
# $$ \dot{L} = f_{1} (L, \theta, v, \omega, \mathrm{params}) = v $$
#
# $$ \dot{\theta} = f_{2} (L, \theta, v, \omega, \mathrm{params}) = \omega$$
#
# $$ \dot{v} = f_{3} (L, \theta, v, \omega, \mathrm{params}) = (L_{0} + L) \omega^{2} + g \cos{\theta} - \frac{k}{m} L $$
#
# $$ \dot{\omega} = f_{4} (L, \theta, v, \omega, \mathrm{params}) = - \frac{1}{(L_{0} + L)} \left[ \; g \sin{\theta} + 2 v \omega \; \right] $$
#
# Where we are working in space $( \; L \; , \; \theta \; , \; v \; , \; \omega \; )$ and $ \mathrm{params} = [ m, g, k, L_{0}]$
#
# In order to plot real motion of the mass, take into account that:
#
# $$ x = (L_{0} + L) \sin{\theta}$$
#
# $$ y = - (L_{0} + L) \cos{\theta}$$
#Libraries
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy import signal
# +
# Functions for solving differential equations and to define fluxes in phase portrait
def f1(L, theta, v, omega, m, g, k, Lo):
    return v

def f2(L, theta, v, omega, m, g, k, Lo):
    return omega

def f3(L, theta, v, omega, m, g, k, Lo):
    return (Lo + L) * omega**2 + g * np.cos(theta) - k/m * L

def f4(L, theta, v, omega, m, g, k, Lo):
    return - (g * np.sin(theta) + 2.0 * v * omega) / (Lo + L)

def dydt(y, t, m, g, k, Lo):
    L, theta, v, omega = y
    dL = f1(L, theta, v, omega, m, g, k, Lo)
    dtheta = f2(L, theta, v, omega, m, g, k, Lo)
    dv = f3(L, theta, v, omega, m, g, k, Lo)
    domega = f4(L, theta, v, omega, m, g, k, Lo)
    return [dL, dtheta, dv, domega]
# -
# Let's search for an interesting dynamic regime in the system by trying out some values for the system's parameters.
# +
# Constant parameters
m = 0.2
g = 9.8
k = 3.5
Lo = 1
# Initial conditions
L0 = Lo
v0 = 0.
theta0 = 0.3
omega0 = 0.
y0 = [L0, theta0, v0, omega0]
# Time
ti = 0
tf = 25
Nt = 1000
t, timestep = np.linspace(ti, tf , Nt, retstep=True)
print('time step = {}'.format(timestep))
# Solve differential equations
sol = odeint(dydt, y0, t, args=(m, g, k, Lo))
# Retrieve the solution variables
L = sol[:, 0]
theta = sol[:, 1]
v = sol[:, 2]
omega = sol[:, 3]
# Retrieve (x,y) positions
x = (Lo + L) * np.sin(theta)
y = -(Lo + L) * np.cos(theta)
# +
# Plot positions
plt.close()
plt.figure(figsize=(8,8))
plt.plot(x, y,"-")
plt.xlabel('x')
plt.ylabel('y')
plt.title('Real motion')
plt.axis([-np.abs(x).max(), np.abs(x).max(), -np.abs(y).max(), 0]) # plt.axis([xmin, xmax, ymin, ymax])
plt.grid()
plt.show()
# +
# Plot time series
plt.close()
plt.title(r'$L$ time series')
plt.plot(t, L,"-")
plt.xlabel(r'$t$')
plt.ylabel(r'$L$')
plt.grid()
plt.show()
plt.close()
plt.title(r'$\theta$ time series')
plt.plot(t, theta,"-")
plt.xlabel(r'$t$')
plt.ylabel(r'$\theta$')
plt.grid()
plt.show()
# +
# Power spectrum and Power spectrum density for L time series.
time_series = L
# fourier = np.fft.fft(time_series)
# n = time_series.size
print('Nyquist frequency = {}'.format(1./(2.*timestep)))
plt.close()
f, Pxx = signal.periodogram(time_series, 1./timestep, scaling='spectrum')
plt.semilogy(f, Pxx)
plt.title(r'$L$ Power Spectrum (log scale y)')
plt.xlabel('frequency [Hz]')
plt.ylabel('[(time_series_units)**2]')
plt.grid()
plt.show()
plt.close()
f, Pxx = signal.periodogram(time_series, 1./timestep, scaling='spectrum')
plt.plot(f, Pxx,'-')
plt.title(r'$L$ Power Spectrum')
plt.xlabel('frequency [Hz]')
plt.ylabel('[(time_series_units)**2]')
plt.grid()
plt.show()
plt.close()
f, Pxx_den = signal.periodogram(time_series, 1./timestep, scaling='density')
plt.semilogy(f, Pxx_den)
plt.title(r'$L$ Power Spectrum Density (log scale y)')
plt.xlabel('frequency [Hz]')
plt.ylabel('[(time_series_units)**2]')
plt.grid()
plt.show()
plt.close()
f, Pxx_den = signal.periodogram(time_series, 1./timestep, scaling='density')
plt.plot(f, Pxx_den, '-')
plt.title(r'$L$ Power Spectrum Density')
plt.xlabel('frequency [Hz]')
plt.ylabel('[(time_series_units)**2]')
plt.grid()
plt.show()
# +
# Power spectrum and Power spectrum density for THETA time series.
time_series = theta
plt.close()
f, Pxx = signal.periodogram(time_series, 1./timestep, scaling='spectrum')
plt.semilogy(f, Pxx)
plt.title(r'$\theta$ Power Spectrum (log scale y)')
plt.xlabel('frequency [Hz]')
plt.ylabel('[(time_series_units)**2]')
plt.grid()
plt.show()
plt.close()
f, Pxx = signal.periodogram(time_series, 1./timestep, scaling='spectrum')
plt.plot(f, Pxx, '-')
plt.title(r'$\theta$ Power Spectrum')
plt.xlabel('frequency [Hz]')
plt.ylabel('[(time_series_units)**2]')
plt.grid()
plt.show()
plt.close()
f, Pxx_den = signal.periodogram(time_series, 1./timestep, scaling='density')
plt.semilogy(f, Pxx_den)
plt.title(r'$\theta$ Power Spectrum Density (log scale y)')
plt.xlabel('frequency [Hz]')
plt.ylabel('[(time_series_units)**2]')
plt.grid()
plt.show()
plt.close()
f, Pxx_den = signal.periodogram(time_series, 1./timestep, scaling='density')
plt.plot(f, Pxx_den, '-')
plt.title(r'$\theta$ Power Spectrum Density')
plt.xlabel('frequency [Hz]')
plt.ylabel('[(time_series_units)**2]')
plt.grid()
plt.show()
# +
print('Frequencies length = {} \nSpectrum length = {}\n'.format(len(f), len(Pxx)))
Pxx_max_index = np.argmax(Pxx)
f_max = f[Pxx_max_index]
print('Maximum frequency = {}\nCorresponding period = {}\n'.format(f_max, 1./f_max))
f2 = np.hstack((f[0:Pxx_max_index], f[Pxx_max_index+1:]))
Pxx2 = np.hstack((Pxx[0:Pxx_max_index], Pxx[Pxx_max_index+1:]))
Pxx2_max_index = np.argmax(Pxx2)
f2_max = f2[Pxx2_max_index]
print('Second maximum frequency = {}\nCorresponding period = {}\n'.format(f2_max, 1./f2_max))
f3 = np.hstack((f2[0:Pxx2_max_index], f2[Pxx2_max_index+1:]))
Pxx3 = np.hstack((Pxx2[0:Pxx2_max_index], Pxx2[Pxx2_max_index+1:]))
Pxx3_max_index = np.argmax(Pxx3)
f3_max = f3[Pxx3_max_index]
print('Third maximum frequency = {}\nCorresponding period = {}\n'.format(f3_max, 1./f3_max))
f4 = np.hstack((f3[0:Pxx3_max_index], f3[Pxx3_max_index+1:]))
Pxx4 = np.hstack((Pxx3[0:Pxx3_max_index], Pxx3[Pxx3_max_index+1:]))
Pxx4_max_index = np.argmax(Pxx4)
f4_max = f4[Pxx4_max_index]
print('Fourth maximum frequency = {}\nCorresponding period = {}\n'.format(f4_max, 1./f4_max))
# -
# ### Discussion
#
# It seems we have only found a periodic regime. Trying out different values for **ALL** the parameters by hand would require a huge effort, so we need a smarter method, and the best first step is to nondimensionalize the system.
#
# ### Dimensionless system
#
# Introducing the dimensionless elongation $x = (L_{0} + L)/L_{0}$, the dimensionless time $\tau = t \sqrt{g / L_{0}}$ and the parameter $\gamma = k L_{0} / (m g)$, the system can be rewritten as:
#
# $$\frac{\mathrm{d}^{2}x}{\mathrm{d}\tau^{2}} = x \; \left( \frac{\mathrm{d}\theta}{\mathrm{d}\tau} \right)^{2} + \cos{\theta} + \gamma \left( 1 - x \right) $$
#
# $$\frac{\mathrm{d}^{2} \theta}{\mathrm{d}\tau^{2}} = - \frac{\sin{\theta}}{x} - \frac{2}{x} \frac{\mathrm{d} x}{\mathrm{d} \tau} \frac{\mathrm{d} \theta}{\mathrm{d} \tau} $$
#
# Let:
#
# $$ v_{ad} = \frac{\mathrm{d}x}{\mathrm{d} \tau} $$
#
# $$ \omega = \frac{\mathrm{d}\theta}{\mathrm{d} \tau} $$
#
# So:
#
# $$ \frac{\mathrm{d}x}{\mathrm{d} \tau} = F_{1} (x, \theta, v_{ad}, \omega, \gamma) = v_{ad} $$
#
# $$ \frac{\mathrm{d}\theta}{\mathrm{d} \tau} = F_{2} (x, \theta, v_{ad}, \omega, \gamma) = \omega$$
#
# $$ \frac{\mathrm{d}v_{ad}}{\mathrm{d} \tau} = F_{3} (x, \theta, v_{ad}, \omega, \gamma) = x \; \omega^{2} + \cos{\theta} + \gamma \left( 1 - x \right) $$
#
# $$ \frac{\mathrm{d}\omega}{\mathrm{d} \tau} = F_{4} (x, \theta, v_{ad}, \omega, \gamma) = - \frac{\sin{\theta}}{x} - \frac{2 \; v_{ad} \; \omega}{x} $$
#
# Now we are working in the space $( \; x \; , \; \theta \; , \; v_{ad} \; , \; \omega \; )$ and the only parameter is $\gamma$.
#
# In order to plot the real motion of the mass, take into account that:
#
# $$ x_{real} = L_{0} \; x \; \sin{\theta}$$
#
# $$ y_{real} = - L_{0} \; x \; \cos{\theta}$$
#
# The Jacobian of the system is:
#
# $ J = \begin{pmatrix}
# 0 & 0 & 1 & 0 \\
# 0 & 0 & 0 & 1 \\
# \omega^{2} - \gamma & -\sin{\theta} & 0 & 2 x \\
# \frac{1}{x^{2}} \left( \sin{\theta} + 2 \; v_{ad} \; \omega \right)
# & - \frac{\cos{\theta}}{x} & - \frac{2 \omega}{x} & - \frac{2 \; v_{ad}}{x}
# \end{pmatrix} $
#
# Evaluated at the fixed point $\theta = 0$, $x = \frac{\gamma + 1}{\gamma}$ (with $v_{ad} = \omega = 0$):
#
# $ J_{1} = \begin{pmatrix}
# 0 & 0 & 1 & 0 \\
# 0 & 0 & 0 & 1 \\
# -\gamma & 0 & 0 & 2 \frac{\gamma + 1}{\gamma} \\
# 0 & - \frac{1}{x} & 0 & 0
# \end{pmatrix} $
#
# and at $\theta = \pi$, $x = \frac{\gamma - 1}{\gamma}$:
#
# $ J_{2} = \begin{pmatrix}
# 0 & 0 & 1 & 0 \\
# 0 & 0 & 0 & 1 \\
# -\gamma & 0 & 0 & 2 \frac{\gamma - 1}{\gamma} \\
# 0 & \frac{1}{x} & 0 & 0
# \end{pmatrix} $
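# As a quick numerical check (a sketch added here, not part of the original derivation), the eigenvalues of $J_{1}$ at the fixed point $\theta = 0$, $x = \frac{\gamma+1}{\gamma}$ can be evaluated with NumPy; for $\gamma > 0$ they come out purely imaginary (a linear centre), consistent with the periodic regimes observed above.

```python
import numpy as np

gamma = 0.5                     # same value used in the simulations below
x_eq = (gamma + 1.0) / gamma    # equilibrium spring length for theta = 0
J1 = np.array([[0.0,          0.0, 1.0, 0.0],
               [0.0,          0.0, 0.0, 1.0],
               [-gamma,       0.0, 0.0, 2.0 * (gamma + 1.0) / gamma],
               [0.0, -1.0 / x_eq, 0.0, 0.0]])
eigvals = np.linalg.eigvals(J1)
print(eigvals)  # purely imaginary pairs: +/- i*sqrt(gamma) and +/- i/sqrt(x_eq)
```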
from sympy.solvers import solve, nonlinsolve
from sympy import Symbol, symbols
from sympy import sin, cos, limit
# note: avoid 'from mpmath import *' here, as it would shadow sympy's sin and cos
# Here we search for a solution of the equation $x^{2} = 1$ such that $x$ is in the interval $[0.5, 3]$.
x = Symbol('x')
eq = x**2 - 1
solve([x >= 0.5, x <= 3, eq ], x)
# +
x, theta, gamma = symbols('x, theta, gamma', real=True)
eq1 = sin(theta)/x
solve(eq1, [x, theta])
# +
# theta = 0 --> cos(0) = 1
eq2 = 1 + gamma * (1 - x)
# theta = pi --> cos(pi) = -1
eq3 = -1 + gamma * (1 - x)
print(solve(eq2, [x, gamma]))
print('\n')
print(solve(eq3, [x, gamma]))
# +
# Functions for solving the differential equations and defining the flow in the phase portrait
def F1(x, theta, Vad, omega, gamma):
return Vad
def F2(x, theta, Vad, omega, gamma):
return omega
def F3(x, theta, Vad, omega, gamma):
return x * omega**2 + np.cos(theta) + gamma * (1 - x)
def F4(x, theta, Vad, omega, gamma):
return - ( np.sin(theta) + 2.0 * Vad * omega) / x
def dYdtau(y, tau, gamma):
x, theta, Vad, omega = y
dx = F1(x, theta, Vad, omega, gamma)
dtheta = F2(x, theta, Vad, omega, gamma)
dVad = F3(x, theta, Vad, omega, gamma)
domega = F4(x, theta, Vad, omega, gamma)
return [dx, dtheta, dVad, domega]
# +
# Constant parameters
gamma = 0.5
Lo = 1
# Initial conditions
x0 = Lo
Vad0 = 0.
theta0 = 0.3
omega0 = 0.
y0 = [x0, theta0, Vad0, omega0]
# Time
tau_i = 0
tau_f = 100
Ntau = 4000
tau, tau_step = np.linspace(tau_i, tau_f , Ntau, retstep=True)
print('tau time step = {}'.format(tau_step))
# Solve differential equations
sol = odeint(dYdtau, y0, tau, args=(gamma,))
# Retrieve state variables
x = sol[:, 0]
theta = sol[:, 1]
Vad = sol[:, 2]
omega = sol[:, 3]
# Retrieve (x,y) positions
xReal = Lo * x * np.sin(theta)
yReal = - Lo * x * np.cos(theta)
# Plot positions
plt.close()
plt.figure(figsize=(8,8))
plt.plot(xReal, yReal,"-")
plt.xlabel('x')
plt.ylabel('y')
plt.title('Real motion')
plt.axis([-np.abs(xReal).max(), np.abs(xReal).max(), -np.abs(yReal).max(), 0]) # plt.axis([xmin, xmax, ymin, ymax])
plt.grid()
plt.show()
# +
# Constant parameters
gamma = 10.
Lo = 1
# Initial conditions
x0 = Lo
Vad0 = 2.
theta0 = 0.0
omega0 = 2.
y0 = [x0, theta0, Vad0, omega0]
# Time
tau_i = 0
tau_f = 200
Ntau = 8000
tau, tau_step = np.linspace(tau_i, tau_f , Ntau, retstep=True)
print('tau time step = {}'.format(tau_step))
# Solve differential equations
sol = odeint(dYdtau, y0, tau, args=(gamma,))
# Retrieve state variables
x = sol[:, 0]
theta = sol[:, 1]
Vad = sol[:, 2]
omega = sol[:, 3]
# Retrieve (x,y) positions
xReal = Lo * x * np.sin(theta)
yReal = - Lo * x * np.cos(theta)
# Plot positions
plt.close()
plt.figure(figsize=(8,8))
plt.plot(xReal, yReal,"-")
plt.xlabel('x')
plt.ylabel('y')
plt.title('Real motion')
#plt.axis([-np.abs(xReal).max(), np.abs(xReal).max(), -np.abs(yReal).max(), 0]) # plt.axis([xmin, xmax, ymin, ymax])
plt.grid()
plt.show()
# Plot time series
plt.close()
plt.title(r'$x$ time series')
plt.plot(tau, x,"-")
plt.xlabel(r'$\tau$')
plt.ylabel(r'$L$')
plt.grid()
plt.show()
plt.close()
plt.title(r'$\theta$ time series')
plt.plot(tau, theta,"-")
plt.xlabel(r'$\tau$')
plt.ylabel(r'$\theta$')
plt.grid()
plt.show()
# +
# Constant parameters
gamma = np.pi
Lo = 1.
# Initial conditions
x0 = Lo
Vad0 = 10.
theta0 = 0.1
omega0 = 20*np.pi
y0 = [x0, theta0, Vad0, omega0]
# Time
tau_i = 0
tau_f = 200
Ntau = 8000
tau, tau_step = np.linspace(tau_i, tau_f , Ntau, retstep=True)
print('tau time step = {}'.format(tau_step))
# Solve differential equations
sol = odeint(dYdtau, y0, tau, args=(gamma,))
# Retrieve state variables
x = sol[:, 0]
theta = sol[:, 1]
Vad = sol[:, 2]
omega = sol[:, 3]
# Retrieve (x,y) positions
xReal = Lo * x * np.sin(theta)
yReal = - Lo * x * np.cos(theta)
# Plot positions
plt.close()
plt.figure(figsize=(8,8))
plt.plot(xReal, yReal,"-")
plt.xlabel('x')
plt.ylabel('y')
plt.title('Real motion')
#plt.axis([-np.abs(xReal).max(), np.abs(xReal).max(), -np.abs(yReal).max(), 0]) # plt.axis([xmin, xmax, ymin, ymax])
plt.grid()
plt.show()
# Plot time series
plt.close()
plt.title(r'$x$ time series')
plt.plot(tau, x,"-")
plt.xlabel(r'$\tau$')
plt.ylabel(r'$L$')
plt.grid()
plt.show()
plt.close()
plt.title(r'$\theta$ time series')
plt.plot(tau, theta,"-")
plt.xlabel(r'$\tau$')
plt.ylabel(r'$\theta$')
plt.grid()
plt.show()
# -
| Spring-Pendulum.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# The problem of graph clustering for social network graphs like `page-page` naturally moulds into community detection. There are a number of community detection algorithms; we'll test the [Girvan–Newman algorithm](https://en.wikipedia.org/wiki/Girvan%E2%80%93Newman_algorithm). More algorithmic details are in the [research paper](https://www.pnas.org/content/99/12/7821).
#
# Note: This notebook continues from `visual.ipynb`
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
target = pd.read_csv('../facebook_large/musae_facebook_target.csv')
edges = pd.read_csv('../facebook_large/musae_facebook_edges.csv')
G = nx.Graph()
for it, cat in zip(target['id'], target['page_type']):
G.add_node(it, page_type=cat)
for n1, n2 in zip(edges['id_1'], edges['id_2']):
G.add_edge(n1, n2)
# sanity check
G.number_of_nodes(), G.number_of_edges()
# returns the edge with the maximum edge-betweenness-centrality measure
def max_ebc_edge(graph):
ebc = nx.edge_betweenness_centrality(graph)
return max(ebc, key=lambda z: ebc[z])
# the girvan-newman algorithm
def girvan_newman(graph):
cc, cnt = nx.connected_components(graph), nx.number_connected_components(graph)
num_comm: int = 1
    # keep removing edges until the graph splits into more than num_comm components
while(cnt <= num_comm):
# remove the edge with the maximum EBC
graph.remove_edge(*max_ebc_edge(graph))
# update cc and cnt
cc, cnt = nx.connected_components(graph), nx.number_connected_components(graph)
return cc
# testing the algorithm on a much simpler graph first
# this is a sample graph built into NetworkX
F = nx.davis_southern_women_graph()
max_ebc_edge(F)
# communities in the graph
c = girvan_newman(F.copy())
# nodes forming these communities
c_nodes = [list(i) for i in c]
for n in c_nodes:
    print(len(n))
# plot the communities
colors = ['red', 'blue']
color_map = [colors[0] if n in c_nodes[0] else colors[1]
for n in F]
nx.draw(F, node_color=color_map, with_labels=True)
plt.show()
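# For comparison (an optional sketch), NetworkX also ships a built-in `girvan_newman` generator in `networkx.algorithms.community`; it yields successive splits (first into 2 communities, then 3, and so on), so its first item should match our 2-community result.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman as nx_girvan_newman

F2 = nx.davis_southern_women_graph()
# the generator yields tuples of node sets, one tuple per level of the split
two_communities = next(nx_girvan_newman(F2))
print([len(c) for c in two_communities])
```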
# Finally, testing the algorithm on our original graph
gcc = girvan_newman(G.copy())
| src/girvan-newman.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Once you have finished the course, please help us by completing the following **end-of-course survey**.
#
# [](https://forms.office.com/Pages/ResponsePage.aspx?id=r4yvt9iDREaFrjF8VFIjwUHkKiCq1wxFstxAwkoFiilUOExRVkVMWlZERVcyWlpUU1EyTFg4T1Q3WC4u)
#
# ## 12. Time Series
#
# [Data Science playlist (in Spanish)](https://www.youtube.com/playlist?list=PLjyvn6Y1kpbEmRY4-ELeRA80ZywV7Xd67)
# [](https://www.youtube.com/watch?v=yfgE0GheCWY&list=PLLBUgWXdTBDg1Qgmwt4jKtVn9BWh5-zgy "Python Data Science")
#
# **Time series** data is produced sequentially as new measurements are recorded. Models derived from the data give insight into what happens next. They also show how the system can be changed to achieve a different future outcome. Time series models are a discrete-time representation of a dynamic system. Putting a model into time series form is the basis of many methods in dynamics and control. A **Digital Twin** is a virtual representation of a process that runs in parallel with the physical system. A time series model can be considered a Digital Twin in the limited sense of the specific inputs and outputs included in the model. Below is a time series model with a single input `u` and a single output `y`, where `k` is the index of the time interval.
#
# $y_{k+1} = \sum_{i=1}^{n_a} a_i y_{k-i+1} + \sum_{i=1}^{n_b} b_i u_{k-i+1}$
#
# Time series models are used for system identification and control. There is additional information on specific types of time series and dynamic models, such as [ARX (Auto-Regressive with eXogenous inputs)](https://apmonitor.com/wiki/index.php/Apps/ARXTimeSeries) models, [discrete state-space](https://apmonitor.com/wiki/index.php/Apps/DiscreteStateSpace) models, and [continuous state-space](https://apmonitor.com/wiki/index.php/Apps/LinearStateSpace) models.
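# As a minimal sketch (with made-up coefficients $a_1$, $b_1$, not from any identified model), the recursion above can be simulated in a few lines of NumPy for the first-order case $n_a = n_b = 1$:

```python
import numpy as np

# hypothetical first-order model: y[k+1] = a1*y[k] + b1*u[k]
a1, b1 = 0.9, 0.1
n = 50
u = np.ones(n)           # unit step input
y = np.zeros(n)
for k in range(n - 1):
    y[k + 1] = a1 * y[k] + b1 * u[k]
# the response settles toward the steady state b1 / (1 - a1) = 1.0
print(y[-1])
```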
# 
#
# ### Predict the Time Series Response `y` to Changes in the Input `u`
#
# Similarly to differential equation models, a time series model can have an input (feature) that is changed by an external source, such as an actively changing measurement sensor, a person (manually), or a computer.
#
# 
#
# Compute the response `y` when the input `u` changes from `0` to `100` at `k=5`. Use the values $n_a$=3, $n_b$=1, $n_u$=1, and $n_y$=1. The time series model is:
#
# $y_{k+1} = a_1 \, y_k + a_2 \, y_{k-1} + a_3 \, y_{k-2} + b_1 \, u_k$
#
# | **Parameter** | **Value** |
# | ----------- | ----------- |
# | $a_1$ | 0.6 |
# | $a_2$ | -0.15 |
# | $a_3$ | 0.46 |
# | $b_1$ | 0.08 |
#
# The initial condition is $y_0, y_1, y_2 = 0$ and the solution should be computed up to $k=100$. Complete the time series equation inside the loop.
#
# ```python
# y[k+1] = y[k] # complete the time series equation here
# ```
# +
import numpy as np
import pandas as pd
n = 101
t = np.linspace(0,100,101)
u = np.zeros(n); u[5:]=100
y = np.zeros(n)
a = [0.6,-0.15,0.46]
b = [0.08]
for i in range(2,n-1):
k = int(t[i])
    y[k+1] = y[k] # complete the time series equation here
import matplotlib.pyplot as plt
# %matplotlib inline
plt.plot(t,y,t,u)
plt.show()
# -
# 
#
# ### Time Series Regression
#
# Now that you have simulated a time series model, the next step is to determine the coefficients from the data. The Gekko `sysid` function automates the system identification process for a time series.
#
# 
#
# You can get help on this function with `help(m.sysid)`. Part of the help is listed below.
#
# y,p,K = sysid(t,u,y,na=1,nb=1,shift='calc',pred='model')
#
# Input: t = time data
# u = input data for the regression
# y = output data for the regression
# na = number of output coefficients (default=1)
# nb = number of input coefficients (default=1)
# nk = input delay steps (default=0)
# shift (optional) =
# 'none' (no shift)
# 'init' (initial pt),
# 'mean' (mean center)
# 'calc' (calculate c)
# pred (option) =
# 'model' for output error regression form, implicit solution
# 'meas' for ARX regression form, explicit solution
# Using 'model' favors an unbiased model prediction but
# can require more time to compute, especially for large
# data sets
# Using 'meas' computes the coefficients of the time series
# model with an explicit solution
#
# Output: returns
# ypred (predicted outputs)
# p as coefficient dictionary with keys 'a','b','c'
# K gain matrix
#
# There are several options, such as `pred` = `meas` or `model`. With `meas`, the next time series step is predicted from a previous measurement, as in the `ARX` form. With `model`, previous model predictions are used to predict the next time step. This is also known as an Output Error (`OE`) model. Use `pred='meas'` for large data sets, since it is much faster to solve.
#
# The most important decision is how many coefficients to include in the model by setting `na` and `nb`. Start with small numbers and only add coefficients if more accuracy is needed to predict higher-order dynamics. Another factor is `shift`, where `init` is preferable when starting from steady-state conditions. Otherwise, `mean` or `calc` are good options to create an unbiased model without an offset in the predictions.
#
# 
#
# Change the number of coefficients `na` and `nb` and observe the prediction accuracy. Also, set `na=2` and `nb=2` and change to `pred='model'`.
#
# ```python
# na = 2 # output coefficients
# nb = 2 # input coefficients
# yp,p,K = m.sysid(t,u,y,na,nb,pred='meas')
# ```
#
# How long does it take to solve as the number of coefficients increases or as `pred` changes? You can time it with:
#
# ```python
# import time
# start = time.time()
# ### the function
# print('Elapsed time: ' + str(time.time()-start))
# ```
# +
from gekko import GEKKO
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# load data and parse into columns
url = 'http://apmonitor.com/pdc/uploads/Main/tclab_data4.txt'
data = pd.read_csv(url)
t = data['Time']
u = data['Q1']
y = data['T1']
# generate a time series model
m = GEKKO(remote=False)
# system identification
na = 2 # output coefficients
nb = 2 # input coefficients
yp,p,K = m.sysid(t,u,y,na,nb,pred='meas')
plt.figure(figsize=(10,6))
plt.subplot(2,1,1)
plt.plot(t,u)
plt.legend([r'$Q_1$ (%)'])
plt.ylabel('Heater MV (%)')
plt.subplot(2,1,2)
plt.plot(t,y,'b-',label=r'$T_{1,meas}$')
plt.plot(t,yp,'r--',label=r'$T_{1,pred}$')
plt.legend(); plt.ylabel('CV Temp (°C)')
plt.xlabel('Time (s)'); plt.savefig('12-sysid.png')
# -
# 
#
# ### Simulate Time Series
#
# There can also be multiple inputs and multiple outputs, such as when $n_a$=2, $n_b$=1, $n_u$=2, and $n_y$=2.
#
# $y_{1,k+1} = a_{1,1} \, y_{1,k} + a_{2,1} \, y_{1,k-1} + b_{1,1} \, u_{1,k} + b_{1,2} \, u_{2,k}$
#
# $y_{2,k+1} = a_{1,2} \, y_{2,k} + a_{2,2} \, y_{2,k-1} + b_{2,1} \, u_{1,k} + b_{2,2} \, u_{2,k}$
#
# Gekko has the `arx` model that simulates time series models once they have been identified. It requires a Python dictionary with the coefficient matrices $A\in\mathbb{R}^{n_a \, \mathrm{x} \, n_y}$, $B\in\mathbb{R}^{n_y \, \mathrm{x} \, \left(n_b \mathrm{x} n_u\right)}$, and $C\in\mathbb{R}^{n_y}$. This dictionary is created automatically by Gekko's `sysid` (system identification) function. Below is an example of creating the dictionary manually.
#
# ```python
# # python dictionary
# p = {'a':A,'b':B,'c':C}
# ```
#
# $A = \begin{bmatrix}0.36788 & 0.36788 \\ 0.223 & -0.136\end{bmatrix}$
# $B = \begin{bmatrix}0.63212 & 0.18964 \\ 0.31606 & 1.2642\end{bmatrix}$
# $C = \begin{bmatrix}0 & 0\end{bmatrix}$
#
# [Additional Gekko tutorials (in English)](https://apmonitor.com/wiki/index.php/Main/GekkoPythonOptimization) show how to solve other types of equations and optimization problems.
# +
import numpy as np
from gekko import GEKKO
import matplotlib.pyplot as plt
# %matplotlib inline
na = 2 # Number of A coefficients
nb = 1 # Number of B coefficients
ny = 2 # Number of output variables
nu = 2 # Number of input variables
# A (na x ny)
A = np.array([[0.36788,0.36788],\
              [0.223,-0.136]])
# B (ny x (nb x nu))
B1 = np.array([0.63212,0.18964]).T
B2 = np.array([0.31606,1.26420]).T
B = np.array([[B1],[B2]])
C = np.array([0,0])
# Create a parameter dictionary
p = {'a':A,'b':B,'c':C}
# Create a GEKKO model
m = GEKKO(remote=False)
# Build a GEKKO ARX model
y,u = m.arx(p)
# Set the inputs
tf = 20 # final time
u1 = np.zeros(tf+1)
u2 = u1.copy()
u1[5:] = 3.0
u2[10:] = 5.0
u[0].value = u1
u[1].value = u2
# Set names
mv1 = u[0]; mv2 = u[1]
cv1 = y[0]; cv2 = y[1]
# Options
m.time = np.linspace(0,tf,tf+1)
m.options.imode = 4; m.options.nodes = 2
# Simulate
m.solve(disp=False)
plt.figure(figsize=(10,6))
plt.subplot(2,1,1)
plt.plot(m.time,mv1.value,'r-',label=r'$MV_1$')
plt.plot(m.time,mv2.value,'b--',label=r'$MV_2$')
plt.ylabel('MV')
plt.legend(loc='best')
plt.subplot(2,1,2)
plt.plot(m.time,cv1.value,'r:',label=r'$CV_1$')
plt.plot(m.time,cv2.value,'b.-',label=r'$CV_2$')
plt.ylabel('CV'); plt.xlabel('Time (sec)')
plt.legend(loc='best')
plt.show()
# -
# 
#
# ### TCLab Activity
#
# Run the script to generate data for ARX system identification and the Model Predictive Controller (MPC).
#
# 
# +
import numpy as np
import pandas as pd
import tclab
import time
import matplotlib.pyplot as plt
# %matplotlib inline
# Generate data with the Arduino
filename = '12-tclab.csv'
# Heater step tests
Q1d = np.zeros(601)
Q1d[10:100] = 80
Q1d[100:200] = 20
Q1d[200:300] = 70
Q1d[300:400] = 50
Q1d[400:500] = 100
Q1d[500:] = 0
Q2d = np.zeros(601)
Q2d[50:150] = 35
Q2d[150:250] = 95
Q2d[250:350] = 25
Q2d[350:450] = 100
Q2d[450:550] = 45
Q2d[550:] = 0
# Connect to the Arduino
a = tclab.TCLab()
fid = open(filename,'w')
# the 'Tiempo' column name is kept, since later cells read data['Tiempo']
fid.write('Tiempo,Q1,Q2,T1,T2\n')
fid.close()
# Run the test (for 20 minutes)
for i in range(601):
    # Set heater values
    a.Q1(Q1d[i])
    a.Q2(Q2d[i])
    print('Time: ' + str(2*i) + \
          ' Q1: ' + str(Q1d[i]) + \
          ' Q2: ' + str(Q2d[i]) + \
          ' T1: ' + str(a.T1) + \
          ' T2: ' + str(a.T2))
    # Wait 2 seconds
    time.sleep(2)
    fid = open(filename,'a')
    fid.write(str(2*i)+','+str(Q1d[i])+','+str(Q2d[i])+',' \
              +str(a.T1)+','+str(a.T2)+'\n')
    fid.close()
# Close the Arduino connection
a.close()
# read the data file
data = pd.read_csv(filename)
# Plot the measurements
plt.figure()
plt.subplot(2,1,1)
plt.plot(data['Tiempo'],data['Q1'],'r-',label='Heater 1')
plt.plot(data['Tiempo'],data['Q2'],'b--',label='Heater 2')
plt.ylabel('Heater (%)')
plt.legend(loc='best')
plt.subplot(2,1,2)
plt.plot(data['Tiempo'],data['T1'],'r.',label='Temp. 1')
plt.plot(data['Tiempo'],data['T2'],'b.',label='Temp. 2')
plt.ylabel('Temp (°C)')
plt.legend(loc='best')
plt.xlabel('Time (s)')
plt.savefig('12-tclab.png')
plt.show()
# -
# 
#
# ### MPC with a Time Series Model
#
# Run the following application with the TCLab connected. It uses the test data from the previous exercise to create a time series model, and then uses that model to build an MPC application that drives the heaters to the target temperature values. While the MPC is running, blow on the heaters to create a "disturbance". Observe how the predicted heater profile changes as the disturbance is applied.
#
# <video width="500" height="350" controls src="https://apmonitor.com/do/uploads/Main/tclab_arx_mpc.mp4" />
# +
import numpy as np
import time
import matplotlib.pyplot as plt
import pandas as pd
import json
# get the gekko library with:
#   pip install gekko
from gekko import GEKKO
# get the tclab library with:
#   pip install tclab
from tclab import TCLab
# Detect an IPython session
try:
    from IPython import get_ipython
    from IPython.display import display,clear_output
    get_ipython().run_line_magic('matplotlib', 'inline')
    ipython = True
    print('IPython Notebook :)')
except:
    ipython = False
    print('No IPython Notebook :(')
# Connect to the Arduino
a = TCLab()
# Final time
tf = 10 # minutes
# number of data points (every 2 seconds)
n = tf * 30 + 1
# Heater percentage (0-100%)
Q1s = np.zeros(n)
Q2s = np.zeros(n)
# Temperature (°C)
T1m = a.T1 * np.ones(n)
T2m = a.T2 * np.ones(n)
# Temperature setpoints
T1sp = T1m[0] * np.ones(n)
T2sp = T2m[0] * np.ones(n)
# Setpoint steps roughly every 150 seconds
T1sp[3:] = 50.0
T2sp[40:] = 35.0
T1sp[80:] = 30.0
T2sp[120:] = 50.0
T1sp[160:] = 45.0
T2sp[200:] = 35.0
T1sp[240:] = 60.0
#########################################################
# Initialize the MODEL
#########################################################
# Get the data (20 minutes, dt=2 seconds) and arrange it into columns
data = pd.read_csv('12-tclab.csv')
t = data['Tiempo']
u = data[['Q1','Q2']]
y = data[['T1','T2']]
# generate a time series model
m = GEKKO(remote=False)
##################################################################
# system identification
na = 2 # output coefficients
nb = 2 # input coefficients
print('Identify the model')
yp,p,K = m.sysid(t,u,y,na,nb,objf=10000,scale=False,diaglevel=1)
##################################################################
# Plot the "sysid" results
plt.figure()
plt.subplot(2,1,1)
plt.plot(t,u)
plt.legend([r'$Q_1$',r'$Q_2$'])
plt.ylabel('MVs')
plt.subplot(2,1,2)
plt.plot(t,y)
plt.plot(t,yp)
plt.legend([r'$T_{1meas}$',r'$T_{2meas}$',\
r'$T_{1pred}$',r'$T_{2pred}$'])
plt.ylabel('CVs')
plt.xlabel('Time')
plt.savefig('sysid.png')
plt.show()
##################################################################
# Create the ARX control model
y = m.Array(m.CV,2)
u = m.Array(m.MV,2)
m.arx(p,y,u)
# rename the CVs (controlled variables)
TC1 = y[0]
TC2 = y[1]
# rename the MVs (manipulated variables)
Q1 = u[0]
Q2 = u[1]
# steady-state initialization
m.options.IMODE = 1
m.solve(disp=False)
# set up the MPC
m.options.IMODE = 6 # MPC
m.options.CV_TYPE = 1 # objective type
m.options.NODES = 2 # collocation nodes
m.options.SOLVER = 3 # IPOPT
m.time=np.linspace(0,120,61)
# Manipulated variables
Q1.STATUS = 1 # manipulated
Q1.FSTATUS = 0 # not measured
Q1.DMAX = 50.0
Q1.DCOST = 0.1
Q1.UPPER = 100.0
Q1.LOWER = 0.0
Q2.STATUS = 1 # manipulated
Q2.FSTATUS = 0 # not measured
Q2.DMAX = 50.0
Q2.DCOST = 0.1
Q2.UPPER = 100.0
Q2.LOWER = 0.0
# Controlled variables
TC1.STATUS = 1 # set point
TC1.FSTATUS = 1 # receive measurement
TC1.TAU = 20 # response speed (time constant)
TC1.TR_INIT = 2 # reference trajectory
TC1.TR_OPEN = 0
TC2.STATUS = 1 # set point
TC2.FSTATUS = 1 # receive measurement
TC2.TAU = 20 # response speed (time constant)
TC2.TR_INIT = 2 # dead-band
TC2.TR_OPEN = 1
# Main loop
start_time = time.time()
prev_time = start_time
tm = np.zeros(n)
# Create the plot
if not ipython:
plt.figure(figsize=(10,7))
plt.ion()
plt.show()
try:
    for i in range(1,n-1):
        # Pause
        sleep_max = 2.0
        sleep = sleep_max - (time.time() - prev_time)
        if sleep>=0.01:
            time.sleep(sleep-0.01)
        else:
            time.sleep(0.01)
        # Record the time and the change in time
        t = time.time()
        dt = t - prev_time
        prev_time = t
        tm[i] = t - start_time
        # Read the temperatures in °C
        T1m[i] = a.T1
        T2m[i] = a.T2
        # Insert measurements
        TC1.MEAS = T1m[i]
        TC2.MEAS = T2m[i]
        # Adjust the setpoints
        db1 = 1.0 # dead-band
        TC1.SPHI = T1sp[i] + db1
        TC1.SPLO = T1sp[i] - db1
        db2 = 0.2
        TC2.SPHI = T2sp[i] + db2
        TC2.SPLO = T2sp[i] - db2
        # Adjust the heaters with MPC
        m.solve(disp=False)
        if m.options.APPSTATUS == 1:
            # Get the new values
            Q1s[i+1] = Q1.NEWVAL
            Q2s[i+1] = Q2.NEWVAL
            # Get additional information about the solution
            with open(m.path+'//results.json') as f:
                results = json.load(f)
        else:
            # Failed solution
            Q1s[i+1] = 0.0
            Q2s[i+1] = 0.0
        # Write the new heater values (0-100)
        a.Q1(Q1s[i])
        a.Q2(Q2s[i])
        # Plot
if ipython:
plt.figure(figsize=(8,5))
else:
plt.clf()
ax=plt.subplot(3,1,1)
ax.grid()
        plt.plot(tm[0:i+1],T1sp[0:i+1]+db1,'k-',\
                 label=r'$T_1$ set-point',linewidth=3)
        plt.plot(tm[0:i+1],T1sp[0:i+1]-db1,'k-',\
                 label=None,linewidth=3)
        plt.plot(tm[0:i+1],T1m[0:i+1],'r.',label=r'$T_1$ measured')
        plt.plot(tm[i]+m.time,results['v1.bcv'],'r-',\
                 label=r'$T_1$ model',linewidth=3)
        plt.plot(tm[i]+m.time,results['v1.tr_hi'],'k--',\
                 label=r'$T_1$ trajectory')
        plt.plot(tm[i]+m.time,results['v1.tr_lo'],'k--')
        plt.ylabel('Temp. (°C)')
        plt.legend(loc=2)
        ax=plt.subplot(3,1,2)
        ax.grid()
        plt.plot(tm[0:i+1],T2sp[0:i+1]+db2,'k-',\
                 label=r'$T_2$ set-point',linewidth=3)
        plt.plot(tm[0:i+1],T2sp[0:i+1]-db2,'k-',\
                 label=None,linewidth=3)
        plt.plot(tm[0:i+1],T2m[0:i+1],'b.',label=r'$T_2$ measured')
        plt.plot(tm[i]+m.time,results['v2.bcv'],'b-',\
                 label=r'$T_2$ model',linewidth=3)
        plt.plot(tm[i]+m.time,results['v2.tr_hi'],'k--',\
                 label=r'$T_2$ range')
        plt.plot(tm[i]+m.time,results['v2.tr_lo'],'k--')
        plt.ylabel('Temp. (°C)')
        plt.legend(loc=2)
        ax=plt.subplot(3,1,3)
        ax.grid()
        plt.plot([tm[i],tm[i]],[0,100],'k-',\
                 label='Current time',linewidth=1)
        plt.plot(tm[0:i+1],Q1s[0:i+1],'r.-',\
                 label=r'$Q_1$ history',linewidth=2)
        plt.plot(tm[i]+m.time,Q1.value,'r-',\
                 label=r'$Q_1$ plan',linewidth=3)
        plt.plot(tm[0:i+1],Q2s[0:i+1],'b.-',\
                 label=r'$Q_2$ history',linewidth=2)
        plt.plot(tm[i]+m.time,Q2.value,'b-',
                 label=r'$Q_2$ plan',linewidth=3)
        plt.plot(tm[i]+m.time[1],Q1.value[1],color='red',\
                 marker='.',markersize=15)
        plt.plot(tm[i]+m.time[1],Q2.value[1],color='blue',\
                 marker='X',markersize=8)
        plt.ylabel('Heater')
        plt.xlabel('Time (s)')
        plt.legend(loc=2)
if ipython:
clear_output(wait=True)
display(plt.gcf())
else:
plt.draw()
plt.pause(0.05)
    # Turn off the heaters and close the connection
    a.Q1(0)
    a.Q2(0)
    a.close()
    # Save the figure
    plt.savefig('12-tclab_mpc.png')
# this allows the loop to be shut down with Ctrl-C
except KeyboardInterrupt:
    # Turn off the heaters and close the connection
    a.Q1(0)
    a.Q2(0)
    a.close()
    print('Shutting down')
    plt.savefig('12-tclab_mpc.png')
# Make sure the connection is closed even if there is an error
except:
    # Disconnect from the Arduino
    a.Q1(0)
    a.Q2(0)
    a.close()
    print('Error: shutting down')
    plt.savefig('12-tclab_mpc.png')
    raise
# -
| 12. Series_de_tiempo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="hlkcXi0pd-sk"
# # <center> Polynomial Regression </center>
#
# <center> $ y = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n $ </center>
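# As a quick sanity check (a sketch independent of the insurance data used below, with made-up coefficients), `numpy.polyfit` fits the same polynomial form in a single call:

```python
import numpy as np

# hypothetical data generated from y = 2 + 3x + 0.5x^2 exactly
x = np.arange(10, dtype=float)
y = 2.0 + 3.0 * x + 0.5 * x ** 2
coeffs = np.polyfit(x, y, deg=2)  # returned highest power first
print(coeffs)                     # approximately [0.5, 3.0, 2.0]
```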
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import linear_model
from numpy import genfromtxt
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
# -
# ## Approach with Linear Regression
# + colab={"base_uri": "https://localhost:8080/", "height": 592} colab_type="code" executionInfo={"elapsed": 1507, "status": "ok", "timestamp": 1557508132170, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CeMGa-V2idY/AAAAAAAAAAI/AAAAAAAAO2Q/22S5ucbTeu8/s64/photo.jpg", "userId": "06972008324074016783"}, "user_tz": -60} id="sSlIDp5ziGfH" outputId="cfb55a9a-5a87-4f87-adde-66b6e3ad618d"
data = pd.read_csv("./insurance.csv", delimiter=",", header=None)
print(data)
n, p = data.shape
n -= 1
X = [int(i) for i in data[0][1:].values]
X = np.array(X)
X = X.reshape(n, 1)
Y = [float(i) for i in data[1][1:].values]
Y = np.array(Y)
plt.plot(X, Y, color='red',linestyle='dashed',linewidth=2,
marker='o',markersize=7,markerfacecolor='blue',
markeredgecolor='blue', alpha=0.5)
aX = np.hstack((np.ones((n, 1)), X))
# Ordinary least squares.
W = np.linalg.inv(aX.T @ aX) @ aX.T @ Y
# Predictions
_X = np.array([[1, 5], [1, 15]])
# line
_Y = _X @ W
plt.title('Timeline Average Claims')
plt.ylabel('Av_claims')
plt.xlabel('Years')
plt.plot(_X[:,1], _Y, c="red")
plt.legend(['fact', 'linear regression'])
plt.show()
_x = np.linspace(-500, 500, 100)
# sweep the intercept while keeping the fitted slope W[1] fixed
mse = [np.square(aX @ np.array([w0, W[1]]) - Y).mean() for w0 in _x]
plt.title('Global Minimum of Optimization')
plt.plot(_x, mse)
plt.ylabel('Value')
plt.xlabel('Distance')
plt.plot([W[0],W[0]], [0, 50])
# mark the minimum: the fitted intercept and its mean squared error
plt.plot(W[0], np.square(aX @ W - Y).mean(), color='orange', marker='o')
plt.legend(['Error', 'Minimum'])
plt.show()
# + [markdown] colab_type="text" id="9CjmpoAsj7jw"
# ## Polynomial Regression.
#
# ---
#
# <center>
# $
# X=
# \left[ {\begin{array}{cc}
# 1 & x_{11} & x_{12} & x_{13}^2 & x_{14}^2 & x_{15}^3 & x_{16}^3 & {...} & x_{1n}^n\\
# 1 & x_{21} & x_{22} & x_{23}^2 & x_{24}^2 & x_{25}^3 & x_{26}^3 & {...} & x_{2n}^n \\
# {...} & {...} & {...} & {...} & {...} & {...} & {...} & {...} & {...} \\
# 1 & x_{n1} & x_{n2} & x_{n3}^2 & x_{n4}^2 & x_{n5}^3 & x_{n6}^3 & {...} & x_{nn}^n \\
# \end{array} } \right]
# $
# </center>
# + colab={} colab_type="code" id="h50ydlKA3f3v"
def poly_matrix(X, grad):
n, p = X.shape
Xt = np.ones((n, 1))
for i in range(p):
for g in range(grad):
Xt = np.hstack((Xt, np.power(X[:, i:i+1], g+1)))
return Xt
# + colab={"base_uri": "https://localhost:8080/", "height": 384} colab_type="code" executionInfo={"elapsed": 1307, "status": "ok", "timestamp": 1557509680748, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CeMGa-V2idY/AAAAAAAAAAI/AAAAAAAAO2Q/22S5ucbTeu8/s64/photo.jpg", "userId": "06972008324074016783"}, "user_tz": -60} id="R8byuGh88tJ3" outputId="a431ad8d-cbbb-43af-bafe-ec1fd0a76e1b"
fig, axes = plt.subplots(3, 2, sharex='all', sharey='all')
dataX = np.array(data[1:][:].astype(float))
for gr in range(1, 7):
# Extension matrix
aX = poly_matrix(dataX[:,:-1], gr)
# OLS
W = np.linalg.inv(aX.T @ aX) @ aX.T @ Y
_X = np.linspace(np.min(dataX[:, 0]), 16, 100)[:, np.newaxis]
axs0 = [0, 0, 1, 1, 2, 2]
axs1 = [0, 1, 0, 1, 0, 1]
# Real points
axes[axs0[gr-1], axs1[gr-1]].plot(dataX[:, 0], dataX[:, 1], color='blue',linestyle='dashed',linewidth=2,
marker='o',markersize=5,markerfacecolor='blue',
markeredgecolor='blue', alpha=0.5)
plt.ylim(25, 150)
    axes[axs0[gr-1], axs1[gr-1]].set_title("Degree:"+str(gr))
# Evaluate
axes[axs0[gr-1], axs1[gr-1]].plot(_X, poly_matrix(_X, gr) @ W, c="red")
MSE = lambda Yp, Y: np.mean(np.power(Yp - Y, 2))
    print("Degree:", gr, "MSE:", MSE(poly_matrix(X, gr) @ W, Y))
# + [markdown] colab_type="text" id="MHUZBnCh_4XY"
# ## Polynomial Regression with Sklearn
#
# ---
#
# + colab={"base_uri": "https://localhost:8080/", "height": 384} colab_type="code" executionInfo={"elapsed": 1537, "status": "ok", "timestamp": 1557509739287, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-CeMGa-V2idY/AAAAAAAAAAI/AAAAAAAAO2Q/22S5ucbTeu8/s64/photo.jpg", "userId": "06972008324074016783"}, "user_tz": -60} id="WnuDHvz5czrn" outputId="5ff383f3-d149-43fd-87a1-ee2884504a0d"
fig, axes = plt.subplots(3, 2, sharex='all', sharey='all')
for gr in range(1,7):
poly = PolynomialFeatures(gr)
aX = poly.fit_transform(X)
lreg = LinearRegression(fit_intercept=False)
lreg.fit(aX, Y)
W = np.array(lreg.coef_).T
_X = np.linspace(np.min(dataX[:, 0]), 16, 100)[:, np.newaxis]
axs0 = [0, 0, 1, 1, 2, 2]
axs1 = [0, 1, 0, 1, 0, 1]
axes[axs0[gr-1], axs1[gr-1]].plot(dataX[:, 0], dataX[:, 1], color='blue',linestyle='dashed',linewidth=2,
marker='o',markersize=5,markerfacecolor='blue',
markeredgecolor='blue', alpha=0.5)
plt.ylim(25, 150)
axes[axs0[gr-1], axs1[gr-1]].set_title("Grade:"+str(gr))
axes[axs0[gr-1], axs1[gr-1]].plot(_X, poly_matrix(_X, gr) @ W, c="red")
MSE = lambda Yp, Y: np.mean(np.power(Yp - Y, 2))
print("Grade:", gr,"MSE:", MSE(poly_matrix(X, gr) @ W, Y))
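# For a single input column, the feature matrix that `PolynomialFeatures(gr)` builds — `[1, x, x**2, ..., x**gr]` — is just a Vandermonde matrix, which can be checked with numpy alone (toy values, degree 3 assumed for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])  # toy single-feature column
gr = 3

# Same columns PolynomialFeatures(gr).fit_transform(x[:, None]) would build:
aX = np.vander(x, N=gr + 1, increasing=True)

# Column 0 is the intercept (all ones); column k holds x**k.
```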
| PolynomialRegression vs Linear Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
idx = pd.IndexSlice
raw_data = pd.read_csv('./data/EF_battles_corrected.csv', parse_dates=['start', 'end'])
import altair as alt
alt.renderers.enable('notebook')
cols = ['name','allies killed', 'axis killed',
'allies tanks', 'axis tanks', 'allies airplane', 'axis airplane',
'Lattitude', 'Longitude', 'start', 'end', 'Location', 'url', 'parent']
data = raw_data[cols].copy()  # .copy() avoids SettingWithCopyWarning on the assignment below
data['killed total'] = data[['allies killed', 'axis killed']].sum(1)
data.head(3)
# # Scatterplot
alt.Chart(data).mark_circle(size=60).encode(
x='allies killed',
y='axis killed',
color='parent',
tooltip=['name', 'allies killed', 'axis killed', 'start']
).interactive()
# # Map
url = 'https://unpkg.com/world-atlas@1/world/50m.json'
# +
data_geo = alt.topo_feature(url, feature='countries')
proj = {'center':[10, 52], 'type':'conicEquidistant', 'scale':800}
# plot the base map; in the topojson, the variables are nested within `properties`
basemap = alt.Chart(data_geo).mark_geoshape(
clip=True,
fill='lightgray',
stroke='white',
).properties(
width=700,
height=700,
).project(**proj)
mask = data[['Lattitude', 'Longitude']].notnull().all(1)
points = alt.Chart(data[mask]).mark_circle(clip=True, color='red', opacity=.5).encode(
latitude='Lattitude',
longitude='Longitude',
size=alt.Size('killed total:Q', scale=alt.Scale(type='linear', range=[10, 1000], domain=[10, 1_500_000]), title='Casualties'),
color=alt.value('red'),
tooltip=['name', 'killed total'],
href = 'url'
).project(**proj)
# -
(basemap + points)
C = (basemap + points)
C.save('chart.html')
| Chapter12/2_Altair.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Ruby 2.4.1
# language: ruby
# name: ruby
# ---
require 'daru/view'
dv = Daru::Vector.new [:a, :a, :a, :b, :b, :c], type: :category
bar_graph1 = Daru::View::Plot.new(dv, type: :bar)
bar_graph1.init_script
Daru::View.plotting_library
bar_graph1.div
IRuby.html bar_graph1.div
| spec/dummy_iruby/Nyaplot testing .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#importing the required libraries
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
from torch.autograd import Variable
#importing the dataset
test_set=pd.read_csv('ml-1m/test_set.csv')
training_set=pd.read_csv('ml-1m/training_set.csv')
#checking the info of the datasets
test_set.info()
training_set.info()
#Converting the dataframe to numpy array
training_set=np.array(training_set,dtype=int)
test_set=np.array(test_set,dtype=int)
#Getting the number of users and movies from the maximum user and movie ids
nf_users=max(max(training_set[:,0]),max(test_set[:,0]))
nf_movies=max(max(training_set[:,1]),max(test_set[:,1]))
#Converting the array to a list of lists, where each row corresponds to
#the ratings of all the movies given by one single user
def Converter(data):
dataset=[]
for user in range(1,nf_users+1):
movies=data[:,1][data[:,0]==user]
ratings=data[:,2][data[:,0]==user]
new_ratings=np.zeros(nf_movies)
new_ratings[movies-1]=ratings
dataset.append(list(new_ratings))
return dataset
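# On a toy array (hypothetical [user, movie, rating] triples), the conversion above yields one dense row of movie ratings per user, with 0 for unrated movies:

```python
import numpy as np

# 2 users, 3 movies; ids start at 1 as in the dataset above.
toy = np.array([[1, 1, 5],
                [1, 3, 3],
                [2, 2, 4]])
n_users, n_movies = 2, 3

dense = []
for user in range(1, n_users + 1):
    movies = toy[:, 1][toy[:, 0] == user]
    ratings = toy[:, 2][toy[:, 0] == user]
    row = np.zeros(n_movies)
    row[movies - 1] = ratings  # shift 1-based movie ids to 0-based columns
    dense.append(list(row))

# dense == [[5.0, 0.0, 3.0], [0.0, 4.0, 0.0]]
```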
training_set=Converter(training_set)
test_set=Converter(test_set)
#Now converting this list-of-lists structure to PyTorch tensors
training_set=torch.FloatTensor(training_set)
test_set=torch.FloatTensor(test_set)
type(training_set)
#Now building the architecture of the autoencoder model
#We will use inheritance to build the architecture from the nn.Module class
#We are building six layers: 3 layers for encoding and 3 layers for decoding
#We can add more layers depending upon the requirements.
class AutoEncoder(nn.Module):
def __init__(self,):
super(AutoEncoder,self).__init__()
self.full_con1=nn.Linear(nf_movies,40)#encoding
self.full_con2=nn.Linear(40,20)#encoding
self.full_con3=nn.Linear(20,10)#encoding
self.full_con4=nn.Linear(10,20)#decoding
self.full_con5=nn.Linear(20,40)#decoding
self.full_con6=nn.Linear(40,nf_movies)#output layer
self.activation=nn.Sigmoid()
def forward(self,x):
x=self.activation(self.full_con1(x))
x=self.activation(self.full_con2(x))
x=self.activation(self.full_con3(x))
x=self.activation(self.full_con4(x))
x=self.activation(self.full_con5(x))
x=self.full_con6(x)
return x
#making the object of the class
#making the criterion to calculate the loss
#making the optimizer that updates the values of the weights
auto_encoder=AutoEncoder()
criterion=nn.MSELoss()
optimizer=optim.RMSprop(auto_encoder.parameters(),lr=0.01,weight_decay=0.5)
#Training the Auto encoder
n_epochs=200
for epochs in range(1,n_epochs+1):
train_loss=0
s=0.
for user in range(nf_users):
input=Variable(training_set[user]).unsqueeze(0)
target=input.clone()
if torch.sum(target.data>0)>0:
output=auto_encoder(input)
            target.requires_grad=False
output[target==0]=0
loss=criterion(output,target)
mean_corrector=nf_movies/float(torch.sum(target.data > 0) + 1e-10)
            #reset accumulated gradients, then backprop to get the update direction
            optimizer.zero_grad()
            loss.backward()
            train_loss+=np.sqrt(loss.data.item()*mean_corrector)
            s += 1.
            #apply the update, with step sizes chosen by the optimizer
            optimizer.step()
print(f'epoch - {epochs} loss - {str(train_loss/s)}')
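# The masking and mean_corrector logic in the loop above can be illustrated without PyTorch (a numpy sketch with hypothetical values): zeroing the outputs where the target is 0 removes unrated movies from the squared error, and mean_corrector rescales the mean from "all movies" back to "rated movies only".

```python
import numpy as np

# Hypothetical user vector: 5 movies, only movies 0 and 3 rated.
target = np.array([5.0, 0.0, 0.0, 3.0, 0.0])
output = np.array([4.0, 1.0, 2.0, 2.0, 0.5])

output = np.where(target == 0, 0.0, output)  # mask unrated movies
mse_all = np.mean((output - target) ** 2)    # averaged over all 5 slots

n_movies = len(target)
mean_corrector = n_movies / float((target > 0).sum() + 1e-10)

# Rescaling recovers the mean squared error over rated movies only:
mse_rated = np.mean((output[target > 0] - target[target > 0]) ** 2)
assert np.isclose(mse_all * mean_corrector, mse_rated)
```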
#testing the model
test_loss = 0
s = 0.
for user in range(nf_users):
input = Variable(training_set[user]).unsqueeze(0)
target = Variable(test_set[user])
if torch.sum(target.data > 0) > 0:
output = auto_encoder(input)
        target.requires_grad = False
output[target.reshape(1,-1) == 0] = 0
loss = criterion(output, target)
mean_corrector = nf_movies/float(torch.sum(target.data > 0) + 1e-10)
test_loss += np.sqrt(loss.data.item()*mean_corrector)
s += 1.
print(f'Test Loss - {(test_loss/s)}')
| Recommender System.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Retrieving posts
#
# According to Graph API v2.3, we can retrieve:
#
# * ```/{user-id}/home``` - Returns the stream of all posts created by the user and their friends. This is what normally appears in the News Feed.
#
#
# * ```/{user-id}/feed``` – includes everything you see on your profile (shared links, check-ins, photos, status updates), as well as posts created by friends on the user's profile.
#
#
# * ```/{user-id}/statuses``` – Returns only the status updates posted by the user on their own profile.
#
#
# * ```/{user-id}/posts``` – returns the posts created by the user on their own wall or on friends' walls, and may include any content such as shared links, check-ins, photos and status updates.
#
#
#
import facebook
import simplejson as json
import requests
# The requests module is used to make HTTP requests; it will be useful for requesting further pages of Facebook content.
#
# It works as follows:
req = requests.get('http://python.org')
req.status_code # If the status code is 200, the request succeeded.
# +
#req.text
# -
'Python' in req.text
req.close()
# ## 'me/feed'
#
# Includes everything you see on your profile (shared links, check-ins, photos, status updates), as well as posts created by friends on the user's profile.
import facebook
access_token = '<KEY>'
api = facebook.GraphAPI(access_token, version='2.3')
noticias = api.get_object('me/feed')
# +
#print(json.dumps(noticias, indent=4))
# -
for item in range(0, len(noticias['data'])):
try:
print(item, '--->', noticias['data'][item]['story'])
    except KeyError:
        pass
noticias = api.get_object('me/feed')
while True:
try:
for item in range(0, len(noticias['data'])):
try:
print(item, '--->', noticias['data'][item]['story'])
            except KeyError:
                pass
noticias = requests.get(noticias['paging']['next']).json()
except Exception as e:
print(e)
break
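# The paging pattern above — keep following `paging.next` until it disappears — can be factored into a small generator. This is a sketch with a pluggable `fetch` callable standing in for `requests.get(...).json()`; the names are illustrative, not part of the Graph API client:

```python
def iter_pages(first_page, fetch):
    """Yield every item under 'data' across all pages of a Graph-API-style
    response. `fetch` takes a next-page URL and returns the parsed JSON."""
    page = first_page
    while page is not None:
        for item in page.get('data', []):
            yield item
        next_url = page.get('paging', {}).get('next')
        page = fetch(next_url) if next_url else None

# Demo with two fake pages instead of live HTTP calls:
pages = {'p2': {'data': [{'story': 'c'}]}}
first = {'data': [{'story': 'a'}, {'story': 'b'}],
         'paging': {'next': 'p2'}}
stories = [item['story'] for item in iter_pages(first, pages.get)]
# stories == ['a', 'b', 'c']
```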
# **Exercise 2 - Modify the program from aula6-parte5-recuperar ('me/feed') to print, in addition to the story, the name, type and creation time.**
# - story
# - name
# - type
# - created_time
#
# ## 'me/home'
#
# Returns the stream of all posts created by the user and their friends. This is what normally appears in the News Feed.
feed_noticias = api.get_object('me/home')
len(feed_noticias['data'])
for item in range(0, len(feed_noticias['data'])):
try:
print(item, '---->', feed_noticias['data'][item]['name'])
except:
pass
feed_noticias['data'][1].keys()
feed_noticias['data'][1]['type']
feed_noticias['data'][1]['name']
# +
#feed_noticias['data'][1]['application']
# -
feed_noticias['data'][1]['updated_time']
feed_noticias['data'][1]['created_time']
# +
# feed_noticias['data'][1]['comments']
# -
feed_noticias['data'][1]['likes']
# Note that we only got 25 results back, but our news feed still has a lot of information left to retrieve!
#
# There is no specific parameter to say how many items we want, so we have to build our own mechanism to stop the capture.
feed_noticias['paging']
# Just as before, we can request the next page until we reach the desired amount of data.
#
# **<p style="color: red">Note that in my case I have few connections, so the amount of data is much smaller than for someone who uses Facebook actively!!!!</p>**
# ### Bonus - Friends list
# We can also retrieve connections, for example a list of friends.
access_token = '<KEY>'
amigos = api.get_connections("me", "friends")
todos_amigos = []
while True:
try:
for amigo in amigos['data']:
todos_amigos.append(amigo['name'])
amigos = requests.get(amigos['paging']['next']).json()
except KeyError:
break
print(todos_amigos)
| Python/2016-08-05/aula6-parte5-recuperar.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
i=30
#Reset the default graph so that rerunning the code below doesn't add duplicate results to the same graph
tf.reset_default_graph()
with tf.name_scope('Outer'):
with tf.name_scope('Inner_1'):
matrix1 = tf.constant([[3., 3.]], name="matrix1")
matrix2 = tf.constant([[2.],[2.]], name="matrix2")
product = tf.matmul(matrix1, matrix2, name="Result")
relu_result = tf.nn.relu(product, name="Relu")
with tf.name_scope('Inner_2'):
constant_1 = tf.constant(25, name="constant_1")
constant_2 = tf.constant(75, name="constant_2")
addition_result = tf.add(constant_1,constant_2)
#with tf.name_scope('Final_Addition'):
#final_result = tf.add(addition_result,relu_result)
# +
init = tf.global_variables_initializer()
log_path = "C:/MYLOCALFILES/JUPYTER_NOTEBOOKS/IB_ML/IB_NOTEBOOKS/test_graphs/" + "run{0}/".format(i)
with tf.Session() as sess:
writer = tf.summary.FileWriter(log_path, sess.graph)
sess.run(init)
#Graph
#writer.add_graph(sess.graph)
#get result
summary = sess.run(addition_result)
print(summary)
writer.close()
sess.close()
#writer = None
print("\ttensorboard --logdir=%s" % (log_path))
print("Point your web browser to: http://localhost:6006/")
i+=1
# -
| Tensorflow/Tensorflow_Tensorboard_Intro_Tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # PANDAS
# ## Series
# Series are one-dimensional lists.
import pandas as pd
series1 = pd.Series(["Apple", "Orange", "Tomato"])
series1
# Another example to series;
series2 = pd.Series(["Green", "Orange", "Red"])
series2
# ## Data Frames
# * Data Frames are two-dimensional structures.
# * More useful than Series, it has column names.
foodlist = pd.DataFrame({"Food Name": series1, "Food Color": series2})
foodlist
# As seen here, series can be used in data frames.
# Import data from .csv file
car_sales = pd.read_csv("car-sales.csv")
car_sales
# Export a csv file
car_sales.to_csv("exported_car_sales.csv", index=False)
exported_cs = pd.read_csv("exported_car_sales.csv")
exported_cs
# # Describing Data
# Attribute
car_sales.dtypes
# We can access column names in a list with .columns attribute.
car_sales_columns = car_sales.columns
car_sales_columns
# Also we can access indexes by using .index attribute
car_sales_indexes = car_sales.index
car_sales_indexes
# It starts at 0 and stops before index 10 (0 1 2 3 4 5 6 7 8 9; 9 is the 10th).
car_sales.describe()
# It shows us the statistical data of this table. Price is not included because the Price column is not recognized as an integer; it is stored as an object.
car_sales.info()
# We can choose a column and call the functions for mean, sum etc for given column. Below, we see mean of only Doors column.
car_sales["Doors"].mean()
car_sales.sum()
car_sales["Odometer (KM)"].sum()
car_sales["Odometer (KM)"].mean()
# We can get the raw count with len function
len(car_sales)
car_sales["Doors"].sum()
# Sum of car doors is 40.
# ## Viewing and Selecting Data
car_sales.head()
# * The .head function returns the first 5 rows of the csv file.
# * Looking at the table above gives us an idea of the data we are currently working on.
# * It is especially useful when working with thousands of rows.
# If we need more than 5, we can write the number inside of head paranthesis as below.
car_sales.head(7)
# Similarly, if we need the bottom of data, we can call .tail() function
car_sales.tail()
# This table gives us last 5 rows of csv file.
# Also we can give number to see exact same number of rows from bottom in .tail() function
car_sales.tail(3)
# ## .loc & .iloc
# ### .loc
animals = pd.Series(["cat", "dog", "bird", "panda", "snake"],index=[0,3,9,8,3])
animals
# We can see that index is under our control.
animals.loc[3]
# The .loc accessor returns the items at the given index label, as above.
# dog and snake are both at index label 3 (we gave them 3), so .loc[3] returned dog and snake.
animals.loc[9]
# Let's try .loc[] on car_sales
car_sales.loc[6]
# ### .iloc
animals.iloc[3]
# .iloc refers to position, independently of the index labels we gave.
car_sales.iloc[5]
# ## note: .loc refers to index, .iloc refers to position.
# It can be used for slicing.
animals.iloc[:3]
# 3 is not included.
car_sales.loc[2:7]
# On .loc, both the limits are included.
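# The label-vs-position distinction is easiest to see on a Series with a non-default index (a toy example, not taken from car-sales.csv):

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=[7, 2, 5])

s.loc[2]    # 20 -- the element whose LABEL is 2
s.iloc[2]   # 30 -- the element at POSITION 2 (the third one)

# Slicing differs the same way:
list(s.loc[7:2])   # [10, 20] -- label slice, both endpoints included
list(s.iloc[0:2])  # [10, 20] -- position slice, end excluded
```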
car_sales["Make"]
car_sales.Make
# Both syntax works for getting a column.
# If the column name includes a space, the .column_name attribute syntax won't work.
car_sales[car_sales["Make"] == "Honda"]
car_sales[car_sales["Odometer (KM)"] > 100000]
pd.crosstab(car_sales["Make"], car_sales["Doors"])
# .crosstab is great for comparing two columns.
# In the table above, it can be analysed as BMW has 1 car which has 5 doors, Honda has 3 cars which have 4 doors, Nissan has 2 cars which have 4 doors and lastly Toyota has 1 car which has 3 doors and 3 cars which have 4 doors.
# ### What if we need to compare more than columns?
# groupby can be used.
car_sales.groupby(["Make"]).mean()
# With this .groupby method, we could group by almost any column and see the other columns respectively, with given method(like mean or sum)
car_sales.groupby(["Colour"]).sum()
car_sales["Odometer (KM)"].plot()
car_sales["Odometer (KM)"].hist()
car_sales["Price"] = car_sales["Price"].str.replace(r'[\$\,\.]', '', regex=True).astype(int)
car_sales["Price"]
# ## Manipulating Data
car_sales["Make"] = car_sales["Make"].str.lower()
car_sales["Make"]
car_sales
car_sales["Price"] /= 100
car_sales
car_sales_missing = pd.read_csv("car-sales-missing-data.csv")
car_sales_missing
# This table has some missing data in some of its columns. How do we fill it in?
# Well, the .fillna() function can be used as a solution. It finds the missing entries in the
# given column and fills them with the value passed as a parameter.
car_sales_missing["Odometer"].fillna(car_sales_missing["Odometer"].mean())
# But as we see here, the filled values did not change the original table. We should set the inplace parameter of the fillna() function to True, so that the filled values are written back into the table.
car_sales_missing
car_sales_missing["Odometer"].fillna(car_sales_missing["Odometer"].mean(), inplace= True)
car_sales_missing
# ### What if we want to drop the rows which have NaN value?
# .dropna() will work perfectly
car_sales_missing.dropna()
# But still, it didn't manipulate our data. We can assign the new table to another variable. (or we can make changes on the old variable car_sales_missing with changing inplace parameter to True.)
car_sales_missing_dropped = car_sales_missing.dropna()
car_sales_missing_dropped
# Now all the NaN values are gone. PERFECT. This is a new table.
# We can save this table to another csv.
car_sales_missing_dropped.to_csv("car_sales_missing_dropped.csv")
# ## Create Data from Existing Data
# +
seats_column = pd.Series([5, 5, 5, 5, 5])
# New column called seats.
car_sales["Seats"] = seats_column
car_sales
# -
car_sales["Seats"].fillna(5, inplace=True)
car_sales
# +
# Column from Python list.
fuel_economy = [7.5, 9.2, 5.0, 9.6, 8.7, 4.7, 3.9, 1.2, 4.9, 2.6]
car_sales["Fuel Per 100KM"] = fuel_economy
# -
# When creating a column from a Python list, the list must have the same length as the number of rows.
car_sales
car_sales["Total Fuel Used"] = car_sales["Odometer (KM)"]/100 * car_sales["Fuel Per 100KM"]
car_sales
# +
# Create a column from a single value
car_sales["Number of wheels"] = 4
car_sales
# -
car_sales["Passed road safety"] = True
car_sales
car_sales.dtypes
# ## Removing columns
# +
# Let's remove the Passed road safety column.
# -
# - We will remove column, so we add axis=1 for dropping a column.
# - Column => axis = 1, Row => axis = 0
car_sales = car_sales.drop("Passed road safety", axis=1)
car_sales
car_sales_shuffled = car_sales.sample(frac=1)
car_sales_shuffled
# only select 20% of data
car_sales_shuffled.sample(frac=0.2)
car_sales_shuffled.reset_index(inplace = True, drop=True)
car_sales_shuffled
# Let's convert KM to miles. .apply function can be used with lambda function.
# * Helps of apply function, we can apply a function to a column.
car_sales["Odometer (KM)"] = car_sales["Odometer (KM)"].apply(lambda x: x/1.6)
car_sales
# # Try it, run your code
# # Search for it.
# # Try again.
# # Ask
| introduction-to-pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ##### Definition of Node Class and Associated Functions
# Each node consists of
# - Data/key
# - Count of the occurrences of the data/key
# - Pointer to the first child
# - Pointer to an adjacent sibling
# !pip install pydotplus
# !pip install graphviz
class Node:
def __init__(self, key = None):
"""Each node consists of a data/key, count of the occurrences of the data/key
pointer to the first child, pointer to an adjacent sibling"""
self.data = key
self.count = 1
self.child = None
self.next = None
def incrementCount(self):
"""Increments the count of the data or key associated with the node"""
self.count += 1
def setChild(self, child):
"""Set the child pointer to the first child"""
self.child = child
def setNext(self, sibling):
"""Sets the next pointer to the next sibling"""
self.next = sibling
def getData(self):
"""Returns the data or key associated with the node"""
return(self.data)
def getCount(self):
"""Returns the count of the data or key associated with the node"""
return(self.count)
def getChild(self):
"""Returns the first child of the node"""
return(self.child)
def getNext(self):
"""Returns the adjacent sibling of the node"""
return(self.next)
# ###### Setting the path for libraries required for visualizing a tree
import os
os.environ['PATH'] = os.environ['PATH'] + ';' + os.environ['CONDA_PREFIX'] + r"\Library\bin\graphviz"
# ##### Importing libraries required for visualizing a tree
import pydotplus
from IPython.display import Image, display, clear_output
# ##### Definition of Probabilistic Suffix Tree Class and Associated Functions
# It is a generic tree with an empty root node.
#
# To keep the number of pointers in a node constant
# - All the children of a particular parent are in a linked list
# - The parent points only to the first node (head) of the linked list
# - A new child is added at the beginning of the linked list
# +
import time
class PST:
def __init__(self):
"""Initialize tree with empty root node"""
self.root = Node()
def find(self, current, key):
"""Finds the node with the given key"""
while(current != None):
if(current.getData() == key):
return(current)
current = current.getNext()
return(current)
def fit(self, data, size):
""" Build a tree on the given data """
start = time.time()
if(type(data) != list and type(data) != str):
print("Could not fit the data.")
print("Data should be string type or 1D list.")
if(type(size) != int):
print("Buffer size should be an integer.")
elif(type(size) != int):
print("Could not fit the data.")
print("Buffer size should be an integer.")
else:
for i in range(len(data)):
S = data[i:i+size]
parent = self.root
for j in range(len(S)):
#self.show()
current = parent.getChild()
temp = self.find(current, S[j])
if(temp != None):
temp.incrementCount()
else:
temp = Node(S[j])
temp.setNext(current)
parent.setChild(temp)
parent = temp
print("Fit complete in %0.4f s" %(time.time()-start))
def show(self):
"""Creates a DOT file of the tree and displays the tree"""
f = open("PST.dot", 'w')
f.write("graph PST {\n")
f.write("node0" + "[label = Root];\n")
temp = [self.root]
index = [0]
j = 1
while(len(temp)):
parent = temp.pop(0)
i = index.pop(0)
current = parent.getChild()
while(current != None):
f.write("node" + str(j) + "[label = " + str(current.getData()) + "];\n")
f.write("\"node" + str(i) + "\" -- " + "\"node" + str(j) +
"\"[label = " + str(current.getCount()) + "]" + ";\n")
temp.append(current)
current = current.getNext()
index.append(j)
j += 1
f.write("}")
f.close()
graph = pydotplus.graph_from_dot_file("PST.dot")
graph.set_size('"10,10!"')
clear_output(wait=True)
display(Image(graph.create_png()))
graph.write_png("PST.png")
# -
# ##### Fitting a PST on the sequence 'AABABCDEFABABCD' with a buffer size of 4
a = PST()
a.fit("AABABCDEFABABCD", 4)
a.show()
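# The counts stored in the tree can be cross-checked with a plain dictionary: fit records every prefix of every sliding window of the sequence, together with its number of occurrences (a sketch, independent of the tree code):

```python
from collections import Counter

def window_prefix_counts(seq, size):
    """Count every prefix of every sliding window of length `size`."""
    counts = Counter()
    for i in range(len(seq)):
        window = seq[i:i + size]
        for j in range(1, len(window) + 1):
            counts[window[:j]] += 1
    return counts

counts = window_prefix_counts("AABABCDEFABABCD", 4)
# The root-level children of the PST correspond to the length-1 prefixes,
# e.g. counts["A"] == 5; deeper nodes correspond to longer prefixes,
# e.g. counts["AB"] == 4 and counts["ABAB"] == 2.
```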
# ##### Importing dataset containing hourly energy consumption for a period of 10 years
# The dataset is available [here](http://www.kaggle.com/robikscube/hourly-energy-consumption/data). Only one of the files from the dataset is used for fitting a PST.
import numpy as np
data = np.genfromtxt('AEP_hourly.csv', delimiter = ',', skip_header = 1)
data = np.array(data[:,1], ndmin = 2).T
data.shape
data = data[:500]
# ##### Importing libraries required for clustering the data
from scipy.cluster.vq import kmeans, whiten
# ##### Scaling the data to have unit variance and performing k-Means on the scaled data
data = whiten(data)
means, distortion = kmeans(data, k_or_guess = 5)
# ##### Assigning non-numeric label to each data
labels = []
label = ['A', 'B', 'C', 'D', 'E']
for i in range(len(data)):
labels.append(label[np.argmin((means - data[i])**2)])
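# The labelling loop above can also be written with broadcasting (equivalent result; shapes assumed as produced by whiten/kmeans, i.e. data of shape (n, 1) and means of shape (k, 1)):

```python
import numpy as np

# Toy stand-ins for the whitened data and the k-means centroids:
data = np.array([[0.1], [0.9], [2.1]])
means = np.array([[0.0], [1.0], [2.0]])
label = ['A', 'B', 'C']

# (n, 1) minus (1, k) broadcasts to an (n, k) squared-distance matrix;
# argmin over axis 1 picks the nearest centroid for each sample.
nearest = np.argmin((data - means.T) ** 2, axis=1)
labels = [label[i] for i in nearest]
# labels == ['A', 'B', 'C']
```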
# ##### Fitting a PST on the clustered data labels
pst = PST()
pst.fit(labels, 4)
pst.show()
| Probabilistic_Suffix_Tree 01-08-2020.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: TensorFlow-GPU
# language: python
# name: tf-gpu
# ---
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": true}}}}
import h5py
import pandas as pd
import numpy as np
#from plotnine import *
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score, recall_score
from sklearn.metrics import cohen_kappa_score
from sklearn import preprocessing
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_graphviz
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.ensemble import GradientBoostingClassifier
#import statsmodels.api as sm
#from scipy.stats import mode
import random
from IPython.display import Image
import pydot
from pydot import Dot
import matplotlib.pyplot as plt
plt.style.use('classic')
# %matplotlib inline
import seaborn as sns
sns.set()
import tensorflow as tf
#import keras-gpu
import keras
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.optimizers import SGD, Adam
from keras.utils import to_categorical
#from keras.np_utils import probas_to_classes
from keras.layers import Dense, Dropout, Input, Reshape
from keras.layers import Embedding
from keras.layers import Conv1D, GlobalAveragePooling1D, MaxPooling1D, merge, Concatenate, Conv2D, MaxPooling2D
from keras.layers.merge import concatenate, add
from keras.models import load_model
from keras.utils import plot_model
from keras.callbacks import TensorBoard
from keras import regularizers
#from hyperopt import Trials, STATUS_OK, tpe
#from hyperas import optim
#from hyperas.distributions import choice, uniform
#import gc
import os
os.environ["PATH"] += os.pathsep + 'C:\\Program Files (x86)\\Graphviz2.38\\bin\\'
# + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 8, "height": 4, "hidden": false, "row": 158, "width": 4}, "report_default": {}}}}
# Ask TensorFlow not to be too greedy with GPU memory
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 4, "hidden": false, "row": 0, "width": 4}, "report_default": {"hidden": false}}}}
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
K.set_session(sess)
keras.__version__
#with eeg only 5 filters, kernel size 100
#Test score: 1.3631985359422534
#Test accuracy: 0.6440793987471519
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": true}}}}
# # %load "C:\\Users\\i053131\Desktop\\Epilepsie\\Dreem\\src\\utils\\error.py"
def AnalyzeError(y_true, y_pred):
fig, ax = plt.subplots(figsize=(20,10))
plt.subplot(1,2, 1)
sns.countplot(x=0, data=pd.DataFrame(y_true))
plt.ylim(0, 4000)
plt.subplot(1,2, 2)
sns.countplot(x=0, data=pd.DataFrame(y_pred))
plt.ylim(0, 4000)
fig.suptitle("Actual and predicted distribution", size = 'x-large')
plt.show()
df_ = pd.DataFrame()
df_["Test"]= y_true
df_["Pred"] = y_pred
df_['error'] = df_.Test != df_.Pred
#sns.countplot(x="Test", data=df_[df_.error])
error0 = df_[(df_.error) & (df_.Test==0)].count()[0] / df_[df_.Test==0].count()[0]
error1 = df_[(df_.error) & (df_.Test==1)].count()[0] / df_[df_.Test==1].count()[0]
error2 = df_[(df_.error) & (df_.Test==2)].count()[0] / df_[df_.Test==2].count()[0]
error3 = df_[(df_.error) & (df_.Test==3)].count()[0] / df_[df_.Test==3].count()[0]
error4 = df_[(df_.error) & (df_.Test==4)].count()[0] / df_[df_.Test==4].count()[0]
Lerror = [error0, error1, error2, error3, error4]
sns.barplot(x=[0, 1, 2, 3, 4], y=Lerror)
plt.title('Wrongly classified in a phase in percent of the test population for this phase')
plt.show()
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": true}}}}
dataPath = "C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\raw\\"
trainOutput = pd.read_csv(dataPath + "challenge_fichier_de_sortie_dentrainement_classification_en_stade_de_sommeil_a_laide_de_signaux_mesures_par_le_bandeau_dreem.csv", sep=";")
Y = trainOutput["label"]
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": true}}}}
filetrain= dataPath + "train.h5"
filetest= dataPath + "test.h5"
h5 = h5py.File(filetrain, "r")
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": true}}}}
eeg_1 = pd.DataFrame(h5['eeg_1'][:])
eeg_2 = pd.DataFrame(h5['eeg_2'][:])
eeg_3 = pd.DataFrame(h5['eeg_3'][:])
eeg_4 = pd.DataFrame(h5['eeg_4'][:])
po_ir = pd.DataFrame(h5['po_ir'][:])
po_r = pd.DataFrame(h5['po_r'][:])
accelerometer_x = pd.DataFrame(h5['accelerometer_x'][:])
accelerometer_y = pd.DataFrame(h5['accelerometer_y'][:])
accelerometer_z = pd.DataFrame(h5['accelerometer_z'][:])
# + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 4, "hidden": false, "row": 159, "width": 4}, "report_default": {}}}}
# Make a train/test (80/20) sample of the whole data
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": true}}}}
# EEG channels sampled at 125Hz (4 x 125 x 30 = 15000 size per sample)
# Pulse Oxymeter channels (red and infra-red) sampled at 50 Hz (2 x 50 x 30 = 3000 size per sample)
# Accelerometer channels sampled at 50Hz (3 x 50 x 30 = 4500 size per sample)
#step 1 eeg alone
df = pd.concat([eeg_1, eeg_2, eeg_3, eeg_4, po_ir, po_r, accelerometer_x, accelerometer_y, accelerometer_z],
axis=1, sort = False)
df.columns = range(15000 + 3000 + 4500)
df["Y"] = Y
training, test = train_test_split(df, test_size=0.2, random_state=42)
X = training.iloc[:,:-1]
X_train = X.values
y = training.iloc[:,-1]
y_train = to_categorical(y.values, num_classes=5)
X_test = test.iloc[:,:-1].values
y_true = test.iloc[:,-1].values
y_test = to_categorical(y_true, num_classes=5)
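# to_categorical turns the 0-4 sleep-stage labels into one-hot rows; the same encoding can be sketched with numpy alone (toy labels, not the real y values):

```python
import numpy as np

y = np.array([0, 2, 4, 2])        # toy sleep-stage labels in {0, ..., 4}
num_classes = 5

# Row k of the identity matrix is exactly the one-hot vector for class k.
one_hot = np.eye(num_classes)[y]
# one_hot[1] == [0., 0., 1., 0., 0.]
```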
# + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 8, "height": 4, "hidden": false, "row": 162, "width": 4}, "report_default": {}}}}
# check the sleep-phase distribution in the train and test samples
# + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 5, "hidden": false, "row": 166, "width": 12}, "report_default": {}}}}
# Reshape data set
# - eeg: 4 layers of 125*30 columns
# - pulse: 2 layers of 50*30 columns
# - accelerometer: 3 layers of 50*30 columns
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 4, "hidden": false, "row": 4, "width": 4}, "report_default": {"hidden": false}}}}
#Input to keras.layers.Conv1D should be 3-d with dimensions (nb_of_examples, timesteps, features)
#X is (nb_of_examples, timesteps)
# EEG channels sampled at 125Hz (4 x 125 x 30 = 15000 size per sample)
# Pulse Oxymeter channels (red and infra-red) sampled at 50 Hz (2 x 50 x 30 = 3000 size per sample)
# Accelerometer channels sampled at 50Hz (3 x 50 x 30 = 4500 size per sample)
eeg_train = np.empty(shape=(35064, 3750, 4))
eeg_train[:, :, 0]= X_train[:, :3750] #eeg_1
eeg_train[:, :, 1]= X_train[:, 3750:3750*2] #eeg_2
eeg_train[:, :, 2]= X_train[:, 3750*2:3750*3] #eeg_3
eeg_train[:, :, 3]= X_train[:, 3750*3:3750*4] #eeg_4
eeg_test = np.empty(shape=(8766, 3750, 4))
eeg_test[:, :, 0]= X_test[:, :3750] #eeg_1
eeg_test[:, :, 1]= X_test[:, 3750:3750*2] #eeg_2
eeg_test[:, :, 2]= X_test[:, 3750*2:3750*3] #eeg_3
eeg_test[:, :, 3]= X_test[:, 3750*3:3750*4] #eeg_4
print("eeg_train", eeg_train.shape)
pulse_train = np.empty(shape=(35064, 1500, 2))
pulse_train[:, :, 0]= X_train[:, 3750*4 : 3750*4 + 1500] #po_ir
pulse_train[:, :, 1]= X_train[:, 3750*4 + 1500 : 3750*4 +1500*2] #po_r
pulse_test = np.empty(shape=(8766, 1500, 2))
pulse_test[:, :, 0]= X_test[:, 3750*4 : 3750*4 + 1500] #po_ir
pulse_test[:, :, 1]= X_test[:, 3750*4 + 1500 : 3750*4 + 1500*2] #po_r
print("pulse_train", pulse_train.shape)
accelerometer_train = np.empty(shape=(35064, 1500, 3))
accelerometer_train[:, :, 0]= X_train[:, 3750*4 + 1500*2 : 3750*4 + 1500*3] #accelerometer_x
accelerometer_train[:, :, 1]= X_train[:, 3750*4 + 1500*3 : 3750*4 + 1500*4] #accelerometer_y
accelerometer_train[:, :, 2]= X_train[:, 3750*4 + 1500*4 : 3750*4 + 1500*5] #accelerometer_z
accelerometer_test = np.empty(shape=(8766, 1500, 3))
accelerometer_test[:, :, 0]= X_test[:, 3750*4 + 1500*2 : 3750*4 + 1500*3] #accelerometer_x
accelerometer_test[:, :, 1]= X_test[:, 3750*4 + 1500*3 : 3750*4 + 1500*4] #accelerometer_y
accelerometer_test[:, :, 2]= X_test[:, 3750*4 + 1500*4 : 3750*4 + 1500*5] #accelerometer_z
print("accelerometer_train", accelerometer_train.shape)
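# The repeated slicing above can be factored into a small helper that splits the flattened feature matrix into per-modality arrays of shape (n_samples, timesteps, channels). A sketch assuming the same column ordering; `split_modality` is a hypothetical helper, not part of the original pipeline:

```python
import numpy as np

def split_modality(X, start, timesteps, n_channels):
    """Slice n_channels consecutive blocks of `timesteps` columns out of X
    and stack them as (n_samples, timesteps, n_channels)."""
    out = np.empty((X.shape[0], timesteps, n_channels))
    for c in range(n_channels):
        lo = start + c * timesteps
        out[:, :, c] = X[:, lo:lo + timesteps]
    return out

# Column layout: 4 EEG blocks of 3750, then 2 pulse and 3 accelerometer blocks of 1500.
X_demo = np.arange(2 * 22500).reshape(2, 22500)
eeg = split_modality(X_demo, 0, 3750, 4)
pulse = split_modality(X_demo, 3750 * 4, 1500, 2)
acc = split_modality(X_demo, 3750 * 4 + 1500 * 2, 1500, 3)
print(eeg.shape, pulse.shape, acc.shape)  # (2, 3750, 4) (2, 1500, 2) (2, 1500, 3)
```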
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": true}}}}
def fitmodel(X_train, X_test, nb_filter1=5, nb_filter2=20, kernel1=25, kernel2=10, maxPool1=25, maxPool2=10, epochs=50,
dropout=False, modelfile=False):
model = Sequential()
model.add(Conv1D(nb_filter=nb_filter1, kernel_size=kernel1, activation='relu', strides = 1, padding='valid',
input_shape=X_train.shape[1:3]))
model.add(MaxPooling1D(maxPool1))
    model.add(Conv1D(nb_filter=nb_filter2, kernel_size=kernel2, activation='relu', strides = 1, padding='valid'))
model.add(MaxPooling1D(maxPool2))
model.add(Flatten())
model.add(Dense(500, activation='relu'))
if (dropout):
model.add(Dropout(0.5))
model.add(Dense(500, activation='relu'))
model.add(Dense(5, activation='softmax'))
#optimizer = SGD(lr=0.0001)
#optimizer='rmsprop'
optimizer=Adam(0.0001)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
history = model.fit(X_train, y_train, batch_size=100, epochs=epochs, validation_data = (X_test, y_test))
score = model.evaluate(X_test, y_test, batch_size=100)
print("Test score: ", score[0])
print("Test accuracy: ", score[1])
if (modelfile):
model.save('C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\models\\' + str(modelfile)+ '.h5')
return [model, history]
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": true}}}}
def modelPerf(L, X_test, y_true = y_true, sequential=True, verbose=True): # L = [model, history]
model = L[0]
history = L[1]
if (verbose):
model.summary()
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.ylabel('Model Accuracy')
plt.xlabel('Epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
if (sequential):
y_pred = pd.DataFrame(model.predict_classes(X_test, batch_size=128))
else:
        y_probas = model.predict(X_test, batch_size=100)
y_classes = y_probas.argmax(axis=-1) #keras.np_utils.probas_to_classes(y_probas)
y_pred = pd.DataFrame(y_classes)
    print("accuracy: ", accuracy_score(y_true, y_pred))
print("kappa: ", cohen_kappa_score(y_true, y_pred))
if (verbose):
AnalyzeError(y_true, y_pred)
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": true}}}}
#model = load_model('C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\interim\\model_eeg1_5_100.h5')
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {}}}}
#plot_model(model, to_file='model_empile.png', show_shapes=True, show_layer_names=True)
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": true}}}}
def submodel(X_train, nb_filter1=5, nb_filter2=20, kernel1=25, kernel2=10, maxPool1=25, maxPool2=10):
inputs = Input(shape=X_train.shape[1:3])
conv1 = Conv1D(nb_filter=nb_filter1, kernel_size=kernel1, activation='relu', strides = 1, padding='valid')(inputs)
pool1 = MaxPooling1D(maxPool1)(conv1)
    conv2 = Conv1D(nb_filter=nb_filter2, kernel_size=kernel2, activation='relu', strides = 1, padding='valid')(pool1)
pool2 = MaxPooling1D(maxPool2)(conv2)
outputs = Flatten()(pool2)
return Model(inputs=inputs, outputs=outputs)
def submodel_2D(X_train, nb_filter1=5, nb_filter2=20, kernel1=25, maxPool1=25):
inputs = Input(shape=X_train.shape[1:3])
conv1 = Conv1D(nb_filter=nb_filter1, kernel_size=kernel1, activation='relu', strides = 1, padding='valid')(inputs)
pool1 = MaxPooling1D(maxPool1)(conv1)
#conv2 = Conv1D(nb_filter=20, kernel_size=kernel2, activation='relu', strides = 1, padding='valid')(pool1)
#pool2 = MaxPooling1D(maxPool2)(conv2)
return Model(inputs=inputs, outputs=pool1)
# + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 0, "height": 53, "hidden": false, "row": 511, "width": 12}, "report_default": {}}}}
# # Channels are stacked in order to apply 2D convolution
# Submodel outputs are not concatenated along the filter channels but stacked, and the result is fed to a 2D convolution model
# - the sub convolution models for eeg, pulse and accelerometer have only one convolution layer and one maxPool layer; kernel sizes are 10 times greater than before
# - eeg: nb_filter1=10, kernel1=250, maxPool1=20
# - pulse & accelerometers: nb_filter1=10, kernel1=100, maxPool1=8
# <br>
# - submodel outputs are stacked into an input of shape (186, 5, 1) for a 2D convolution layer
# - this input is fed into a 2D convolution layer (nb_filter = 50, kernel_size=(40, 30)) and a maxPool layer (20, 1)
# - a flatten layer
# - two dense layers with 300 neurons
# - no regularization
# <br>
#
# 
#
# <br>
# Model performance for 100 epochs
# - accuracy: 0.73
# - kappa: 0.62
#
# <br>
#
# 
# <br>
# 
# <br>
# After 100 epochs, overfitting appears; it can be mitigated with L2 kernel regularization on the softmax output layer (cf. CNN on raw all notebook)
# -
# Increasing the size of the two hidden dense layers from 300 to 500 each gives
# - accuracy: 0.64
# - kappa: 0.49
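# The submodel output lengths follow from the usual 'valid' convolution and pooling formulas: conv output = (timesteps - kernel) // stride + 1, then an integer division by the pool size. A quick check with the EEG and pulse/accelerometer settings used below (arithmetic only, no Keras needed):

```python
def conv1d_out(steps, kernel, stride=1):
    """Output length of a 'valid' 1-D convolution."""
    return (steps - kernel) // stride + 1

def pool1d_out(steps, pool):
    """Output length after 1-D max pooling (stride defaults to the pool size)."""
    return steps // pool

eeg_steps = pool1d_out(conv1d_out(3750, 250), 20)    # EEG: kernel 250, pool 20
pulse_steps = pool1d_out(conv1d_out(1500, 100), 8)   # pulse/accelerometer: kernel 100, pool 8
print(eeg_steps, pulse_steps)  # 175 175
```

# The step counts must match for the submodel outputs to be concatenated along the filter axis.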
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 4, "height": 52, "hidden": false, "row": 99, "width": 4}, "report_default": {"hidden": false}}}}
tensor_board = TensorBoard('./logs/stacked')
nb_filter1=10
nb_filter2=50
dense_size= 500
reg = 0.1
model_eeg = submodel_2D(eeg_train, nb_filter1=nb_filter1, kernel1=250, maxPool1=20)
model_pulse = submodel_2D(pulse_train, nb_filter1=nb_filter1,kernel1=100, maxPool1=8)
model_acc = submodel_2D(accelerometer_train, nb_filter1=nb_filter1, kernel1=100, maxPool1=8)
in_eeg = Input(shape=eeg_train.shape[1:3])
in_pulse = Input(shape=pulse_train.shape[1:3])
in_acc = Input(shape=accelerometer_train.shape[1:3])
out_eeg = model_eeg(in_eeg)
out_pulse = model_pulse(in_pulse)
out_acc = model_acc(in_acc)
merged = concatenate([out_eeg, out_pulse, out_acc], axis=-1)
steps = int(merged.shape[1])
filters = int(merged.shape[2])
stacked_shape = (None, steps, filters, 1)
stacked = Reshape((steps, filters, 1))(merged)
conv3 = Conv2D(nb_filter=nb_filter2, kernel_size=(40, 30), activation='relu', strides=(1, 1), padding='valid',
data_format="channels_last")(stacked)
pool3 = MaxPooling2D(pool_size=(20, 1))(conv3)
flat = Flatten()(pool3)
dense1 = Dense(dense_size, activation='relu')(flat)
#drop1 = Dropout(0.5)(dense1)
dense2 = Dense(dense_size, activation='relu')(dense1)
out = Dense(5, activation='softmax', kernel_regularizer=regularizers.l2(reg))(dense2)
model = Model([in_eeg, in_pulse, in_acc], out)
optimizer=Adam(0.0001)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
#plot_model(model, to_file='model_stacked.png', show_shapes=True, show_layer_names=True)
model.summary()
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 8, "height": 12, "hidden": false, "row": 146, "width": 4}, "report_default": {"hidden": false}}}}
epochs= 150
history = model.fit([eeg_train, pulse_train, accelerometer_train], y_train, batch_size=100, epochs= epochs,
validation_data = ([eeg_test, pulse_test, accelerometer_test], y_test), callbacks=[tensor_board])
score = model.evaluate([eeg_test, pulse_test, accelerometer_test], y_test, batch_size=100)
print("Test score: ", score[0])
print("Test accuracy: ", score[1])
# + extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"col": 4, "height": 12, "hidden": false, "row": 151, "width": 4}, "report_default": {"hidden": false}}}}
modelPerf([model, history], [eeg_test, pulse_test, accelerometer_test], sequential=False, verbose=False)
# -
# To try to improve performance, we give the model more degrees of freedom:
# - reduce the kernel_size of the 2D convolution to (40, 10) instead of (40, 30) to "separate" data from eeg, pulse, and accelerometer
# - add strides=(1, 10) to the Conv2D to separate the data from eeg, pulse and acc
# - reduce the pooling after the 2D convolution to something more reasonable, from (20, 1) to (5, 1)
# - halve the 1D pooling (eeg = 10, pulse = accelerometer = 4)
# - L2 kernel regularisation on the softmax layer = 0.0001
# - increase the hidden dense layers from 300 to 500
#
# Model performance
# - accuracy: 0.61
# - kappa: 0.46
#
# with huge overfitting
# +
nb_filter1=10
#nb_filter2=50
nb_filter3 = 50
dense_size= 500
kernel_size3 = (40, 10)
maxpool3 = (5, 1)
reg = 0.0001
model_eeg = submodel_2D(eeg_train, nb_filter1=nb_filter1, kernel1=250, maxPool1=10)
model_pulse = submodel_2D(pulse_train, nb_filter1=nb_filter1,kernel1=100, maxPool1=4)
model_acc = submodel_2D(accelerometer_train, nb_filter1=nb_filter1, kernel1=100, maxPool1=4)
in_eeg = Input(shape=eeg_train.shape[1:3])
in_pulse = Input(shape=pulse_train.shape[1:3])
in_acc = Input(shape=accelerometer_train.shape[1:3])
out_eeg = model_eeg(in_eeg)
out_pulse = model_pulse(in_pulse)
out_acc = model_acc(in_acc)
merged = concatenate([out_eeg, out_pulse, out_acc], axis=-1)
steps = int(merged.shape[1])
filters = int(merged.shape[2])
stacked_shape = (None, steps, filters, 1)
stacked = Reshape((steps, filters, 1))(merged)
conv3 = Conv2D(nb_filter=nb_filter3, kernel_size=kernel_size3, activation='relu', strides=(1, 10), padding='valid',
data_format="channels_last")(stacked)
pool3 = MaxPooling2D(pool_size=maxpool3)(conv3)
flat = Flatten()(pool3)
dense1 = Dense(dense_size, activation='relu')(flat)
#drop1 = Dropout(0.5)(dense1)
dense2 = Dense(dense_size, activation='relu')(dense1)
out = Dense(5, activation='softmax', kernel_regularizer=regularizers.l2(reg))(dense2)
model = Model([in_eeg, in_pulse, in_acc], out)
optimizer=Adam(0.0001)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
#plot_model(model, to_file='model_stacked.png', show_shapes=True, show_layer_names=True)
model.summary()
# + [markdown] extensions={"jupyter_dashboards": {"version": 1, "views": {"grid_default": {"hidden": true}, "report_default": {"hidden": false}}}}
#
# -
epochs= 150
history = model.fit([eeg_train, pulse_train, accelerometer_train], y_train, batch_size=100, epochs= epochs,
validation_data = ([eeg_test, pulse_test, accelerometer_test], y_test), callbacks=[tensor_board])
score = model.evaluate([eeg_test, pulse_test, accelerometer_test], y_test, batch_size=100)
print("Test score: ", score[0])
print("Test accuracy: ", score[1])
modelPerf([model, history], [eeg_test, pulse_test, accelerometer_test], sequential=False, verbose=False)
# To reduce the previous overfitting:
# - keep the kernel_size of the 2D convolution at (40, 10) to "separate" data from eeg, pulse, and accelerometer
# - keep strides=(1, 10) on the Conv2D to separate the data from eeg, pulse and acc
# - increase the pooling after the 2D convolution from (5, 1) to (10, 1)
# - increase the three 1D poolings: from (eeg = 10, pulse = accelerometer = 4) to (eeg = 15, pulse = accelerometer = 6)
# - bring the size of the two dense layers back to 300
# - L2 kernel regularisation on the softmax layer stays at 0.0001
#
# Performance
# - accuracy: 0.67
# - kappa: 0.54
#
# still significant overfitting
# +
nb_filter1=10
#nb_filter2=50
nb_filter3 = 50
dense_size= 300
kernel_size3 = (40, 10)
maxpool3 = (10, 1)
reg = 0.0001
model_eeg = submodel_2D(eeg_train, nb_filter1=nb_filter1, kernel1=250, maxPool1=15)
model_pulse = submodel_2D(pulse_train, nb_filter1=nb_filter1,kernel1=100, maxPool1=6)
model_acc = submodel_2D(accelerometer_train, nb_filter1=nb_filter1, kernel1=100, maxPool1=6)
in_eeg = Input(shape=eeg_train.shape[1:3])
in_pulse = Input(shape=pulse_train.shape[1:3])
in_acc = Input(shape=accelerometer_train.shape[1:3])
out_eeg = model_eeg(in_eeg)
out_pulse = model_pulse(in_pulse)
out_acc = model_acc(in_acc)
merged = concatenate([out_eeg, out_pulse, out_acc], axis=-1)
steps = int(merged.shape[1])
filters = int(merged.shape[2])
stacked_shape = (None, steps, filters, 1)
stacked = Reshape((steps, filters, 1))(merged)
conv3 = Conv2D(nb_filter=nb_filter3, kernel_size=kernel_size3, activation='relu', strides=(1, 10), padding='valid',
data_format="channels_last")(stacked)
pool3 = MaxPooling2D(pool_size=maxpool3)(conv3)
flat = Flatten()(pool3)
dense1 = Dense(dense_size, activation='relu')(flat)
#drop1 = Dropout(0.5)(dense1)
dense2 = Dense(dense_size, activation='relu')(dense1)
out = Dense(5, activation='softmax', kernel_regularizer=regularizers.l2(reg))(dense2)
model = Model([in_eeg, in_pulse, in_acc], out)
optimizer=Adam(0.0001)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
#plot_model(model, to_file='model_stacked.png', show_shapes=True, show_layer_names=True)
model.summary()
# -
epochs= 100
history = model.fit([eeg_train, pulse_train, accelerometer_train], y_train, batch_size=100, epochs= epochs,
validation_data = ([eeg_test, pulse_test, accelerometer_test], y_test), callbacks=[tensor_board])
score = model.evaluate([eeg_test, pulse_test, accelerometer_test], y_test, batch_size=100)
print("Test score: ", score[0])
print("Test accuracy: ", score[1])
modelPerf([model, history], [eeg_test, pulse_test, accelerometer_test], sequential=False, verbose=False)
# Try to solve the overfitting with dropout regularization (instead of L2)
# - accuracy: 0.70
# - kappa: 0.57
#
# less overfitting, but it does not disappear
# +
nb_filter1=10
#nb_filter2=50
nb_filter3 = 50
dense_size= 300
kernel_size3 = (40, 10)
maxpool3 = (10, 1)
#reg = 0.0001
model_eeg = submodel_2D(eeg_train, nb_filter1=nb_filter1, kernel1=250, maxPool1=15)
model_pulse = submodel_2D(pulse_train, nb_filter1=nb_filter1,kernel1=100, maxPool1=6)
model_acc = submodel_2D(accelerometer_train, nb_filter1=nb_filter1, kernel1=100, maxPool1=6)
in_eeg = Input(shape=eeg_train.shape[1:3])
in_pulse = Input(shape=pulse_train.shape[1:3])
in_acc = Input(shape=accelerometer_train.shape[1:3])
out_eeg = model_eeg(in_eeg)
out_pulse = model_pulse(in_pulse)
out_acc = model_acc(in_acc)
merged = concatenate([out_eeg, out_pulse, out_acc], axis=-1)
steps = int(merged.shape[1])
filters = int(merged.shape[2])
stacked_shape = (None, steps, filters, 1)
stacked = Reshape((steps, filters, 1))(merged)
conv3 = Conv2D(nb_filter=nb_filter3, kernel_size=kernel_size3, activation='relu', strides=(1, 10), padding='valid',
data_format="channels_last")(stacked)
pool3 = MaxPooling2D(pool_size=maxpool3)(conv3)
flat = Flatten()(pool3)
dense1 = Dense(dense_size, activation='relu')(flat)
drop1 = Dropout(0.5)(dense1)
dense2 = Dense(dense_size, activation='relu')(drop1)
out = Dense(5, activation='softmax')(dense2)
model = Model([in_eeg, in_pulse, in_acc], out)
optimizer=Adam(0.0001)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
#plot_model(model, to_file='model_stacked.png', show_shapes=True, show_layer_names=True)
model.summary()
# -
epochs= 200
history = model.fit([eeg_train, pulse_train, accelerometer_train], y_train, batch_size=100, epochs= epochs,
validation_data = ([eeg_test, pulse_test, accelerometer_test], y_test), callbacks=[tensor_board])
score = model.evaluate([eeg_test, pulse_test, accelerometer_test], y_test, batch_size=100)
print("Test score: ", score[0])
print("Test accuracy: ", score[1])
modelPerf([model, history], [eeg_test, pulse_test, accelerometer_test], sequential=False, verbose=False)
# Back to the baseline:
# the only changes are Conv2D strides=(1, 10) and kernel_size=(40, 10)
# model performance
# - accuracy: 0.68
# - kappa: 0.55
# +
tensor_board = TensorBoard('./logs/stacked')
nb_filter1=10
nb_filter2=50
model_eeg = submodel_2D(eeg_train, nb_filter1=nb_filter1, kernel1=250, maxPool1=20)
model_pulse = submodel_2D(pulse_train, nb_filter1=nb_filter1,kernel1=100, maxPool1=8)
model_acc = submodel_2D(accelerometer_train, nb_filter1=nb_filter1, kernel1=100, maxPool1=8)
in_eeg = Input(shape=eeg_train.shape[1:3])
in_pulse = Input(shape=pulse_train.shape[1:3])
in_acc = Input(shape=accelerometer_train.shape[1:3])
out_eeg = model_eeg(in_eeg)
out_pulse = model_pulse(in_pulse)
out_acc = model_acc(in_acc)
#conv1D output (batch_size, new_steps, filters)
merged = concatenate([out_eeg, out_pulse, out_acc], axis=-1)
#batch_size = merged.shape[0]
steps = int(merged.shape[1])
#print("steps", steps)
filters = int(merged.shape[2])
stacked_shape = (None, steps, filters, 1)
print(stacked_shape)
# shape (None, 186, 15 = nb_filter1*3) | (batch_size, new_steps, filters)
stacked = Reshape((steps, filters, 1))(merged)
#conv2D inputs (samples, rows, cols, channels)
conv3 = Conv2D(nb_filter=nb_filter2, kernel_size=(40, 10), activation='relu', strides=(1, 10), padding='valid',
data_format="channels_last")(stacked)
pool3 = MaxPooling2D(pool_size=(20, 1))(conv3)
flat = Flatten()(pool3)
dense1 = Dense(300, activation='relu')(flat)
#drop1 = Dropout(0.5)(dense1)
dense2 = Dense(300, activation='relu')(dense1)
out = Dense(5, activation='softmax')(dense2)
model = Model([in_eeg, in_pulse, in_acc], out)
optimizer=Adam(0.0001)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
#plot_model(model, to_file='model_stacked.png', show_shapes=True, show_layer_names=True)
model.summary()
# -
epochs= 100
history = model.fit([eeg_train, pulse_train, accelerometer_train], y_train, batch_size=100, epochs= epochs,
validation_data = ([eeg_test, pulse_test, accelerometer_test], y_test), callbacks=[tensor_board])
score = model.evaluate([eeg_test, pulse_test, accelerometer_test], y_test, batch_size=100)
print("Test score: ", score[0])
print("Test accuracy: ", score[1])
modelPerf([model, history], [eeg_test, pulse_test, accelerometer_test], sequential=False, verbose=False)
# ### re-check original
#
# on 200 epochs
# - accuracy: 0.713
# - kappa: 0.60
# +
# 2D input(samples, rows, cols, channels) with channel last
#1D output (batch_size, new_steps, filters) (None, 186, 5) / for inputs shape=eeg_train.shape[1:3]
tensor_board = TensorBoard('./logs/stacked')
nb_filter1=10
nb_filter2=50
model_eeg = submodel_2D(eeg_train, nb_filter1=nb_filter1, kernel1=250, maxPool1=20)
model_pulse = submodel_2D(pulse_train, nb_filter1=nb_filter1,kernel1=100, maxPool1=8)
model_acc = submodel_2D(accelerometer_train, nb_filter1=nb_filter1, kernel1=100, maxPool1=8)
in_eeg = Input(shape=eeg_train.shape[1:3])
in_pulse = Input(shape=pulse_train.shape[1:3])
in_acc = Input(shape=accelerometer_train.shape[1:3])
out_eeg = model_eeg(in_eeg)
out_pulse = model_pulse(in_pulse)
out_acc = model_acc(in_acc)
#conv1D output (batch_size, new_steps, filters)
merged = concatenate([out_eeg, out_pulse, out_acc], axis=-1)
#batch_size = merged.shape[0]
steps = int(merged.shape[1])
#print("steps", steps)
filters = int(merged.shape[2])
stacked_shape = (None, steps, filters, 1)
print(stacked_shape)
# shape (None, 186, 15 = nb_filter1*3) | (batch_size, new_steps, filters)
stacked = Reshape((steps, filters, 1))(merged)
conv3 = Conv2D(nb_filter=nb_filter2, kernel_size=(40, 30), activation='relu', strides=(1, 1), padding='valid',
data_format="channels_last")(stacked)
pool3 = MaxPooling2D(pool_size=(20, 1))(conv3)
flat = Flatten()(pool3)
dense1 = Dense(300, activation='relu')(flat)
#drop1 = Dropout(0.5)(dense1)
dense2 = Dense(300, activation='relu')(dense1)
out = Dense(5, activation='softmax')(dense2)
model = Model([in_eeg, in_pulse, in_acc], out)
optimizer=Adam(0.0001)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
#plot_model(model, to_file='model_stacked.png', show_shapes=True, show_layer_names=True)
model.summary()
# +
epochs= 200
history = model.fit([eeg_train, pulse_train, accelerometer_train], y_train, batch_size=100, epochs= epochs,
validation_data = ([eeg_test, pulse_test, accelerometer_test], y_test), callbacks=[tensor_board])
score = model.evaluate([eeg_test, pulse_test, accelerometer_test], y_test, batch_size=100)
print("Test score: ", score[0])
print("Test accuracy: ", score[1])
model.save('C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\models\\stacked_base' + '.h5')
# -
modelPerf([model, history], [eeg_test, pulse_test, accelerometer_test], sequential=False, verbose=False)
# +
# uses eeg_train, pulse_train, accelerometer_train,
# eeg_test, pulse_test, accelerometer_test,
# y_test, y_true as global variables
def stackedmodel(ksize1=250, ksize2=100, ksize3=100, nb_filter1=10, nb_filter2=50, dsize=300, reg=10, epochs=100):
model_eeg = submodel_2D(eeg_train, nb_filter1=nb_filter1, kernel1=ksize1, maxPool1=20)
model_pulse = submodel_2D(pulse_train, nb_filter1=nb_filter1,kernel1=ksize2, maxPool1=8)
model_acc = submodel_2D(accelerometer_train, nb_filter1=nb_filter1, kernel1=ksize3, maxPool1=8)
in_eeg = Input(shape=eeg_train.shape[1:3])
in_pulse = Input(shape=pulse_train.shape[1:3])
in_acc = Input(shape=accelerometer_train.shape[1:3])
out_eeg = model_eeg(in_eeg)
out_pulse = model_pulse(in_pulse)
out_acc = model_acc(in_acc)
merged = concatenate([out_eeg, out_pulse, out_acc], axis=-1)
steps = int(merged.shape[1])
filters = int(merged.shape[2])
stacked_shape = (None, steps, filters, 1)
stacked = Reshape((steps, filters, 1))(merged)
conv3 = Conv2D(nb_filter=nb_filter2, kernel_size=(40, 30), activation='relu', strides=(1, 1), padding='valid',
data_format="channels_last")(stacked)
pool3 = MaxPooling2D(pool_size=(20, 1))(conv3)
flat = Flatten()(pool3)
dense1 = Dense(dsize, activation='relu')(flat)
#drop1 = Dropout(0.5)(dense1)
dense2 = Dense(dsize, activation='relu')(dense1)
out = Dense(5, activation='softmax', kernel_regularizer=regularizers.l2(reg))(dense2)
model = Model([in_eeg, in_pulse, in_acc], out)
optimizer=Adam(0.0001)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
model.fit([eeg_train, pulse_train, accelerometer_train], y_train, batch_size=100, epochs= epochs,
validation_data = ([eeg_test, pulse_test, accelerometer_test], y_test))
score = model.evaluate([eeg_test, pulse_test, accelerometer_test], y_test, batch_size=100)
y_probas = model.predict([eeg_test, pulse_test, accelerometer_test], batch_size=100)
y_classes = y_probas.argmax(axis=-1) #keras.np_utils.probas_to_classes(y_probas)
y_pred = pd.DataFrame(y_classes)
loss = score[0]
accuracy = score[1]
kappa = cohen_kappa_score(y_true, y_pred)
arguments = "Ksize1: {}, Ksize2: {}, Ksize3: {}, nb_filter1: {}, nb_filter2: {}, densitysize: {}, l2: {}, epochs: {}".format(ksize1, ksize2, ksize3, nb_filter1, nb_filter2, dsize, reg, epochs)
del model
#if K.backend() == 'tensorflow':
#K.clear_session()
#gc.collect()
#model = None
#model_eeg = None
#model_loss = None
#model_acc = None
return [arguments, loss, accuracy, kappa]
#return {'loss': -kappa, 'status': STATUS_OK, 'model': model}
# +
#gc.collect()
#K.clear_session()
# +
#kill kernel
Lresults =[]
epochs = 1
reg = 10
ksize1=250
ksize2=100
ksize3=100
i =1
for dsize in [250, 300]:
for nb_filter1 in [10, 20]:
for nb_filter2 in [25, 50]:
r = stackedmodel(ksize1=ksize1, ksize2=ksize2, ksize3=ksize3, nb_filter1=nb_filter1,
nb_filter2=nb_filter2, dsize=dsize, reg=reg, epochs=epochs)
print()
print('--------------------------------')
print()
print("iteration: ", i)
print(r[0])
print("loss", r[1])
print('accuracy', r[2] )
print('kappa', r[3])
i = i+1
Lresults.append(r)
K.clear_session()
sess.close()
gc.collect()
sess = tf.Session(config=config)
K.set_session(sess)
Lresults
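# The nested loops above enumerate 2 x 2 x 2 = 8 configurations; the same sweep can be written flat with itertools.product, which makes it easier to add hyperparameters later (a sketch; `stackedmodel` is the function defined earlier):

```python
from itertools import product

grid = {"dsize": [250, 300], "nb_filter1": [10, 20], "nb_filter2": [25, 50]}
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))   # 8
print(configs[0])     # {'dsize': 250, 'nb_filter1': 10, 'nb_filter2': 25}
# for cfg in configs:
#     r = stackedmodel(ksize1=250, ksize2=100, ksize3=100, reg=10, epochs=1, **cfg)
```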
# +
for r in Lresults:
print(r[0])
print("loss", format(r[1], '.2f'))
print('accuracy', format(r[2], '.2f') )
print('kappa', format(r[3], '.2f'))
print()
# -
# +
#trying with hyperas
def data():
dataPath = "C:\\Users\\i053131\\Desktop\\Epilepsie\\Dreem\\data\\raw\\"
trainOutput = pd.read_csv(dataPath + "challenge_fichier_de_sortie_dentrainement_classification_en_stade_de_sommeil_a_laide_de_signaux_mesures_par_le_bandeau_dreem.csv", sep=";")
Y = trainOutput["label"]
filetrain= dataPath + "train.h5"
filetest= dataPath + "test.h5"
h5 = h5py.File(filetrain, "r")
eeg_1 = pd.DataFrame(h5['eeg_1'][:])
eeg_2 = pd.DataFrame(h5['eeg_2'][:])
eeg_3 = pd.DataFrame(h5['eeg_3'][:])
eeg_4 = pd.DataFrame(h5['eeg_4'][:])
po_ir = pd.DataFrame(h5['po_ir'][:])
po_r = pd.DataFrame(h5['po_r'][:])
accelerometer_x = pd.DataFrame(h5['accelerometer_x'][:])
accelerometer_y = pd.DataFrame(h5['accelerometer_y'][:])
accelerometer_z = pd.DataFrame(h5['accelerometer_z'][:])
df = pd.concat([eeg_1, eeg_2, eeg_3, eeg_4, po_ir, po_r, accelerometer_x, accelerometer_y, accelerometer_z],
axis=1, sort = False)
df.columns = range(15000 + 3000 + 4500)
df["Y"] = Y
training, test = train_test_split(df, test_size=0.2, random_state=42)
X = training.iloc[:,:-1]
X_train = X.values
y = training.iloc[:,-1]
y_train = to_categorical(y.values, num_classes=5)
X_test = test.iloc[:,:-1].values
y_true = test.iloc[:,-1].values
y_test = to_categorical(y_true, num_classes=5)
eeg_train = np.empty(shape=(35064, 3750, 4))
eeg_train[:, :, 0]= X_train[:, :3750] #eeg_1
eeg_train[:, :, 1]= X_train[:, 3750:3750*2] #eeg_2
eeg_train[:, :, 2]= X_train[:, 3750*2:3750*3] #eeg_3
eeg_train[:, :, 3]= X_train[:, 3750*3:3750*4] #eeg_4
eeg_test = np.empty(shape=(8766, 3750, 4))
eeg_test[:, :, 0]= X_test[:, :3750] #eeg_1
eeg_test[:, :, 1]= X_test[:, 3750:3750*2] #eeg_2
eeg_test[:, :, 2]= X_test[:, 3750*2:3750*3] #eeg_3
eeg_test[:, :, 3]= X_test[:, 3750*3:3750*4] #eeg_4
print("eeg_train", eeg_train.shape)
pulse_train = np.empty(shape=(35064, 1500, 2))
pulse_train[:, :, 0]= X_train[:, 3750*4 : 3750*4 + 1500] #po_ir
pulse_train[:, :, 1]= X_train[:, 3750*4 + 1500 : 3750*4 +1500*2] #po_r
pulse_test = np.empty(shape=(8766, 1500, 2))
pulse_test[:, :, 0]= X_test[:, 3750*4 : 3750*4 + 1500] #po_ir
pulse_test[:, :, 1]= X_test[:, 3750*4 + 1500 : 3750*4 + 1500*2] #po_r
print("pulse_train", pulse_train.shape)
accelerometer_train = np.empty(shape=(35064, 1500, 3))
accelerometer_train[:, :, 0]= X_train[:, 3750*4 + 1500*2 : 3750*4 + 1500*3] #accelerometer_x
accelerometer_train[:, :, 1]= X_train[:, 3750*4 + 1500*3 : 3750*4 + 1500*4] #accelerometer_y
accelerometer_train[:, :, 2]= X_train[:, 3750*4 + 1500*4 : 3750*4 + 1500*5] #accelerometer_z
accelerometer_test = np.empty(shape=(8766, 1500, 3))
accelerometer_test[:, :, 0]= X_test[:, 3750*4 + 1500*2 : 3750*4 + 1500*3] #accelerometer_x
accelerometer_test[:, :, 1]= X_test[:, 3750*4 + 1500*3 : 3750*4 + 1500*4] #accelerometer_y
accelerometer_test[:, :, 2]= X_test[:, 3750*4 + 1500*4 : 3750*4 + 1500*5] #accelerometer_z
    print("accelerometer_train", accelerometer_train.shape)
x_train = [eeg_train, pulse_train, accelerometer_train] #check
x_test = [eeg_test, pulse_test, accelerometer_test]
return x_train, y_train, x_test, y_test
# +
def create_model(x_train, y_train, x_test, y_test):
ksize1=250; ksize2=100; ksize3=100
nb_filter1={{choice([10, 20, 50])}}
nb_filter2={{choice([50, 100, 200])}}
dsize={{choice([250, 300, 350])}}
reg=10; epochs=100
model_eeg = submodel_2D(x_train[0], nb_filter1=nb_filter1, kernel1=ksize1, maxPool1=20)
model_pulse = submodel_2D(x_train[1], nb_filter1=nb_filter1,kernel1=ksize2, maxPool1=8)
model_acc = submodel_2D(x_train[2], nb_filter1=nb_filter1, kernel1=ksize3, maxPool1=8)
in_eeg = Input(shape=x_train[0].shape[1:3])
in_pulse = Input(shape=x_train[1].shape[1:3])
in_acc = Input(shape=x_train[2].shape[1:3])
out_eeg = model_eeg(in_eeg)
out_pulse = model_pulse(in_pulse)
out_acc = model_acc(in_acc)
merged = concatenate([out_eeg, out_pulse, out_acc], axis=-1)
steps = int(merged.shape[1])
filters = int(merged.shape[2])
stacked_shape = (None, steps, filters, 1)
stacked = Reshape((steps, filters, 1))(merged)
conv3 = Conv2D(nb_filter=nb_filter2, kernel_size=(40, 30), activation='relu', strides=(1, 1), padding='valid',
data_format="channels_last")(stacked)
pool3 = MaxPooling2D(pool_size=(20, 1))(conv3)
flat = Flatten()(pool3)
dense1 = Dense(dsize, activation='relu')(flat)
#drop1 = Dropout(0.5)(dense1)
dense2 = Dense(dsize, activation='relu')(dense1)
out = Dense(5, activation='softmax', kernel_regularizer=regularizers.l2(reg))(dense2)
model = Model([in_eeg, in_pulse, in_acc], out)
optimizer=Adam(0.0001)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=100, epochs= epochs,
validation_data = (x_test, y_test))
score = model.evaluate(x_test, y_test, batch_size=100)
#y_probas = model.predict(x_test, batch_size=100)
#y_classes = y_probas.argmax(axis=-1) #keras.np_utils.probas_to_classes(y_probas)
#y_pred = pd.DataFrame(y_classes)
loss = score[0]
accuracy = score[1]
#kappa = cohen_kappa_score(y_true, y_pred)
#arguments = "Ksize1: {}, Ksize2: {}, Ksize3: {}, nb_filter1: {}, nb_filter2: {}, densitysize: {}, l2: {}, epochs: {}".format(ksize1, ksize2, ksize3, nb_filter1, nb_filter2, dsize, reg, epochs)
#model = None
#model_eeg = None
#model_loss = None
#model_acc = None
#return [arguments, loss, accuracy, kappa]
return {'loss': -accuracy, 'status': STATUS_OK, 'model': model}
#y_true = test.iloc[:,-1].values
#y_test = to_categorical(y_true, num_classes=5)
# +
best_run, best_model = optim.minimize(model=create_model,
data=data,
algo=tpe.suggest,
max_evals=5,
trials=Trials(), notebook_name='CNN on raw zoom on stacke model')
X_train, Y_train, X_test, Y_test = data()
print("Evaluation of best performing model:")
print(best_model.evaluate(X_test, Y_test))
print("Best performing model chosen hyper-parameters:")
print(best_run)
# +
epochs = 1
reg = 10
ksize1=250
ksize2=100
ksize3=100
dsize = 300
nb_filter1=20
nb_filter2 =50
result = stackedmodel(ksize1=ksize1, ksize2=ksize2, ksize3=ksize3, nb_filter1=nb_filter1,
nb_filter2=nb_filter2, dsize=dsize, reg=reg, epochs=epochs)
result
# -
print(result[0])
print("loss", format(result[1], '.2f'))
print('accuracy', format(result[2], '.2f'))
print('kappa', format(result[3], '.2f'))
| notebooks/CNN on raw zoom on stacke model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial
# This notebook provides a use case example of the ``EsnTorch`` library.
# It describes the implementation of an Echo State Network (ESN)
# for text classification on the IMDB dataset.
#
# The instantiation, training and evaluation of an ESN for text classification
# is achieved via the following steps:
# - Import the required modules
# - Create the dataloaders
# - Instantiate the ESN by specifying:
# - a reservoir
# - a loss function
# - a learning algorithm
# - Train the ESN
# - Training and testing results
# ## Libraries
# +
# # !pip install transformers==4.8.2
# # !pip install datasets==1.7.0
# -
# Comment this out if the library is installed
import os
import sys
sys.path.insert(0, os.path.abspath(".."))
# +
# import numpy as np
from sklearn.metrics import classification_report
import torch
from datasets import load_dataset, Dataset, concatenate_datasets
from transformers import AutoTokenizer
from transformers.data.data_collator import DataCollatorWithPadding
import esntorch.core.reservoir as res
import esntorch.core.learning_algo as la
import esntorch.core.merging_strategy as ms
import esntorch.core.esn as esn
# -
# %load_ext autoreload
# %autoreload 2
# ## Device and Seed
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
# ## Load and Tokenize Data
# +
# Custom functions for loading and preparing data
def tokenize(sample):
"""Tokenize sample"""
sample = tokenizer(sample['text'], truncation=True, padding=False, return_length=True)
return sample
def load_and_prepare_dataset(dataset_name, split, cache_dir):
"""
Load dataset from the datasets library of HuggingFace.
Tokenize and add length.
"""
# Load dataset
    dataset = load_dataset(dataset_name, split=split, cache_dir=cache_dir)
# Rename label column for tokenization purposes
dataset = dataset.rename_column('label', 'labels')
# Tokenize data
dataset = dataset.map(tokenize, batched=True)
dataset = dataset.rename_column('length', 'lengths')
dataset.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels', 'lengths'])
return dataset
# +
# Load BERT tokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
# Load and prepare data
CACHE_DIR = 'cache_dir/' # put your path here
full_dataset = load_and_prepare_dataset('imdb', split=None, cache_dir=CACHE_DIR)
train_dataset = full_dataset['train'].sort("lengths")
test_dataset = full_dataset['test'].sort("lengths")
# Create dict of all datasets
dataset_d = {
'train': train_dataset,
'test': test_dataset
}
# -
dataset_d
# +
# Create dict of dataloaders
dataloader_d = {}
for k, v in dataset_d.items():
dataloader_d[k] = torch.utils.data.DataLoader(v, batch_size=256, collate_fn=DataCollatorWithPadding(tokenizer))
# -
dataloader_d
dataset_d['train']['lengths'][0:10]
# ## Model
# +
# ESN parameters
esn_params = {
'embedding_weights': 'bert-base-uncased', # TEXT.vocab.vectors,
'distribution' : 'gaussian', # uniform, gaussian
'input_dim' : 768, # dim of BERT encoding!
'reservoir_dim' : 1000,
'input_scaling' : 1.0,
'bias_scaling' : 1.0, # can be None
'sparsity' : 0.99,
'spectral_radius' : 0.9,
'leaking_rate': 0.5,
    'activation_function' : 'relu', # 'tanh' or 'relu'
'mean' : 0.0,
'std' : 1.0,
    #'learning_algo' : None, # initialized below
    #'criterion' : None, # initialized below
    #'optimizer' : None, # initialized below
'merging_strategy' : 'mean',
'bidirectional' : False, # True
'device' : device,
'seed' : 42
}
# Instantiate the ESN
ESN = esn.EchoStateNetwork(**esn_params)
# Define the learning algo of the ESN
ESN.learning_algo = la.RidgeRegression(alpha=10)
# Put the ESN on the device (CPU or GPU)
ESN = ESN.to(device)
# +
# Warm up the ESN on 3 sentences
nb_sentences = 3
for i in range(nb_sentences):
sentence = dataset_d["train"].select([i])
dataloader_tmp = torch.utils.data.DataLoader(sentence,
batch_size=1,
collate_fn=DataCollatorWithPadding(tokenizer))
for sentence in dataloader_tmp:
ESN.warm_up(sentence)
# -
# ## Training
# %%time
# training the ESN
ESN.fit(dataloader_d["train"])
# ## Results
# Train predictions and accuracy
train_pred, train_acc = ESN.predict(dataloader_d["train"], verbose=False)
train_acc.item()
# Test predictions and accuracy
test_pred, test_acc = ESN.predict(dataloader_d["test"], verbose=False)
test_acc.item()
# Test classification report
print(classification_report(test_pred.tolist(),
dataset_d['test']['labels'].tolist(),
digits=4))
# + active=""
# # With 'tanh'
# train: 89.6199951171875
# test: 88.16400146484375
# + active=""
# # With 'relu'
# train: 89.7439956665039
# test: 88.26799774169922
# + active=""
# # With 'relu' and 'gaussian'
# train: 89.7959976196289
# test: 88.41999816894531
# +
# 'relu' and 'gaussian' seem better!
# Redo the experiments with those params...
# Also, the model seems to generalize better on the test set with small scalings, e.g., 0.1
# examples/example_IMDB.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab_type="code" id="jzzuGAMna4NG" colab={}
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# + id="hDfnV-3GMtXW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 546} outputId="c8ed5199-f186-4aba-c981-4a3d183d0180"
# !pip install torchaudio
# !pip install PyDrive
# !pip install soundfile
# + colab_type="code" id="ppRP3ZqP8vZP" colab={}
import torch
import torchaudio
# + id="_Mox7PgiRLqz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 126} outputId="a15151a5-856d-4296-d6cf-2bd6e5be7d8b"
# !git clone https://github.com/facebookresearch/CPC_audio.git
# + id="Pe4sHjb2RLRt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 728} outputId="f62f2ee8-16a8-4416-9cd4-ebaa7308dd80"
# %cd /content/CPC_audio
# !python setup.py develop
# + [markdown] colab_type="text" id="YZ1Wywx3BNGS"
# # Part 1 : contrastive predictive coding
#
# Contrastive Predictive Coding (CPC) is a method of unsupervised training for speech models. The idea behind it is pretty simple:
#
#
# 1. The raw audio wave is passed through a convolutional network: the ```encoder```
# 2. Then, the encoder's output is given to a recurrent network: the ```context```
# 3. A third party network, the ```prediction_network``` will try to predict the future embeddings of the encoder using the output of the context network.
#
# In order to avoid a collapse to trivial solutions, the prediction_network doesn't try to reconstruct the future features. Instead, using the context output $c_t$ at time $t$, it is trained to discriminate the real encoder representation $g_{t+k}$ at time $t+k$ from several other features $(g_n)_n$ taken elsewhere in the batch. Thus the loss becomes:
#
# \\[ \mathcal{L}_c = \frac{1}{K} \sum_{k=1}^K \text{Cross\_entropy}(\phi_k(c_t), g_{t+k}) \\]
#
# Or:
#
# \\[ \mathcal{L}_c = - \frac{1}{K} \sum_{k=1}^K \log \frac{ \exp\left(\phi_k(c_t)^\top g_{t+k}\right) }{ \sum_{g_n\in\mathcal{N}_t} \exp\left(\phi_k(c_t)^\top g_n\right)} \\]
#
# Where:
#
#
# * $\phi_k$ is the prediction network for the kth timestep
# * $\mathcal{N}_t$ is the set of all negative examples sampled for timestep $t$
#
#
#
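The loss above can be made concrete with a toy computation in plain Python. This is a sketch in which scalar scores stand in for the dot products $\phi_k(c_t)^\top g$; all values are invented:

```python
import math

def info_nce(pos_scores, neg_scores):
    """Average over K prediction steps of -log softmax(positive | all candidates)."""
    K = len(pos_scores)
    total = 0.0
    for k in range(K):
        logits = [pos_scores[k]] + neg_scores[k]
        m = max(logits)                                   # stabilise the softmax
        log_z = m + math.log(sum(math.exp(s - m) for s in logits))
        total += log_z - pos_scores[k]                    # = -log softmax(positive)
    return total / K

# With all scores equal, the model cannot tell the positive apart:
# the loss is log(number of candidates) = log(4).
chance = info_nce([0.0], [[0.0, 0.0, 0.0]])
# With the positive scored far above the negatives, the loss is close to 0.
confident = info_nce([10.0], [[0.0, 0.0, 0.0]])
```

The real criterion does exactly this, but batched over all timesteps and all K prediction heads.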
# + [markdown] colab_type="text" id="frPFYXuPfNPs"
# ## Exercise 1 : Building the model
#
# In this exercise, we will build and train a small CPC model using the repository CPC_audio.
#
# The code below instantiates a context and an encoder network.
# + colab_type="code" id="8g-xPSdLRdui" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="e67a44fd-e69a-4373-f4c8-67ca746d09a2"
# %cd /content/CPC_audio
from cpc.model import CPCEncoder, CPCAR
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
DIM_ENCODER=256
DIM_CONTEXT=256
KEEP_HIDDEN_VECTOR=False
N_LEVELS_CONTEXT=1
CONTEXT_RNN="LSTM"
N_PREDICTIONS=12
LEARNING_RATE=2e-4
N_NEGATIVE_SAMPLE =128
# + colab_type="code" id="7Wx8WkrQk9bQ" colab={}
encoder = CPCEncoder(DIM_ENCODER).to(device)
context = CPCAR(DIM_ENCODER, DIM_CONTEXT, KEEP_HIDDEN_VECTOR, 1, mode=CONTEXT_RNN).to(device)
# + colab_type="code" id="f9BrweAIla4J" colab={}
# Several functions that will be necessary to load the data later
from cpc.dataset import findAllSeqs, AudioBatchData, parseSeqLabels
SIZE_WINDOW = 20480
BATCH_SIZE=8
def load_dataset(path_dataset, file_extension='.wav', phone_label_dict=None):
data_list, speakers = findAllSeqs(path_dataset, extension=file_extension)
dataset = AudioBatchData(path_dataset, SIZE_WINDOW, data_list, phone_label_dict, len(speakers))
return dataset
# + [markdown] colab_type="text" id="58pQ7ysXk9ZO"
# Now build a new class, ```CPCModel```, which will chain the encoder and the context network and return both of their outputs.
# + colab_type="code" id="rR5IYRTpRF8T" colab={}
class CPCModel(torch.nn.Module):
def __init__(self,
encoder,
AR):
super(CPCModel, self).__init__()
self.gEncoder = encoder
self.gAR = AR
def forward(self, batch_data):
encoder_output = self.gEncoder(batch_data)
#print(encoder_output.shape)
        # The encoder's output is not in the right format:
        # it is Batch_size x Hidden_size x temp_size,
        # while the context requires Batch_size x temp_size x Hidden_size,
        # so you need to permute
context_input = encoder_output.permute(0, 2, 1)
context_output = self.gAR(context_input)
#print(context_output.shape)
return context_output, encoder_output
# + [markdown] colab_type="text" id="XUJgm6Rl4vS4"
# Let's test your code !
#
# + colab_type="code" id="D1x1n4mv4y03" colab={}
audio = torchaudio.load("/content/drive/My Drive/DATA/train/recording1/200625-153409_yor_af2_elicit_0.wav")[0]
audio = audio.view(1, 1, -1)
cpc_model = CPCModel(encoder, context).to(device)
context_output, encoder_output = cpc_model(audio.to(device))
# + [markdown] colab_type="text" id="X27ce8Hy3C2p"
# ## Exercise 2 : CPC loss
#
# We will define a class ```CPCCriterion``` which will hold the prediction networks $\phi_k$ defined above and compute the classification loss $\mathcal{L}_c$.
#
# a) In this exercise, the $\phi_k$ will be a linear transform, i.e.:
#
# \\[ \phi_k(c_t) = \mathbf{A}_k c_t\\]
#
# Using the class [torch.nn.Linear](https://pytorch.org/docs/stable/nn.html#torch.nn.Linear), define the transformations $\phi_k$ in the code below and complete the function ```get_prediction_k``` which computes $\phi_k(c_t)$ for a given batch of vectors $c_t$.
#
# b) Using both ```get_prediction_k``` and ```sample_negatives``` defined below, write the forward function which takes as input two batches of features $c_t$ and $g_t$ and outputs the classification loss $\mathcal{L}_c$ and the average accuracy over all predictions.
# + colab_type="code" id="UlAs-z1fBc3W" colab={}
# Exercise 2: write the CPC loss
# a) Write the negative sampling (with some help)
# ERRATUM: it's really hard, the sampling will be provided
class CPCCriterion(torch.nn.Module):
def __init__(self,
K,
dim_context,
dim_encoder,
n_negative):
super(CPCCriterion, self).__init__()
self.K_ = K
self.dim_context = dim_context
self.dim_encoder = dim_encoder
self.n_negative = n_negative
self.predictors = torch.nn.ModuleList()
for k in range(self.K_):
# TO COMPLETE !
            # An affine transformation in pytorch is a nn.Linear layer;
            # to get a purely linear transformation, set bias=False.
            # input dimension of the layer = dimension of the context
            # output dimension of the layer = dimension of the encoder
            self.predictors.append(torch.nn.Linear(dim_context, dim_encoder, bias=False))
def get_prediction_k(self, context_data):
#TO COMPLETE !
output = []
# For each time step k
for k in range(self.K_):
# We need to compute phi_k = A_k * c_t
phi_k = self.predictors[k](context_data)
output.append(phi_k)
return output
def sample_negatives(self, encoded_data):
r"""
Sample some negative examples in the given encoded data.
Input:
- encoded_data size: B x T x H
Returns
- outputs of size B x (n_negative + 1) x (T - K_) x H
outputs[:, 0, :, :] contains the positive example
outputs[:, 1:, :, :] contains negative example sampled in the batch
- labels, long tensor of size B x (T - K_)
Since the positive example is always at coordinates 0 for all sequences
in the batch and all timestep in the sequence, labels is just a tensor
full of zeros !
"""
batch_size, time_size, dim_encoded = encoded_data.size()
window_size = time_size - self.K_
outputs = []
neg_ext = encoded_data.contiguous().view(-1, dim_encoded)
n_elem_sampled = self.n_negative * window_size * batch_size
# Draw nNegativeExt * batchSize negative samples anywhere in the batch
batch_idx = torch.randint(low=0, high=batch_size,
size=(n_elem_sampled, ),
device=encoded_data.device)
seq_idx = torch.randint(low=1, high=time_size,
size=(n_elem_sampled, ),
device=encoded_data.device)
base_idx = torch.arange(0, window_size, device=encoded_data.device)
base_idx = base_idx.view(1, 1, window_size)
base_idx = base_idx.expand(1, self.n_negative, window_size)
base_idx = base_idx.expand(batch_size, self.n_negative, window_size)
seq_idx += base_idx.contiguous().view(-1)
seq_idx = torch.remainder(seq_idx, time_size)
ext_idx = seq_idx + batch_idx * time_size
neg_ext = neg_ext[ext_idx].view(batch_size, self.n_negative,
window_size, dim_encoded)
label_loss = torch.zeros((batch_size, window_size),
dtype=torch.long,
device=encoded_data.device)
for k in range(1, self.K_ + 1):
# Positive samples
if k < self.K_:
pos_seq = encoded_data[:, k:-(self.K_-k)]
else:
pos_seq = encoded_data[:, k:]
pos_seq = pos_seq.view(batch_size, 1, pos_seq.size(1), dim_encoded)
full_seq = torch.cat((pos_seq, neg_ext), dim=1)
outputs.append(full_seq)
return outputs, label_loss
def forward(self, encoded_data, context_data):
# TO COMPLETE:
# Perform the full cpc criterion
# Returns 2 values:
# - the average classification loss avg_loss
        # - the average classification accuracy avg_acc
        # Reminder: the permutation!
encoded_data = encoded_data.permute(0, 2, 1)
# First we need to sample the negative examples
negative_samples, labels = self.sample_negatives(encoded_data)
# Then we must compute phi_k
phi_k = self.get_prediction_k(context_data)
# Finally we must get the dot product between phi_k and negative_samples
# for each k
#The total loss is the average of all losses
avg_loss = 0
        # Average accuracy
avg_acc = 0
for k in range(self.K_):
B, N_sampled, S_small, H = negative_samples[k].size()
B, S, H = phi_k[k].size()
            # As noted before, S = S_small + K. For segments too far in the sequence
            # there are no positive examples anyway, so we must shorten phi_k
phi = phi_k[k][:, :S_small]
# Now the dot product
# You have several ways to do that, let's do the simple but non optimal
# one
# pytorch has a matrix product function https://pytorch.org/docs/stable/torch.html#torch.bmm
# But it takes only 3D tensors of the same batch size !
# To begin negative_samples is a 4D tensor !
# We want to compute the dot product for each features, of each sequence
# of the batch. Thus we are trying to compute a dot product for all
            # B * N_sampled * S_small 1D vectors of negative_samples[k].
            # Now, a 1D tensor of size H is also a matrix of size 1 x H
# Then, we must view it as a 3D tensor of size (B* N_sampled * S_small, 1, H)
negative_sample_k = negative_samples[k].view(B* N_sampled* S_small, 1, H)
# But now phi and negative_sample_k no longer have the same batch size !
# No worries, we can expand phi so that each sequence of the batch
# is repeated N_sampled times
phi = phi.view(B, 1,S_small, H).expand(B, N_sampled, S_small, H)
# And now we can view it as a 3D tensor
phi = phi.contiguous().view(B * N_sampled * S_small, H, 1)
# We can finally get the dot product !
scores = torch.bmm(negative_sample_k, phi)
# Dot_product has a size (B * N_sampled * S_small , 1, 1)
# Let's reorder it a bit
scores = scores.reshape(B, N_sampled, S_small)
            # For each element of the sequence and each sampled candidate, it gives
# a floating score stating the likelihood of this element being the
# true one.
# Now the classification loss, we need to use the Cross Entropy loss
# https://pytorch.org/docs/master/generated/torch.nn.CrossEntropyLoss.html
# For each time-step of each sequence of the batch
# we have N_sampled possible predictions.
# Looking at the documentation of torch.nn.CrossEntropyLoss
# we can see that this loss expect a tensor of size M x C where
# - M is the number of elements with a classification score
# - C is the number of possible classes
            # There are N_sampled candidates for each prediction, so
# C = N_sampled
# Each timestep of each sequence of the batch has a prediction so
# M = B * S_small
# Thus we need an input vector of size B * S_small, N_sampled
# To begin, we need to permute the axis
scores = scores.permute(0, 2, 1) # Now it has size B , S_small, N_sampled
# Then we can cast it into a 2D tensor
scores = scores.reshape(B * S_small, N_sampled)
# Same thing for the labels
labels = labels.reshape(B * S_small)
# Finally we can get the classification loss
loss_criterion = torch.nn.CrossEntropyLoss()
loss_k = loss_criterion(scores, labels)
avg_loss+= loss_k
            # And for the accuracy:
            # the prediction for each element is the sample with the highest score,
            # so the tensor of all predictions holds the index of the maximal
            # score for each time-step of each sequence of the batch
predictions = torch.argmax(scores, 1)
            acc_k = (labels == predictions).float().sum() / (B * S_small)
avg_acc += acc_k
# Normalization
avg_loss = avg_loss / self.K_
avg_acc = avg_acc / self.K_
return avg_loss , avg_acc
# + [markdown] colab_type="text" id="0cqGXhLf-_O1"
# Don't forget to test !
# + colab_type="code" id="sYJSMh5I_QCf" colab={"base_uri": "https://localhost:8080/", "height": 55} outputId="d41a2edb-bdf1-4333-c723-3075734e4e7b"
audio = torchaudio.load("/content/drive/My Drive/DATA/train/recording1/200625-153409_yor_af2_elicit_1.wav")[0]
audio = audio.view(1, 1, -1)
cpc_criterion = CPCCriterion(N_PREDICTIONS, DIM_CONTEXT,
DIM_ENCODER, N_NEGATIVE_SAMPLE).to(device)
context_output, encoder_output = cpc_model(audio.to(device))
loss, avg = cpc_criterion(encoder_output,context_output)
# + [markdown] colab_type="text" id="zLv7GoE0_4C3"
# ## Exercise 3: Full training loop !
#
# You have the model, you have the criterion. All you need now are a data loader and an optimizer to run your training loop.
#
# We will use an Adam optimizer:
# + colab_type="code" id="zcg29tqPAIR1" colab={}
parameters = list(cpc_criterion.parameters()) + list(cpc_model.parameters())
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
# + [markdown] colab_type="text" id="hETDwSfTAuF4"
# And as far as the data loader is concerned, we will rely on the data loader provided by the CPC_audio library.
# + id="KHs9nFENUk6K" colab_type="code" colab={}
# + colab_type="code" id="O6KES4RXA0tU" colab={"base_uri": "https://localhost:8080/", "height": 345} outputId="8b0de4b5-dbbf-4088-cb05-2561479d13b5"
dataset_train = load_dataset('/content/drive/My Drive/DATA/train')
dataset_val = load_dataset('/content/drive/My Drive/DATA/validation')
data_loader_train = dataset_train.getDataLoader(BATCH_SIZE, "speaker", True)
data_loader_val = dataset_val.getDataLoader(BATCH_SIZE, "sequence", False)
# + [markdown] colab_type="text" id="uCoMCPL0A8VI"
# Now that everything is ready, complete and test the ```train_step``` function below which trains the model for one epoch.
# + id="lQcFKCi84cGQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 128} outputId="44f7f0ab-eec6-4420-9755-df6d25deb9ec"
from google.colab import drive
drive.mount('/content/drive')
# + colab_type="code" id="U-hIH3p8BsZr" colab={}
def train_step(data_loader,
cpc_model,
cpc_criterion,
optimizer):
avg_loss = 0
avg_acc = 0
n_items = 0
for step, data in enumerate(data_loader):
x,y = data
bs = len(x)
optimizer.zero_grad()
context_output, encoder_output = cpc_model(x.to(device))
loss , acc = cpc_criterion(encoder_output, context_output)
        loss.backward()
        optimizer.step()
        n_items+=bs
avg_loss+=loss.item()*bs
avg_acc +=acc.item()*bs
avg_loss/=n_items
avg_acc/=n_items
return avg_loss, avg_acc
# + [markdown] colab_type="text" id="rIj06giJE_bZ"
# ## Exercise 4 : Validation loop
#
# Now complete the validation loop.
# + colab_type="code" id="Q7Qwi6jDByyt" colab={}
def validation_step(data_loader,
cpc_model,
cpc_criterion):
avg_loss = 0
avg_acc = 0
n_items = 0
for step, data in enumerate(data_loader):
x,y = data
bs = len(x)
context_output, encoder_output = cpc_model(x.to(device))
loss , acc = cpc_criterion(encoder_output, context_output)
n_items+=bs
avg_loss+=loss.item()*bs
avg_acc+=acc.item()*bs
avg_loss/=n_items
avg_acc/=n_items
return avg_loss, avg_acc
# + [markdown] colab_type="text" id="OBVUPKKs2_0U"
# ## Exercise 5: Run everything
# + colab_type="code" id="ZbXsZIRiB1tm" colab={}
def run(train_loader,
val_loader,
cpc_model,
cpc_criterion,
optimizer,
n_epochs):
for epoch in range(n_epochs):
print(f"Running epoch {epoch+1} / {n_epochs}")
avg_loss_train, avg_acc_train = train_step(train_loader, cpc_model, cpc_criterion, optimizer)
print("----------------------")
print(f"Training dataset")
print(f"- average loss : {avg_loss_train}")
        print(f"- average accuracy : {avg_acc_train}")
print("----------------------")
with torch.no_grad():
cpc_model.eval()
cpc_criterion.eval()
avg_loss_val, avg_acc_val = validation_step(val_loader, cpc_model, cpc_criterion)
print(f"Validation dataset")
print(f"- average loss : {avg_loss_val}")
            print(f"- average accuracy : {avg_acc_val}")
print("----------------------")
print()
cpc_model.train()
cpc_criterion.train()
# + colab_type="code" id="5xx8vN2wpECC" colab={"base_uri": "https://localhost:8080/", "height": 217} outputId="5c00fdb8-bcc5-4a64-ee67-72e158fcbd30"
# validation_loss, validation accuracy, train_loss and train_accuracy for one_epoch
run(data_loader_train, data_loader_val, cpc_model,cpc_criterion,optimizer,1)
# + [markdown] colab_type="text" id="5hoT3_3W6HYY"
# Once everything is done, clear the memory.
# + colab_type="code" id="fU5mDOY46KSG" colab={}
del dataset_train
del dataset_val
del cpc_model
del context
del encoder
# + [markdown] colab_type="text" id="srPM5r_LB9v-"
# # Part 2 : Fine tuning
# + [markdown] colab_type="text" id="0Nb_-0IQiJk9"
# ## Exercise 1 : Phone separability with aligned phonemes.
#
# One option to evaluate the quality of the features trained with CPC can be to check if they can be used to recognize phonemes.
# To do so, we can fine-tune a pre-trained model using a limited amount of labelled speech data.
# We are going to start with a simple evaluation setting where we have the phone labels for each timestep corresponding to a CPC feature.
#
# We will work with a model already pre-trained on English data. As far as the fine-tuning dataset is concerned, we will use a 1h subset of [librispeech-100](http://www.openslr.org/12/).
# + colab_type="code" id="N-scDMAasXxc" colab={"base_uri": "https://localhost:8080/", "height": 692} outputId="b897424e-d5f4-4c67-8a4a-ebc4cc81aeb3"
# !mkdir checkpoint_data
# !wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_30.pt -P checkpoint_data
# !wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_logs.json -P checkpoint_data
# !wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_args.json -P checkpoint_data
# !ls checkpoint_data
# + colab_type="code" id="SSSaiYo82_oY" colab={"base_uri": "https://localhost:8080/", "height": 399} outputId="a31272d1-9fe1-4820-9b08-c1c23f679613"
# %cd /content/CPC_audio
from cpc.dataset import parseSeqLabels
from cpc.feature_loader import loadModel
checkpoint_path = 'checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
label_dict, N_PHONES = parseSeqLabels('/content/drive/My Drive/DATA/all_sessions.txt')
dataset_train = load_dataset('/content/drive/My Drive/DATA/train', file_extension='.wav', phone_label_dict=label_dict)
dataset_val = load_dataset('/content/drive/My Drive/DATA/validation', file_extension='.wav', phone_label_dict=label_dict)
data_loader_train = dataset_train.getDataLoader(BATCH_SIZE, "speaker", True)
data_loader_val = dataset_val.getDataLoader(BATCH_SIZE, "sequence", False)
# + colab_type="code" id="3ulgYV3nHcoa" colab={"base_uri": "https://localhost:8080/", "height": 326} outputId="afd2c152-6a12-44fb-e4ad-5b55b8b8c0f3"
cpc_model
# + [markdown] colab_type="text" id="xkKi-qfosng2"
# Then we will use a simple linear classifier to recognize the phonemes from the features produced by ```cpc_model```.
#
# ### a) Build the phone classifier
#
# Design a linear classifier class, ```PhoneClassifier```, that takes as input a batch of sequences of CPC features and outputs a score vector for each phoneme.
# + colab_type="code" id="4RpAbz-0CXJJ" colab={}
class PhoneClassifier(torch.nn.Module):
def __init__(self,
input_dim : int,
n_phones : int):
super(PhoneClassifier, self).__init__()
self.linear = torch.nn.Linear(input_dim, n_phones)
def forward(self, x):
return self.linear(x)
# + [markdown] colab_type="text" id="Zt5oa_nqtH-d"
# Our phone classifier will then be:
# + colab_type="code" id="NRBf_83IuLv5" colab={}
phone_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_PHONES).to(device)
# + [markdown] colab_type="text" id="z_Vf5AbUhqm4"
# ### b) What would be the correct loss criterion for this task?
#
#
# + colab_type="code" id="uhyPM-cgjrtw" colab={}
loss_criterion = torch.nn.CrossEntropyLoss()
# + [markdown] colab_type="text" id="Nv4cSxbaplrz"
# To perform the fine-tuning, we will also need an optimization function.
#
# We will use an [Adam optimizer ](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam).
# + colab_type="code" id="W5CgYyAlqKxu" colab={}
parameters = list(phone_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
# + [markdown] colab_type="text" id="qQB9HS9PvAXc"
# You might also want to perform this training while freezing the weights of the ```cpc_model```. Indeed, if the pre-training was good enough, then the ```cpc_model``` phoneme representations should be linearly separable. In this case the optimizer should be defined like this:
# + colab_type="code" id="nRy0gn6awGUQ" colab={}
optimizer_frozen = torch.optim.Adam(list(phone_classifier.parameters()), lr=LEARNING_RATE)
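Note that passing only the classifier's parameters to the optimizer keeps `cpc_model` fixed, but autograd still computes gradients for it by default. A common alternative is to disable those gradients explicitly; here is a sketch with small stand-in modules (not the notebook's actual models):

```python
import torch

backbone = torch.nn.Linear(4, 4)   # stand-in for the pre-trained cpc_model
head = torch.nn.Linear(4, 2)       # stand-in for the phone classifier

for p in backbone.parameters():
    p.requires_grad = False        # frozen: autograd skips these weights entirely

opt = torch.optim.Adam(head.parameters(), lr=2e-4)
loss = head(backbone(torch.randn(3, 4))).pow(2).sum()
loss.backward()                    # only the head accumulates gradients
opt.step()
```

Beyond guaranteeing the backbone never moves, this also saves the memory and compute spent on its gradients.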
# + [markdown] colab_type="text" id="cO93ngIfj4JW"
# ### c) Now let's build a training loop.
# Complete the function ```train_one_epoch``` below.
#
#
# + colab_type="code" id="fabqj3wvLwgU" colab={}
def train_one_epoch(cpc_model,
phone_classifier,
loss_criterion,
data_loader,
optimizer):
cpc_model.train()
loss_criterion.train()
avg_loss = 0
avg_accuracy = 0
n_items = 0
for step, full_data in enumerate(data_loader):
# Each batch is represented by a Tuple of vectors:
# sequence of size : N x 1 x T
# label of size : N x T
#
# With :
        # - N number of sequences in the batch
# - T size of each sequence
sequence, label = full_data
bs = len(sequence)
seq_len = label.size(1)
optimizer.zero_grad()
context_out, enc_out, _ = cpc_model(sequence.to(device),label.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(0,2,1)
loss = loss_criterion(scores,label.to(device))
loss.backward()
optimizer.step()
avg_loss+=loss.item()*bs
n_items+=bs
correct_labels = scores.argmax(1)
avg_accuracy += ((label==correct_labels.cpu()).float()).mean(1).sum().item()
avg_loss/=n_items
avg_accuracy/=n_items
return avg_loss, avg_accuracy
# + [markdown] colab_type="text" id="quYtjx_TxIPK"
# Don't forget to test it !
# + colab_type="code" id="50MwxbKhxMKp" colab={}
avg_loss_train, avg_accuracy_train = train_one_epoch(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer_frozen)
# + colab_type="code" id="_o6yk8XKWnYe" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="963b8f22-cc49-4966-f9c1-0712797283fc"
avg_loss_train, avg_accuracy_train
# + [markdown] colab_type="text" id="EmUkuJ2bwu4Z"
# ### d) Build the validation loop
# + colab_type="code" id="kZJMxj6cwzd3" colab={}
def validation_step(cpc_model,
phone_classifier,
loss_criterion,
data_loader):
cpc_model.eval()
phone_classifier.eval()
avg_loss = 0
avg_accuracy = 0
n_items = 0
with torch.no_grad():
for step, full_data in enumerate(data_loader):
# Each batch is represented by a Tuple of vectors:
# sequence of size : N x 1 x T
# label of size : N x T
#
# With :
            # - N number of sequences in the batch
# - T size of each sequence
sequence, label = full_data
bs = len(sequence)
seq_len = label.size(1)
context_out, enc_out, _ = cpc_model(sequence.to(device),label.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(0,2,1)
loss = loss_criterion(scores,label.to(device))
avg_loss+=loss.item()*bs
n_items+=bs
correct_labels = scores.argmax(1)
avg_accuracy += ((label==correct_labels.cpu()).float()).mean(1).sum().item()
avg_loss/=n_items
avg_accuracy/=n_items
return avg_loss, avg_accuracy
# + id="p5qhmP-dpDCE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="7dd287ed-bd85-4504-c25f-20fec8a2c61d"
avg_loss_val, avg_accuracy_val = validation_step(cpc_model, phone_classifier, loss_criterion, data_loader_val)
avg_loss_val, avg_accuracy_val
# + [markdown] colab_type="text" id="vownVCt7xbVh"
# ### e) Run everything
#
# Test this function with both ```optimizer``` and ```optimizer_frozen```.
# + colab_type="code" id="xvO_4nKUxfQx" colab={}
def run(cpc_model,
phone_classifier,
loss_criterion,
data_loader_train,
data_loader_val,
optimizer,
n_epoch):
train_losses = []
val_losses = []
train_accuracy =[]
val_accuracy = []
for epoch in range(n_epoch):
print(f"Running epoch {epoch + 1} / {n_epoch}")
loss_train, acc_train = train_one_epoch(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer)
print("-------------------")
print(f"Training dataset :")
print(f"Average loss : {loss_train}. Average accuracy {acc_train}")
train_losses.append(loss_train)
train_accuracy.append(acc_train)
print("-------------------")
print("Validation dataset")
loss_val, acc_val = validation_step(cpc_model, phone_classifier, loss_criterion, data_loader_val)
print(f"Average loss : {loss_val}. Average accuracy {acc_val}")
print("-------------------")
print()
val_losses.append(loss_val)
val_accuracy.append(acc_val)
plt.plot(train_losses)
plt.plot(val_losses)
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'Validation'], loc='upper left')
plt.show()
plt.plot(train_accuracy)
plt.plot(val_accuracy )
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'Validation'], loc='upper left')
plt.show()
# + id="iinqtl4trIyx" colab_type="code" colab={}
import matplotlib.pyplot as plt
# + colab_type="code" id="ceCEO2h2bxAn" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="eb33c194-719b-409f-af06-63f5ae6c5f9c"
run(cpc_model,phone_classifier,loss_criterion,data_loader_train,data_loader_val,optimizer_frozen,n_epoch=70)
# + [markdown] colab_type="text" id="TdfWDiFnylMT"
# ## Exercise 2 : Phone separability without alignment (PER)
#
# Aligned data are very practical, but in real life they are rarely available. That's why in this exercise we will consider fine-tuning with non-aligned phonemes.
#
# The model, the optimizer and the phone classifier will stay the same. However, we will replace our phone criterion with a [CTC loss](https://pytorch.org/docs/master/generated/torch.nn.CTCLoss.html).
# + colab_type="code" id="_9BpM_Lpzgx8" colab={}
loss_ctc = torch.nn.CTCLoss()
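To see what the CTC loss marginalises over, here is a tiny forward-pass computation in plain Python. It is a sketch of the standard CTC recursion on made-up frame probabilities with blank index 0; `torch.nn.CTCLoss` computes the same quantity in log-space and batched:

```python
import math

def ctc_neg_log_likelihood(probs, target, blank=0):
    """probs[t][c]: per-frame class probabilities; target: label sequence.
    Sums the probability of every frame alignment that collapses to `target`."""
    ext = [blank]                           # interleave blanks: b, l1, b, l2, b, ...
    for l in target:
        ext += [l, blank]
    S, T = len(ext), len(probs)
    alpha = [[0.0] * S for _ in range(T)]   # alpha[t][s]: prob of prefixes ending at ext[s]
    alpha[0][0] = probs[0][blank]
    if S > 1:
        alpha[0][1] = probs[0][ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1][s]
            if s > 0:
                a += alpha[t - 1][s - 1]
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1][s - 2]    # skip the blank between two distinct labels
            alpha[t][s] = a * probs[t][ext[s]]
    return -math.log(alpha[T - 1][S - 1] + alpha[T - 1][S - 2])

# Two frames, one target label: the alignments (blank,1), (1,blank) and (1,1)
# each have probability 0.25, so the loss is -log(0.75).
loss = ctc_neg_log_likelihood([[0.5, 0.5], [0.5, 0.5]], [1])
```

Because every valid alignment contributes, no frame-level labels are needed — exactly why CTC fits the non-aligned setting of this exercise.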
# + [markdown] colab_type="text" id="AQpYgTyfzsrq"
# Besides, we will use a slightly different dataset class.
# + colab_type="code" id="9HRxoatlz3ZZ" colab={"base_uri": "https://localhost:8080/", "height": 290} outputId="36576546-0906-4ef7-eec1-bb3a96b19450"
# %cd /content/CPC_audio
from cpc.eval.common_voices_eval import SingleSequenceDataset, parseSeqLabels, findAllSeqs
path_train_data_per = '/content/drive/My Drive/DATA/train'
path_val_data_per = '/content/drive/My Drive/DATA/validation'
path_phone_data_per = '/content/drive/My Drive/DATA/all_sessions.txt'
BATCH_SIZE=8
phone_labels, N_PHONES = parseSeqLabels(path_phone_data_per)
data_train_per, _ = findAllSeqs(path_train_data_per, extension='.wav')
dataset_train_non_aligned = SingleSequenceDataset(path_train_data_per, data_train_per, phone_labels)
data_loader_train = torch.utils.data.DataLoader(dataset_train_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
data_val_per, _ = findAllSeqs(path_val_data_per, extension='.wav')
dataset_val_non_aligned = SingleSequenceDataset(path_val_data_per, data_val_per, phone_labels)
data_loader_val = torch.utils.data.DataLoader(dataset_val_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
# + [markdown] colab_type="text" id="GwAckY62z7s9"
# ### a- Training
#
# Since the phonemes are not aligned, there is no simple direct way to get the classification accuracy of a model. Write and test the three functions ```train_one_epoch_ctc```, ```validation_step_ctc``` and ```run_ctc``` as before, but without computing the average accuracy of the model.
# + colab_type="code" id="oYg5YzW8EHl4" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="5f6ead35-d20d-4a4c-a540-325a741c2568"
from cpc.feature_loader import loadModel
checkpoint_path = 'checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
phone_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_PHONES).to(device)
# + colab_type="code" id="CFQ2g3PjErdZ" colab={}
parameters = list(phone_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
optimizer_frozen = torch.optim.Adam(list(phone_classifier.parameters()), lr=LEARNING_RATE)
# + colab_type="code" id="Zsgjv3cD0oqD" colab={}
import torch.nn.functional as F
def train_one_epoch_ctc(cpc_model,
phone_classifier,
loss_criterion,
data_loader,
optimizer):
cpc_model.train()
loss_criterion.train()
avg_loss = 0
avg_accuracy = 0
n_items = 0
for step, full_data in enumerate(data_loader):
x, x_len, y, y_len = full_data
x_batch_len = x.shape[-1]
x, y = x.to(device), y.to(device)
bs=x.size(0)
optimizer.zero_grad()
context_out, enc_out, _ = cpc_model(x.to(device),y.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(1,0,2)
scores = F.log_softmax(scores,2)
yhat_len = torch.tensor([int(scores.shape[0]*x_len[i]/x_batch_len) for i in range(scores.shape[1])]) # this is an approximation, should be good enough
loss = loss_criterion(scores,y.to(device),yhat_len,y_len)
loss.backward()
optimizer.step()
avg_loss+=loss.item()*bs
n_items+=bs
avg_loss/=n_items
return avg_loss
def validation_step(cpc_model,
phone_classifier,
loss_criterion,
data_loader):
cpc_model.eval()
phone_classifier.eval()
avg_loss = 0
avg_accuracy = 0
n_items = 0
with torch.no_grad():
for step, full_data in enumerate(data_loader):
x, x_len, y, y_len = full_data
x_batch_len = x.shape[-1]
x, y = x.to(device), y.to(device)
bs=x.size(0)
context_out, enc_out, _ = cpc_model(x.to(device),y.to(device))
scores = phone_classifier(context_out)
scores = scores.permute(1,0,2)
scores = F.log_softmax(scores,2)
yhat_len = torch.tensor([int(scores.shape[0]*x_len[i]/x_batch_len) for i in range(scores.shape[1])]) # this is an approximation, should be good enough
loss = loss_criterion(scores,y.to(device),yhat_len,y_len)
avg_loss+=loss.item()*bs
n_items+=bs
avg_loss/=n_items
return avg_loss
def run_ctc(cpc_model,
phone_classifier,
loss_criterion,
data_loader_train,
data_loader_val,
optimizer,
n_epoch):
train_losses = []
val_losses = []
for epoch in range(n_epoch):
print(f"Running epoch {epoch + 1} / {n_epoch}")
loss_train = train_one_epoch_ctc(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer)
#train_losses = np.append(loss_train, train_losses)
print("-------------------")
print(f"Training dataset :")
print(f"Average loss : {loss_train}.")
train_losses.append(loss_train)
print("-------------------")
print("Validation dataset")
loss_val = validation_step(cpc_model, phone_classifier, loss_criterion, data_loader_val)
#val_losses = np.append(loss_val, val_losses)
print(f"Average loss : {loss_val}")
print("-------------------")
print()
val_losses.append(loss_val)
plt.plot(train_losses)
plt.plot(val_losses)
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
    plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
return train_losses, val_losses
# + colab_type="code" id="GSr7tcUdD72c" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="5738a0b2-a893-4bfb-d0a0-18b9ff571894"
#import numpy as np
train_losses, val_losses=run_ctc(cpc_model,phone_classifier,loss_ctc,data_loader_train,data_loader_val,optimizer_frozen,n_epoch=70)
# + [markdown] colab_type="text" id="TKrYW4gK1BBF"
# ### b- Evaluation: the Phone Error Rate (PER)
#
# In order to compute the similarity between two sequences, we can use the [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance). This distance is the minimum number of insertions, deletions and substitutions needed to transform one sequence into the other. If we normalize this distance by the number of characters in the reference sequence, we get the Phone Error Rate (PER).
#
# This value can be interpreted as :
# \\[ PER = \frac{S + D + I}{N} \\]
#
# Where:
#
#
# * N is the number of characters in the reference
# * S is the number of substitutions
# * I is the number of insertions
# * D is the number of deletions
#
# For the best possible alignment of the two sequences.
#
#
# + colab_type="code" id="RoBhsx7GNqI_" colab={}
import numpy as np
def get_PER_sequence(ref_seq, target_seq):
    n = len(ref_seq)
    m = len(target_seq)
    # D[i, j] = edit distance between ref_seq[:i] and target_seq[:j]
    D = np.zeros((n + 1, m + 1))
    for i in range(1, n + 1):
        D[i, 0] = D[i - 1, 0] + 1
    for j in range(1, m + 1):
        D[0, j] = D[0, j - 1] + 1
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref_seq[i - 1] == target_seq[j - 1] else 1
            D[i, j] = min(D[i - 1, j] + 1,        # deletion
                          D[i, j - 1] + 1,        # insertion
                          D[i - 1, j - 1] + cost)  # match / substitution
    return D[n, m] / len(ref_seq)
# + [markdown] colab_type="text" id="r-hr0KK0mgcR"
# You can test your function below:
# + colab_type="code" id="AfTb3yOQmvey" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="dc9f2a3c-895c-4888-ba28-3f718a51d0c1"
ref_seq = [0, 1, 1, 2, 0, 2, 2]
pred_seq = [1, 1, 2, 2, 0, 0]
expected_PER = 4. / 7.
print(get_PER_sequence(ref_seq, pred_seq) == expected_PER)
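# As an independent sanity check (a standalone re-implementation, not the function above), a classic row-by-row dynamic program gives the same distance and PER:

```python
def levenshtein(a, b):
    # Edit distance with unit costs for insertion, deletion and substitution
    prev = list(range(len(b) + 1))
    for i, ai in enumerate(a, start=1):
        curr = [i]
        for j, bj in enumerate(b, start=1):
            cost = 0 if ai == bj else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # match / substitution
        prev = curr
    return prev[-1]

ref_seq = [0, 1, 1, 2, 0, 2, 2]
pred_seq = [1, 1, 2, 2, 0, 0]
per = levenshtein(ref_seq, pred_seq) / len(ref_seq)
print(per)  # 4/7
```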
# + [markdown] colab_type="text" id="nHiyChl-m_k7"
# ### c- Evaluating the PER of your model on the test dataset
#
# Evaluate the PER on the validation dataset. Please notice that you should usually use a separate dataset, called the dev dataset, to perform this operation. However for the sake of simplicity we will work with validation data in this exercise.
# + colab_type="code" id="DMkX0PoFnclg" colab={}
import progressbar
from multiprocessing import Pool
def cut_data(seq, sizeSeq):
maxSeq = sizeSeq.max()
return seq[:, :maxSeq]
def prepare_data(data):
seq, sizeSeq, phone, sizePhone = data
seq = seq.cuda()
phone = phone.cuda()
sizeSeq = sizeSeq.cuda().view(-1)
sizePhone = sizePhone.cuda().view(-1)
seq = cut_data(seq.permute(0, 2, 1), sizeSeq).permute(0, 2, 1)
return seq, sizeSeq, phone, sizePhone
def get_per(test_dataloader,
cpc_model,
phone_classifier):
downsampling_factor = 160
cpc_model.eval()
phone_classifier.eval()
avgPER = 0
nItems = 0
    print("Starting the PER computation (greedy decoding)")
bar = progressbar.ProgressBar(maxval=len(test_dataloader))
bar.start()
for index, data in enumerate(test_dataloader):
bar.update(index)
with torch.no_grad():
seq, sizeSeq, phone, sizePhone = prepare_data(data)
c_feature, _, _ = cpc_model(seq.to(device),phone.to(device))
sizeSeq = sizeSeq / downsampling_factor
predictions = torch.nn.functional.softmax(
phone_classifier(c_feature), dim=2).cpu()
phone = phone.cpu()
sizeSeq = sizeSeq.cpu()
sizePhone = sizePhone.cpu()
bs = c_feature.size(0)
data_per = [(predictions[b].argmax(1), phone[b]) for b in range(bs)]
# data_per = [(predictions[b], sizeSeq[b], phone[b], sizePhone[b],
# "criterion.module.BLANK_LABEL") for b in range(bs)]
with Pool(bs) as p:
poolData = p.starmap(get_PER_sequence, data_per)
avgPER += sum([x for x in poolData])
nItems += len(poolData)
bar.finish()
avgPER /= nItems
print(f"Average PER {avgPER}")
return avgPER
# + colab_type="code" id="2hvnudh4Osb4" colab={"base_uri": "https://localhost:8080/", "height": 90} outputId="c4c592b4-c9b6-4a83-d88b-9687a369d0c8"
PER_train_data =get_per(data_loader_val,cpc_model,phone_classifier)
print('PER_train_data', PER_train_data)
# + id="RBC7FZmTD53L" colab_type="code" colab={}
# + id="d6beoBQ-D5XI" colab_type="code" colab={}
# !mkdir '/content/drive/My Drive/DATA/test_tmp'
# + id="DwQv7Y0vCr8a" colab_type="code" colab={}
# !cp -r '/content/drive/My Drive/DATA/test/recording' '/content/drive/My Drive/DATA/test_tmp'
# + [markdown] colab_type="text" id="p8e9D7g8159k"
# ## Exercise 3 : Character Error Rate (CER)
#
# The Character Error Rate (CER) is an evaluation metric similar to the PER, but computed with characters instead of phonemes. Using the following data, run the functions you defined previously to estimate the CER of your model after fine-tuning.
# + colab_type="code" id="cXONmKQOuFSn" colab={"base_uri": "https://localhost:8080/", "height": 290} outputId="2de4cde7-106b-4d04-99c1-350c246e11d4"
# Load a dataset labelled with the letters of each sequence.
# %cd /content/CPC_audio
from cpc.eval.common_voices_eval import SingleSequenceDataset, parseSeqLabels, findAllSeqs
path_train_data_cer = '/content/drive/My Drive/DATA/train'
path_val_data_cer = '/content/drive/My Drive/DATA/validation'
path_letter_data_cer = '/content/drive/My Drive/DATA/all_sessions.txt'
BATCH_SIZE=8
letters_labels, N_LETTERS = parseSeqLabels(path_letter_data_cer)
data_train_cer, _ = findAllSeqs(path_train_data_cer, extension='.wav')
dataset_train_non_aligned = SingleSequenceDataset(path_train_data_cer, data_train_cer, letters_labels)
data_val_cer, _ = findAllSeqs(path_val_data_cer, extension='.wav')
dataset_val_non_aligned = SingleSequenceDataset(path_val_data_cer, data_val_cer, letters_labels)
# The data loader will generate a tuple of tensors data, labels for each batch
# data : size N x T1 x 1 : the audio sequence
# label : size N x T2 the sequence of letters corresponding to the audio data
# IMPORTANT NOTE: just like the PER the CER is computed with non-aligned phone data.
data_loader_train_letters = torch.utils.data.DataLoader(dataset_train_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
data_loader_val_letters = torch.utils.data.DataLoader(dataset_val_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
# + colab_type="code" id="9h07zI2LjzAU" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="3f6b5cf5-1930-4089-c8e6-c2b1e0a5e33f"
from cpc.feature_loader import loadModel
checkpoint_path = 'checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
character_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_LETTERS).to(device)
# + colab_type="code" id="rHCNg1E7lW1L" colab={}
parameters = list(character_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
optimizer_frozen = torch.optim.Adam(list(character_classifier.parameters()), lr=LEARNING_RATE)
# + colab_type="code" id="engpkljbk9hj" colab={}
loss_ctc = torch.nn.CTCLoss()
# + id="r0cTw86UR-nQ" colab_type="code" colab={}
import matplotlib.pyplot as plt
# + id="aS5acroeBwlP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="3655d478-e163-46e9-cd65-8acfcbc7fba8"
run_ctc(cpc_model,character_classifier,loss_ctc,data_loader_train_letters,data_loader_val_letters,optimizer_frozen,n_epoch=50)
# + colab_type="code" id="A8oxFr1jm17P" colab={"base_uri": "https://localhost:8080/", "height": 90} outputId="985142b1-448e-46fa-e1d4-a3d94803b0ed"
CER_train_data= get_per(data_loader_val_letters,cpc_model,character_classifier)
print('CER_train_data', CER_train_data)
# + id="dj87qBbW9vEz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 55} outputId="f7f5ce89-b355-4394-85d9-646597eaf9db"
letters_labels.keys()
# + id="cNSHI_nf-8Fx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 145} outputId="fdcb654f-afe4-4d6e-e872-d5e5127c041f"
path_test_data_cer = '/content/drive/My Drive/DATA/test_tmp'
data_test_cer, _ = findAllSeqs(path_test_data_cer, extension='.wav')
dataset_test_non_aligned = SingleSequenceDataset(path_test_data_cer, data_test_cer, letters_labels)
data_loader_test_letters = torch.utils.data.DataLoader(dataset_test_non_aligned, batch_size=BATCH_SIZE, shuffle=True)
# + id="TyeN3uRs8v2B" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 163} outputId="1089d92f-345f-489f-9a1f-58cfc738f570"
from cpc.eval.common_voices_eval import SingleSequenceDataset, parseSeqLabels, findAllSeqs
path_test_data_cer = '/content/drive/My Drive/DATA/test_tmp'
path_letter_data_cer = '/content/drive/My Drive/DATA/all_sessions.txt'
BATCH_SIZE=8
letters_labels, N_LETTERS = parseSeqLabels(path_letter_data_cer)
data_test_cer, _ = findAllSeqs(path_test_data_cer, extension='.wav')
dataset_test_non_aligned = SingleSequenceDataset(path_test_data_cer, data_test_cer, letters_labels)
# The data loader will generate a tuple of tensors data, labels for each batch
# data : size N x T1 x 1 : the audio sequence
# label : size N x T2 the sequence of letters corresponding to the audio data
# IMPORTANT NOTE: just like the PER the CER is computed with non-aligned phone data.
data_loader_test_letters = torch.utils.data.DataLoader(dataset_test_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
loss = validation_step(cpc_model, character_classifier, loss_ctc, data_loader_test_letters)
print(f"Average loss for test data : {loss}")
# + id="2PGOstQmHBUH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 108} outputId="d6295557-3f09-482e-f945-13e7a71b972e"
CER_test_data = get_per(data_loader_test_letters, cpc_model, character_classifier)
print('CER_test_data', CER_test_data)
| Esther_Oduntan_Text_To _Speech_Project.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/desihighStudent/desihigh/blob/main/Colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="786f4d66"
from PIL import Image
# + [markdown] id="79ddf98a"
# # Welcome to DESI High @ Google Colab!
# + id="7e4883c3" outputId="67781c91-e957-4ba8-9267-c8897effe8e2" colab={"base_uri": "https://localhost:8080/", "height": 288}
Image.open("desihigh/images/colab.webp")
# + [markdown] id="0f903986"
# An exciting adventure awaits you as you will soon have at your fingertips hot-off-the-telescope DESI data to run your own experiments.
# + [markdown] id="fe224eea"
# For you to create and save your own experiments, we'll run everything on Google Colab. This will leave you with a copy of any additions on your Google Drive.
# + [markdown] id="adb458e0"
# We'll need the starter script, downloading from the DESI High repo:
# + id="746096fd" outputId="d50f4a81-8cfb-4048-f49d-c2e5e8285981" colab={"base_uri": "https://localhost:8080/"}
# ! wget -O colab.py https://raw.githubusercontent.com/michaelJwilson/desihigh/main/colab.py
# + id="yyRARIhHOC1A"
# ! mv /content/drive/MyDrive/desihigh /content/drive/MyDrive/_desihigh
# + [markdown] id="0e65366d"
# So, without further ado, the magic incantation:
# + id="2d9fe293" outputId="9e663d8e-3144-4407-b032-b8e6f5024f91" colab={"base_uri": "https://localhost:8080/"}
import colab
# + [markdown] id="44a7ba81"
# Congratulations, you've just loaded your first Python library! And it's a useful one as well.
# + [markdown] id="974e9f37"
# Now, we'll save this notebook to your drive. You'll want to do this at the end of every notebook you've changed.
# + [markdown] id="N7R3dbgpOc-d"
# This is my first notebook
# + [markdown] id="b3a627e5"
# Now, we can get started with the real thing. Buckle up!
# + [markdown] id="5f541ea9"
# [https://www.githubtocolab.com/michaelJwilson/desihigh](https://www.githubtocolab.com/michaelJwilson/desihigh)
| Colab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_spatialvi)
# language: python
# name: conda_spatialvi
# ---
# +
import scanpy as sc
import numpy as np
import pandas as pd
import os
import scipy.io
import seaborn as sb
import anndata
import matplotlib.pyplot as plt
# %matplotlib inline
save_path = "/home/ubuntu/mouse_lymph_node/nova/"
# -
nova_data = sc.read_h5ad("nova_processed_data.h5ad")
# ## visualize the data
fig, ax = plt.subplots(figsize=(20, 20))
sc.pl.umap(
nova_data,
color="cell_types",
frameon=False,
title="",
legend_loc="on data",
legend_fontsize="small",
ax=ax, size=10
)
# ### Perform more in depth clustering (this must be fine tuned for every run)
sc.tl.louvain(nova_data, restrict_to = ("cell_types", ["B cells"]), key_added="louvain_sub", resolution=0.7, random_state=0)
sc.tl.louvain(nova_data, restrict_to = ("louvain_sub", ["Monocytes"]), key_added="louvain_sub_1", resolution=0.7, random_state=0)
fig, ax = plt.subplots(figsize=(10, 10))
sc.pl.umap(
nova_data,
color="louvain_sub_1",
legend_fontsize="small",
ax=ax, size=10
)
def study_labels_heterogeneity(key_1, key_2):
plt.figure(figsize=(10, 10))
profile = pd.DataFrame(data=nova_data.obs[[key_1, key_2]])
x_list = np.unique(nova_data.obs[key_1])
y_list = np.unique(nova_data.obs[key_2])
x_n = len(x_list)
y_n = len(y_list)
proportion = np.zeros(shape=(x_n, y_n))
for i, x in enumerate(x_list):
for j, y in enumerate(y_list):
proportion[i, j] = np.sum(profile[profile[key_1] == x][key_2] == y)
proportion /= np.sum(proportion, axis=1)[:, np.newaxis]
plt.imshow(proportion.T)
plt.colorbar()
plt.xticks(range(x_n), x_list, rotation=35)
plt.yticks(range(y_n), y_list, rotation=35)
plt.xlabel(key_1)
plt.ylabel(key_2)
plt.show()
for i, x in enumerate(x_list):
line_string = F'Cluster {x}: '
pile = []
        # push: collect cell types above the 5% threshold
for j, y in enumerate(y_list):
if proportion[i, j] > 0.05:
pile.append([y, proportion[i, j]])
        # pop: report them in decreasing order of proportion
for y, p in sorted(pile, key=lambda x:x[1])[::-1]:
line_string += F'{y} present at ratio {p:.2f}, '
print(line_string)
study_labels_heterogeneity("louvain_sub_1", "SCVI_pred_cell_types")
annotations = {"B cells,0": "Mature B cells",
"B cells,1": "Mature B cells",
"B cells,2": "Ifit3-high B cells",
"B cells,3": "Mature B cells",
"B cells,4": "Mature B cells",
"B cells,5": "Mature B cells",
"B cells,6": "Mature B cells",
"B cells,7": "Cycling B/T cells",
"Monocytes,0": "Ly6-high monocytes",
"Monocytes,1": "Cxcl9-high monocytes"
}
cell_types = []
for c in nova_data.obs["louvain_sub_1"]:
if c in annotations:
cell_types.append(annotations[c])
else:
cell_types.append(c)
nova_data.obs["cell_types"] = cell_types
fig, ax = plt.subplots(figsize=(20, 20))
sc.pl.umap(
nova_data,
color="cell_types",
frameon=False,
title="",
legend_loc="on data",
legend_fontsize="small",
ax=ax, size=10
)
fig.savefig("figures/UMAP_nova.pdf")
# +
mapping = {"Mature B cells": "B cells",
"Ifit3-high B cells": "B cells",
"Cycling B/T cells": "B cells",
"Plasma B cells": "NA",
"Neutrophils": "NA",
"Ly6-high monocytes": "Monocytes",
"Cxcl9-high monocytes": "Monocytes"}
res = []
for x in nova_data.obs["cell_types"]:
local = x
if x in mapping:
local = mapping[x]
res.append(local)
# res = [mapping[x] if x in mapping else x for x in nova_data.obs["cell_types"]]
nova_data.obs["broad_cell_types"] = res
nova_data = nova_data[nova_data.obs["broad_cell_types"] != "NA"].copy()
# -
sc.write("nova_final_data.h5ad", nova_data)
| lymph_node/nova/Step3: further clustering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Modeling and Simulation in Python
#
# Chapter 21
#
# Copyright 2017 <NAME>
#
# License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
#
# +
# Configure Jupyter so figures appear in the notebook
# %matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
# %config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
# -
# ### With air resistance
# Next we'll add air resistance using the [drag equation](https://en.wikipedia.org/wiki/Drag_equation)
# I'll start by getting the units we'll need from Pint.
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
# Now I'll create a `Params` object to contain the quantities we need. Using a Params object is convenient for grouping the system parameters in a way that's easy to read (and double-check).
params = Params(height = 381 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 2.5e-3 * kg,
diameter = 19e-3 * m,
rho = 1.2 * kg/m**3,
v_term = 18 * m / s)
# Now we can pass the `Params` object to `make_system`, which computes some additional parameters and defines `init`.
#
# `make_system` uses the given `diameter` to compute `area` and the given `v_term` to compute the drag coefficient `C_d`.
def make_system(params):
"""Makes a System object for the given conditions.
params: Params object
returns: System object
"""
unpack(params)
area = np.pi * (diameter/2)**2
C_d = 2 * mass * g / (rho * area * v_term**2)
init = State(y=height, v=v_init)
t_end = 30 * s
return System(params, area=area, C_d=C_d,
init=init, t_end=t_end)
# Let's make a `System`
system = make_system(params)
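# The formula for `C_d` comes from balancing drag against weight at terminal velocity, $m g = \frac{1}{2} \rho A C_d v_{term}^2$. A plain-float check (same numbers as `params`, units dropped):

```python
import numpy as np

mass, g, rho, v_term = 2.5e-3, 9.8, 1.2, 18.0
diameter = 19e-3

area = np.pi * (diameter / 2) ** 2
C_d = 2 * mass * g / (rho * area * v_term ** 2)

# At terminal velocity the drag force equals the weight exactly
f_drag = rho * v_term ** 2 * C_d * area / 2
print(C_d)  # roughly 0.44, a plausible drag coefficient for a tumbling penny
print(f_drag, mass * g)
```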
# Here's the slope function, including acceleration due to gravity and drag.
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object
returns: derivatives of y and v
"""
y, v = state
unpack(system)
f_drag = rho * v**2 * C_d * area / 2
a_drag = f_drag / mass
dydt = v
dvdt = -g + a_drag
return dydt, dvdt
# As always, let's test the slope function with the initial conditions.
slope_func(system.init, 0, system)
# We can use the same event function as in the previous chapter.
def event_func(state, t, system):
"""Return the height of the penny above the sidewalk.
"""
y, v = state
return y
# And then run the simulation.
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.5*s)
details.message
# Here are the results.
results
# The final height is close to 0, as expected.
#
# Interestingly, the final velocity is not exactly terminal velocity, which suggests that there are some numerical errors.
#
# We can get the flight time from `results`.
t_sidewalk = get_last_label(results)
# Here's the plot of position as a function of time.
# +
def plot_position(results):
plot(results.y)
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
savefig('figs/chap09-fig02.pdf')
# -
# And velocity as a function of time:
# +
def plot_velocity(results):
plot(results.v, color='C1', label='v')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results)
# -
# From an initial velocity of 0, the penny accelerates downward until it reaches terminal velocity; after that, velocity is constant.
# **Exercise:** Run the simulation with an initial velocity, downward, that exceeds the penny's terminal velocity. Hint: You can create a new `Params` object based on an existing one, like this:
#
# `params = Params(params, v_init = -30 * m / s)`
#
# What do you expect to happen? Plot velocity and position as a function of time, and see if they are consistent with your prediction.
# +
# Solution goes here
# -
plot_position(results)
# +
# Solution goes here
# -
# **Exercise:** Suppose we drop a quarter from the Empire State Building and find that its flight time is 19.1 seconds. Use this measurement to estimate the terminal velocity.
#
# 1. You can get the relevant dimensions of a quarter from https://en.wikipedia.org/wiki/Quarter_(United_States_coin).
#
# 2. Create a `Params` object with the system parameters. We don't know `v_term`, so we'll start with the initial guess `v_term = 18 * m / s`.
#
# 3. Use `make_system` to create a `System` object.
#
# 4. Call `run_ode_solver` to simulate the system. How does the flight time of the simulation compare to the measurement?
#
# 5. Try a few different values of `v_term` and see if you can get the simulated flight time close to 19.1 seconds.
#
# 6. Optionally, write an error function and use `fsolve` to improve your estimate.
#
# 7. Use your best estimate of `v_term` to compute `C_d`.
#
# Note: I fabricated the observed flight time, so don't take the results of this exercise too seriously.
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# -
# ### Bungee jumping
# Suppose you want to set the world record for the highest "bungee dunk", [as shown in this video](https://www.youtube.com/watch?v=UBf7WC19lpw). Since the record is 70 m, let's design a jump for 80 m.
#
# We'll make the following modeling assumptions:
#
# 1. Initially the bungee cord hangs from a crane with the attachment point 80 m above a cup of tea.
#
# 2. Until the cord is fully extended, it applies no force to the jumper. It turns out this might not be a good assumption; we will revisit it.
#
# 3. After the cord is fully extended, it obeys [Hooke's Law](https://en.wikipedia.org/wiki/Hooke%27s_law); that is, it applies a force to the jumper proportional to the extension of the cord beyond its resting length.
#
# 4. The jumper is subject to a drag force proportional to the square of their velocity, directed opposite to their motion.
#
# Our objective is to choose the length of the cord, `L`, and its spring constant, `k`, so that the jumper falls all the way to the tea cup, but no farther!
#
# First I'll create a `Param` object to contain the quantities we'll need:
#
# 1. Let's assume that the jumper's mass is 75 kg.
#
# 2. With a terminal velocity of 60 m/s.
#
# 3. The length of the bungee cord is `L = 25 m`.
#
# 4. The spring constant of the cord is `k = 40 N / m` when the cord is stretched, and 0 when it's compressed.
#
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
params = Params(y_attach = 80 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 75 * kg,
area = 1 * m**2,
rho = 1.2 * kg/m**3,
v_term = 60 * m / s,
L = 25 * m,
k = 40 * N / m)
# Now here's a version of `make_system` that takes a `Params` object as a parameter.
#
# `make_system` uses the given value of `v_term` to compute the drag coefficient `C_d`.
def make_system(params):
"""Makes a System object for the given params.
params: Params object
returns: System object
"""
unpack(params)
C_d = 2 * mass * g / (rho * area * v_term**2)
init = State(y=y_attach, v=v_init)
t_end = 20 * s
return System(params, C_d=C_d,
init=init, t_end=t_end)
# Let's make a `System`
system = make_system(params)
# `spring_force` computes the force of the cord on the jumper:
def spring_force(y, system):
"""Computes the force of the bungee cord on the jumper:
y: height of the jumper
Uses these variables from system|
y_attach: height of the attachment point
L: resting length of the cord
k: spring constant of the cord
returns: force in N
"""
unpack(system)
distance_fallen = y_attach - y
if distance_fallen <= L:
return 0 * N
extension = distance_fallen - L
f_spring = k * extension
return f_spring
# The spring force is 0 until the cord is fully extended. When it is extended 1 m, the spring force is 40 N.
spring_force(80*m, system)
spring_force(55*m, system)
spring_force(54*m, system)
# `drag_force` computes drag as a function of velocity:
def drag_force(v, system):
"""Computes drag force in the opposite direction of `v`.
v: velocity
system: System object
returns: drag force
"""
unpack(system)
f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2
return f_drag
# Here's the drag force at 60 meters per second.
v = -60 * m/s
f_drag = drag_force(v, system)
# Acceleration due to drag at 60 m/s is approximately g, which confirms that 60 m/s is terminal velocity.
a_drag = f_drag / system.mass
# Now here's the slope function:
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing g, rho,
C_d, area, and mass
returns: derivatives of y and v
"""
y, v = state
unpack(system)
a_drag = drag_force(v, system) / mass
a_spring = spring_force(y, system) / mass
dvdt = -g + a_drag + a_spring
return v, dvdt
# As always, let's test the slope function with the initial params.
slope_func(system.init, 0, system)
# And then run the simulation.
results, details = run_ode_solver(system, slope_func, max_step=0.3*s)
details
# Here's the plot of position as a function of time.
plot_position(results)
# After reaching the lowest point, the jumper springs back up to almost 70 m, and oscillates several times. That looks like more oscillation than we expect from an actual jump, which suggests that there is some dissipation of energy in the real world that is not captured in our model. To improve the model, that might be a good thing to investigate.
#
# But since we are primarily interested in the initial descent, the model might be good enough for now.
#
# We can use `min` to find the lowest point:
min(results.y)
# At the lowest point, the jumper is still too high, so we'll need to increase `L` or decrease `k`.
# Here's velocity as a function of time:
plot_velocity(results)
# +
subplot(1, 2, 1)
plot_position(results)
subplot(1, 2, 2)
plot_velocity(results)
savefig('figs/chap09-fig03.pdf')
# -
# Although we compute acceleration inside the slope function, we don't get acceleration as a result from `run_ode_solver`.
#
# We can approximate it by computing the numerical derivative of the velocity series `results.v`:
a = gradient(results.v)
plot(a)
decorate(xlabel='Time (s)',
ylabel='Acceleration (m/$s^2$)')
# And we can compute the maximum acceleration the jumper experiences:
max_acceleration = max(a) * m/s**2
# Relative to the acceleration of gravity, the jumper "pulls" about "1.7 g's".
max_acceleration / g
# ### Under the hood
#
# The gradient function in `modsim.py` adapts the NumPy function of the same name so it works with `Series` objects.
#
# %psource gradient
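# For reference, NumPy's `np.gradient` (which `modsim.gradient` adapts) uses central differences in the interior and one-sided differences at the endpoints. A minimal standalone example:

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = t ** 2                  # dy/dt = 2t
dydt = np.gradient(y, t)
print(dydt)  # [1. 2. 4. 6. 7.] -- interior points match 2t exactly
```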
# ### Solving for length
#
# Assuming that `k` is fixed, let's find the length `L` that makes the minimum altitude of the jumper exactly 0.
# The metric we are interested in is the lowest point of the first oscillation. For both efficiency and accuracy, it is better to stop the simulation when we reach this point, rather than run past it and then compute the minimum.
#
# Here's an event function that stops the simulation when velocity is 0.
def event_func(state, t, system):
"""Return velocity.
"""
y, v = state
return v
# As usual, we should test it with the initial conditions.
event_func(system.init, 0, system)
# And we see that we have a problem. Since the event function returns 0 under the initial conditions, the simulation would stop immediately. We can solve that problem by specifying the direction of the event function:
event_func.direction = +1
# When direction is positive, it only stops the simulation if the velocity is 0 and increasing, which is what we want.
# Now we can test it and confirm that it stops at the bottom of the jump.
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.3*s)
plot_position(results)
min(results.y)
# **Exercise:** Write an error function that takes `L` and `params` as arguments, simulates a bungee jump, and returns the lowest point.
#
# Test the error function with a guess of 25 m and confirm that the return value is about 5 meters.
#
# Use `fsolve` with your error function to find the value of `L` that yields a perfect bungee dunk.
#
# Run a simulation with the result from `fsolve` and confirm that it works.
#
# +
# Solution goes here
# -
#
# +
# Solution goes here
# +
# Solution goes here
# +
# Solution goes here
# -
# **Optional exercise:** Search for the combination of length and spring constant that yields minimum height 0 while minimizing peak acceleration.
# +
# Solution goes here
# +
# Solution goes here
# -
| code/chap21.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Running a Notebook
#
# Now that you have learned to open and close notebooks, this section will teach you how to run an analysis using one of the existing Jupyter notebooks.
#
# ## Overview
#
# Jupyter is an interactive coding environment compatible with the programming language Python. Digital Earth Africa is based on a Python library called the [Open Data Cube](https://www.opendatacube.org/), so the Sandbox contains Python-based notebooks.
#
# It is helpful, but not required, to be familiar with Python programming. More detail on Python can be found in [Extra session: Python basics](../python_basics/index.rst), which is an **optional** module that provides greater context into common Python commands used in the Digital Earth Africa platform. It is recommended to first follow the steps below to set up a folder in the Sandbox environment before completing the Python basics module.
# ## Video: Working with notebooks
# This video gives an overview of the notebook we will be working with. Watch the video, and then follow the written instructions below to complete the exercise.
# + raw_mimetype="text/restructuredtext" active=""
# .. youtube:: ecVjImPy2_A
# :width: 100%
# -
# ## Exercise: Run the Crop Health notebook
#
# This activity will demonstrate how to run one of the Digital Earth Africa Sandbox notebooks. The notebook is a real world example showing how to measure crop health. Follow the instructions below to run the notebook.
# ### Create a copy of the notebook
#
# The Sandbox comes with a large number of existing notebooks, which you can use as reference material. To keep these in their original format, we recommend making a copy of a notebook when you want to run it. For the rest of this training material, we will ask you to make copies of notebooks and store them in a new **Training** folder. Follow the instructions below to set up this folder and add a copy of the notebook for this session.
#
# 1. Click the **Folder icon** in the folder path to make sure you're in the **Home** folder.
#
# <img align="middle" src="../_static/session_1/crophealth1.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="500">
# 2. Click the **New Folder icon** and name the new folder **Training**. Press **Enter** to finish creating the folder.
#
# <img align="middle" src="../_static/session_1/crophealth2.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="500">
#
# The next step is to identify and copy the notebook you want to work on. For this exercise, this will be the `Crop_health.ipynb` notebook, which is in the **Real_world_examples** folder. Follow the steps below to copy it.
# 3. Double-click the **Real_world_examples** folder to open it. The `Crop_health.ipynb` notebook is selected in the image below.
#
# <img align="middle" src="../_static/session_1/crophealth3.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
# 4. Next, right-click on the notebook and click **Duplicate** to create a copy. By default, this will be called `Crop_health-Copy1.ipynb`.
# <img align="middle" src="../_static/session_1/crophealth4.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
# 5. Click and drag `Crop_health-Copy1.ipynb` to the **Folder icon** to move it to the **Home** folder.
#
# <img align="middle" src="../_static/session_1/crophealth5.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
# 6. Return to the **Home** folder by clicking the **Folder icon**. You should see the `Crop_health-Copy1.ipynb` notebook. Click and drag this file to the **Training** folder you created.
#
# <img align="middle" src="../_static/session_1/crophealth6.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
# 7. Double-click the **Training** folder. Right-click the copied notebook and click **Rename**. Rename the notebook to `Crop_health.ipynb`.
#
# <img align="middle" src="../_static/session_1/crophealth7.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
# <img align="middle" src="../_static/session_1/crophealth8.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
#
# You can now run this notebook, as well as make edits to it.
# ### Starting the notebook
#
# Notebooks on the Sandbox come pre-run. It is best practice to restart the kernel and clear all outputs before running a notebook yourself.
#
# 1. Double-click your copied `Crop_health.ipynb` notebook to open it.
#
# 2. In the main menu, click **Kernel -> Restart Kernel and Clear All Outputs**. When asked whether you want to restart the kernel, click **Restart**.
#
# <img align="middle" src="../_static/session_1/crophealth9.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
#
# 3. Read the **Background** and **Description** sections to learn more about the notebook.
# ### Loading packages and functions
#
# This notebook uses Python packages and functions to conduct the analysis. These are contained in the first code cell, which looks like:
#
# <img align="middle" src="../_static/session_1/crophealth10.PNG" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
# To load the necessary packages and functions:
#
# 1. In the `Crop_health.ipynb` notebook, click on the code cell that matches the one above.
#
# 2. Press `Shift + Enter` on your keyboard to run the cell.
#
# 3. When the cell has finished running, the `[ ]` icon to the left of the cell should change to `[1]`, as shown below. This indicates that the cell has finished running. The number `1` indicates that this was the first cell run in the notebook.
#
# <img align="middle" src="../_static/session_1/crophealth11.PNG" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
# ### Pick a study site
#
# Next, we have to provide the area of interest using latitude, longitude, and a buffer, which describes how many square degrees to load around the set latitude and longitude values. We also set a date. Data from two years up to this date will be loaded.
#
# For now, keep the values that have been set. The notebook will turn this into a bounding box, which will be used to load the data.
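# The idea of turning a centre point and a buffer into a bounding box can be
# sketched with a small hypothetical helper (the notebook's own loading
# function handles this internally; the function name and values below are
# illustrative only):

```python
def make_bounding_box(lat, lon, buffer):
    """Return (lat_range, lon_range) extending `buffer` degrees
    either side of a central latitude/longitude point."""
    return (lat - buffer, lat + buffer), (lon - buffer, lon + buffer)

# a hypothetical study site with a 0.1-degree buffer
lat_range, lon_range = make_bounding_box(-20.0, 30.0, 0.1)
```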
#
# 1. Click on the cell corresponding to the image below in the notebook.
#
# <img align="middle" src="../_static/session_1/crophealth12.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
#
# 2. Press `Shift + Enter` on your keyboard to run the cell.
#
# 3. When the cell has finished running, the `[ ]` icon to the left of the cell should change to `[2]`.
# ### Loading the data
#
# This notebook uses a special function called `load_crophealth_data` to load the required data. Later in this training course, you will learn how to write your own data loading commands.
#
# The cell containing this function is shown in the image below:
# <img align="middle" src="../_static/session_1/crophealth13.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
# The `load_crophealth_data` function goes through a number of steps. It identifies the available Sentinel-2 data over the last two years, then masks the bad quality pixels such as cloud and cloud shadow. After the masking step, the function only keeps images where more than half the pixels are good. Finally, it calculates the Normalised Difference Vegetation Index (NDVI) and returns the data. The returned data are stored in the `dataset` object.
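# The NDVI calculation itself is a simple band ratio. The sketch below is
# illustrative only — `load_crophealth_data` computes this from the
# Sentinel-2 red and near-infrared bands internally:

```python
import numpy as np

def ndvi(red, nir):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Healthy vegetation reflects strongly in the near-infrared, so
    NDVI approaches 1; bare ground gives values near 0.
    """
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

# high NIR relative to red -> high NDVI; equal bands -> NDVI of 0
values = ndvi([0.1, 0.3], [0.9, 0.3])
```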
# To run the cell:
#
# 1. Click on the cell corresponding to the image above in the notebook.
#
# 2. Press `Shift + Enter` on your keyboard to run the cell. Outputs should start to appear below the cell.
#
# 3. When the cell has finished running, the `[ ]` icon to the left of the cell should change to `[3]`.
# As the cell is running, you should see outputs describing the function's actions. The output is shown in the image below:
#
# <img align="middle" src="../_static/session_1/crophealth14.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
# In this case, the function identified 145 observations over the last 2 years, then kept and loaded 100 that met the quality requirements.
# ### Run the Crop Health App
#
# After loading the data, you can visualize the health of various fields using the `run_crophealth_app` function. This is a special function for this notebook, which takes the `dataset` you loaded, as well as the latitude, longitude and buffer parameters that define the area of interest. It then starts an interactive app that you can use to measure the average NDVI in different fields.
#
# The cell containing this function is shown in the image below:
#
# <img align="middle" src="../_static/session_1/crophealth15.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
# 1. Click on the cell corresponding to the image above in the notebook.
#
# 2. Press `Shift + Enter` on your keyboard to run the cell. An interactive app should appear, as shown in the image below:
#
# <img align="middle" src="../_static/session_1/crophealth16.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
#
# 3. The left-hand side of the app contains a map, which will allow you to draw a polygon around a field of interest. Click the **Polygon icon**, then click points on the map to outline an area, as shown below:
#
# <img align="middle" src="../_static/session_1/crophealth17.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
#
# + raw_mimetype="text/restructuredtext" active=""
# .. note::
# The map in this app shows an ESRI basemap which is higher resolution than the Sentinel-2 data used to calculate the NDVI index. The purpose of the map is to guide you in drawing field boundaries.
#
# -
# 4. Click the first point of the polygon to finish selecting the area. The app will then calculate the average NDVI for that area and display it on the right-hand side of the app, as shown below:
#
# <img align="middle" src="../_static/session_1/crophealth18.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
#
# The cycles in the graph are likely related to growth and harvesting events.
#
# 5. It is possible to draw multiple polygons and compare the vegetation index over time of different fields. Click the **Polygon icon** to draw a second polygon:
#
# <img align="middle" src="../_static/session_1/crophealth19.png" alt="The DE Africa Sandbox Jupyterlab tutorial image." width="800">
# ## Conclusion
#
# Congratulations! You have finished the first session of this training! You now know how to open and close notebooks, make copies of notebooks and run them.
# ## Optional activity
#
# If you would like to re-run this notebook using a different area of interest, follow these steps:
#
# 1. Restart the notebook and clear all outputs.
#
# 2. Run the cell that imports packages and functions.
#
# 3. Change the latitude, longitude and buffer values in the second code cell. You can either use the values suggested above the cell, or provide your own. Then run the cell.
#
# + raw_mimetype="text/restructuredtext" active=""
# .. note::
# If you're going to use your own area of interest, make sure data is available by looking at the Sandbox Metadata Explorer for Sentinel-2 at https://explorer.digitalearth.africa/s2_l2a. You will also need to make sure you provide latitude and longitude values (rather than Easting and Northing values).
#
# -
# 4. Run the rest of the notebook as described in the tutorial, drawing in areas of interest using the **Polygon icon** on the map. Did you learn anything interesting about the fields for your area of interest?
# 5. Investigate the **optional** [Python basics extra session](../python_basics/index.rst) before continuing on to Session 2. If you are new to Python, it may be helpful for context and reference. If you have previous experience in Python, it provides a review of the major concepts used in the Digital Earth Africa training course.
| docs/session_1/04_running_a_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# http://pandas.pydata.org/pandas-docs/stable/10min.html
# # 10 Minutes to pandas
#
# This is a short introduction to pandas, geared mainly for new users. You can see more complex recipes in the [Cookbook](http://pandas.pydata.org/pandas-docs/stable/cookbook.html#cookbook)
#
#
# #### This version is the same as the publicly available version for getting started with pandas, but with edits, comments, and some small additions for the Data-X course at Berkeley. Edited, <NAME>, Jan 2017
# Customarily, we import as follows:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# this line makes plots/graphs appear within the notebook
# %matplotlib inline
# ## Object Creation
#
# See the [Data Structure Intro section](http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dsintro)
# Creating a Series by passing a list of values, letting pandas create a default integer index:
# This is a series; it's like an array, but with an index
s = pd.Series([1,3,5,np.nan,6,8])
s
# Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns:
# We will look at the time series index separately, but it's also in this notebook.
dates = pd.date_range('20130101', periods=6)
# dates is a time series object used as an index
dates
# The DataFrame below is from an np.array and column list
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
df
# Creating a DataFrame by passing a dict of objects that can be converted to series-like.
# See below that all the columns can be different types, created from a dictionary.
df2 = pd.DataFrame({'A':1.,
'B':pd.Timestamp('20130102'),
'C':pd.Series(1,index=list(range(4)),dtype='float32'),
'D':np.array([3]*4,dtype='int32'),
'E':pd.Categorical(["test","train","test","train"]),
'F':'foo'})
df2
# Having specific [dtypes](http://pandas.pydata.org/pandas-docs/stable/basics.html#basics-dtypes)
# type for each column
df2.dtypes
# If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled. Here’s a subset of the attributes that will be completed:
# +
# df2.<TAB>
# -
# As you can see, the columns A, B, C, and D are automatically tab completed. E is there as well; the rest of the attributes have been truncated for brevity.
# ## Viewing Data
#
# See the [Basics section](http://pandas.pydata.org/pandas-docs/stable/basics.html#basics)
# See the top & bottom rows of the frame
# as before, head(n) gives you the first n rows, defaults to 5
df.head()
df.tail(3)
# Display the index, columns, and the underlying numpy data
# if you want the index by itself, use .index
df.index
# df.index[2:4]
# Here is a list of the columns
df.columns
#df.columns[2]
#df.columns[2:4]
# df.values extracts only the data in np.array
df.values
# Describe shows a quick statistic summary of your data
# A quick way to get statistics
df.describe()
# df.describe()['A'][1]
# df.describe()[2:3]
# Transposing your data
# yes, this is the pandas method
df.T
# Sorting by an axis
# recall df
df
# See that it's using the header to sort
df.sort_index(axis=1, ascending=False)
# try df[2:3].sort_index(axis=1, ascending=False)
# Sorting by value
df.sort_values(by='B')
# ## Selection
# **Note:** While standard Python / Numpy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code, we recommend the optimized pandas data access methods, .at, .iat, .loc and .iloc (the older .ix accessor has since been deprecated).
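# For example, the scalar accessors pick out a single value directly, whereas
# chained indexing like `df['A']['x']` can silently operate on a copy. A small
# self-contained illustration:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(12).reshape(3, 4),
                  index=list('xyz'), columns=list('ABCD'))

by_label = df.loc['y', 'B']     # label-based scalar access
by_position = df.iloc[1, 1]     # position-based scalar access
fast = df.at['y', 'B']          # optimized label-based scalar access
```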
# See the indexing documentation [Indexing and Selecting Data](http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing) and [MultiIndex / Advanced Indexing](http://pandas.pydata.org/pandas-docs/stable/advanced.html#advanced)
# ### Getting
# Selecting a single column, which yields a Series, equivalent to df.A
df['A']
# Selecting via [], which slices the rows.
# A slice: by rows (row numbers)
df[1:3]
# A slice: by rows (by range of values in index)
df['20130102':'20130104']
# ### Selection by Label
#
# See more in [Selection by Label](http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-label)
# For getting a cross section using a label
# +
# Introduce loc: this will get you a cross section of the table by label range
# df.loc[a:b, x:y], by rows and column location
df.loc['20130102':'20130104','B':'D']
#df.loc[dates[0]]
# df[0:1]
# df[a:b] by rows
# df[[col]] or df[[list of col]] by columns
# df.loc[a:b, x:y], by rows and column location
# df.iloc[3:5,0:2], by slicing by specific position
# -
# Selection by Label
#In this case, the columns are in a list
df.loc[:,['A','B']]
# Showing label slicing, both endpoints are included
# more slicng with loc
df.loc['20130102':'20130104',['A','B']]
# Reduction in the dimensions of the returned object
# +
df.loc['20130102',['A','B']]
# -
# For getting a scalar value
# or get just one value
df.loc[dates[0],'A']
# ### Selection by Position
#
# See more in [Selection by Position](http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-integer)
# Select via the position of the passed integers
# recall df
df
# Introduce iloc: it's like loc, but uses index position instead
df.iloc[3]
# By integer slices, acting similar to numpy/python
# Get a cross section by index positions
df.iloc[3:5,0:2]
# By lists of integer position locations, similar to the numpy/python style
# iloc will accept lists of position numbers
df.iloc[[1,2,4],[0,2]]
# For slicing rows explicitly
df.iloc[1:3,:]
# For slicing columns explicitly
df.iloc[:,1:3]
# For getting a value explicitly
df.iloc[1,1]
# For getting fast access to a scalar (equiv to the prior method)
# same as above but faster for one single scaler value
df.iat[1,1]
# ## Boolean Indexing
# Using a single column’s values to select data.
# +
print(df)
# this will be only the rows where the A column is > 0
print(df[df.A > 0])
print()
# same result, using bracket indexing for the column
print(df['A'] > 0)
print(df[df['A'] > 0])
# -
# A where operation for getting.
# Try this to see which elements are > 0
print(df > 0)
# Then show only those values
df[df > 0]
# Using the isin() method for filtering:
df2 = df.copy()
# We are about to add a new column to df2
df2['E'] = ['one','one', 'two','three','four','three']
df2
# +
# We use isin to get only the rows that have 'two' and 'four'
print(df2['E'].isin(['two','four']))
df2[df2['E'].isin(['two','four'])]
# -
# ## Setting
# Setting a new column automatically aligns the data by the indexes
s1 = pd.Series([1,2,3,4,5,6], index=pd.date_range('20130102',periods=6))
s1
# Many of the same indexing and slicing methods can be used also to set data
df['F'] = s1
df
# Setting values by label
# look at the first row and column
df.at[dates[0],'A'] = 0
df
# Setting values by position
df.iat[0,1] = 0
df
# Setting by assigning with a numpy array
df.loc[:,'D'] = np.array([5] * len(df))
df
# The result of the prior setting operations
df
# A where operation with setting.
# making a copy
df2 = df.copy()
df2
# flip the sign of the values where df2 < 0, making them all non-negative
df2[df2 < 0] = -df2
df2
# ## Missing Data
#
# pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section
# Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.
# df1 will have a new index and a new column E
print(df)
df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
df1
# Set E in the first 2 rows
df1.loc[dates[0]:dates[1],'E'] = 1
df1
# To drop any rows that have missing data.
# Drop any rows that contain a NaN value (returns a new DataFrame)
df1.dropna(how='any')
# Filling missing data
# fill NaN values with 5
df1.fillna(value=5)
# To get the boolean mask where values are nan
# returns a boolean mask for null entries
print(df1)
pd.isnull(df1)
# ## Operations
#
# See the [Basic section on Binary Ops](http://pandas.pydata.org/pandas-docs/stable/basics.html#basics-binop)
# ### Stats
#
# Operations in general exclude missing data.
# Performing a descriptive statistic
df.mean()
# Same operation on the other axis
df.mean(1)
# df.mean(axis = 1)
# Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.
# This Series has 2 rows shifted down with NaN inserted into the first positions
s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(2)
s
print(df)
df.sub(s, axis='index')
# ### Apply
# Applying functions to the data
# apply below will operate on every column
df.apply(np.cumsum)
df.apply(lambda x: x.max() - x.min())
# ### Histogramming
#
# See more at [Histogramming and Discretization](http://pandas.pydata.org/pandas-docs/stable/basics.html#basics-discretization)
s = pd.Series(np.random.randint(0, 7, size=10))
s
s.value_counts()
# ### String Methods
#
# Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses [regular expressions](https://docs.python.org/2/library/re.html) by default (and in some cases always uses them). See more at [Vectorized String Methods](http://pandas.pydata.org/pandas-docs/stable/text.html#text-string-methods).
s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
s.str.lower()
# ## Merge
# ### Concat
#
# pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
#
# See the [Merging section](http://pandas.pydata.org/pandas-docs/stable/merging.html#merging)
# Concatenating pandas objects together with concat():
df = pd.DataFrame(np.random.randn(10, 4))
df
# break it into pieces
pieces = [df[:3], df[3:7], df[7:]]
pd.concat(pieces)
# ### Join
#
# SQL style merges. See the [Database style joining](http://pandas.pydata.org/pandas-docs/stable/merging.html#merging-join)
left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})
left
right
pd.merge(left, right, on='key')
# ### Append
#
# Append rows to a dataframe. See the [Appending](http://pandas.pydata.org/pandas-docs/stable/merging.html#merging-concatenation)
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
df
s = df.iloc[3]
df.append(s, ignore_index=True)
# ## Grouping
#
# By “group by” we are referring to a process involving one or more of the following steps
#
# * **Splitting** the data into groups based on some criteria
# * **Applying** a function to each group independently
# * **Combining** the results into a data structure
#
# See the [Grouping section](http://pandas.pydata.org/pandas-docs/stable/groupby.html#groupby)
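# The three steps can be spelled out by hand on a small frame — a didactic
# sketch of what `groupby` does, not how pandas actually implements it:

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar'],
                   'C': [1, 2, 3, 4]})

# Splitting: one sub-frame per distinct key
groups = {key: sub for key, sub in df.groupby('A')}

# Applying: a function (here, sum) to each group independently
sums = {key: sub['C'].sum() for key, sub in groups.items()}

# Combining: the per-group results back into one structure
combined = pd.Series(sums).sort_index()

# groupby('A')['C'].sum() performs all three steps in one call
grouped = df.groupby('A')['C'].sum().sort_index()
```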
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
'B' : ['one', 'one', 'two', 'three','two', 'two', 'one', 'three'],
'C' : np.random.randn(8),
'D' : np.random.randn(8)})
df
# Grouping and then applying a function sum to the resulting groups.
df.groupby('A').sum()
df.groupby(['A','B']).sum()
# ## Reshaping
#
# See the sections on [Hierarchical Indexing](http://pandas.pydata.org/pandas-docs/stable/advanced.html#advanced-hierarchical) and [Reshaping](http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-stacking).
# ### Stack
tuples = list(zip(*[['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])
df2 = df[:4]
df2
# The stack() method “compresses” a level in the DataFrame’s columns.
stacked = df2.stack()
stacked
# With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the **last level**:
stacked.unstack()
stacked.unstack(1)
stacked.unstack(0)
# ### Pivot Tables
#
# See the section on [Pivot Tables](http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-pivot).
df = pd.DataFrame({'A' : ['one', 'one', 'two', 'three'] * 3,
'B' : ['A', 'B', 'C'] * 4,
'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
'D' : np.random.randn(12),
'E' : np.random.randn(12)})
df
# We can produce pivot tables from this data very easily:
pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
# ## Time Series
#
# pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications. See the [Time Series section](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#timeseries)
rng = pd.date_range('1/1/2012', periods=100, freq='S')
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
ts.resample('5Min').sum()
# Time zone representation
rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')
ts = pd.Series(np.random.randn(len(rng)), rng)
ts
ts_utc = ts.tz_localize('UTC')
ts_utc
#
# Convert to another time zone
ts_utc.tz_convert('US/Eastern')
#
# Converting between time span representations
rng = pd.date_range('1/1/2012', periods=5, freq='M')
ts = pd.Series(np.random.randn(len(rng)), index=rng)
ts
ps = ts.to_period()
ps
ps.to_timestamp()
# Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:
prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
ts = pd.Series(np.random.randn(len(prng)), prng)
ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9
ts.head()
# ## Categoricals
# Since version 0.15, pandas can include categorical data in a DataFrame. For full docs, see the [categorical introduction](http://pandas.pydata.org/pandas-docs/stable/categorical.html#categorical) and the [API documentation](http://pandas.pydata.org/pandas-docs/stable/api.html#api-categorical).
df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
# Convert the raw grades to a categorical data type.
df["grade"] = df["raw_grade"].astype("category")
df["grade"]
# Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace!)
df["grade"].cat.categories = ["very good", "good", "very bad"]
# Reorder the categories and simultaneously add the missing categories (methods under Series .cat return a new Series per default).
df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
df["grade"]
# Sorting is per order in the categories, not lexical order.
df.sort_values(by="grade")
# Grouping by a categorical column shows also empty categories.
df.groupby("grade").size()
# ## Plotting
# [Plotting](http://pandas.pydata.org/pandas-docs/stable/visualization.html#visualization) docs.
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
# plot a series almost automatically
ts.plot()
# On DataFrame, plot() is a convenience to plot all of the columns with labels:
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index,
columns=['A', 'B', 'C', 'D'])
df = df.cumsum()
plt.figure(); df.plot(); plt.legend(loc='best')
# ## Getting Data In/Out
# ### CSV
# [Writing to a csv file](http://pandas.pydata.org/pandas-docs/stable/io.html#io-store-in-csv)
df.to_csv('foo.csv')
# [Reading from a csv file](http://pandas.pydata.org/pandas-docs/stable/io.html#io-read-csv-table)
pd.read_csv('foo.csv')
# ### HDF5
# Reading and writing to [HDFStores](http://pandas.pydata.org/pandas-docs/stable/io.html#io-hdf5)
#
# Writing to a HDF5 Store
df.to_hdf('foo.h5','df')
# Reading from a HDF5 Store
pd.read_hdf('foo.h5','df')
# ### Excel
#
# Reading and writing to [MS Excel](http://pandas.pydata.org/pandas-docs/stable/io.html#io-excel)
#
# Writing to an excel file
df.to_excel('foo.xlsx', sheet_name='Sheet1')
# Reading from an excel file
pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
# ### Gotchas
# If you are trying an operation and you see an exception like:
if pd.Series([False, True, False]):
print("I was true")
# See [Comparisons](http://pandas.pydata.org/pandas-docs/stable/basics.html#basics-compare) for an explanation and what to do.
#
# See [Gotchas](http://pandas.pydata.org/pandas-docs/stable/gotchas.html#gotchas) as well.
| x-archive-temp/m120-pandas/10-minutes-to-pandas-w-data-x.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/papagorgio23/Python101/blob/master/Functional_Introduction_To_Python_Section_2(Functions).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ih7_1C9ABgBQ" colab_type="text"
# # A Functional Introduction To Python
# ## Section 1. Introductory Concepts
# ## <span style="color:blue"> Section 2. Functions</span>
# ## Section 3. Control Structures
# ## Section 4. Intermediate Topics
# + [markdown] id="JnB4dVXzBgBS" colab_type="text"
# ## Section 2: Functions
# * Writing Functions
# * Function arguments: positional, keyword
# * Functional Currying: Passing uncalled functions
# * Functions that Yield
# * Decorators: Functions that wrap other functions
# + [markdown] id="vKMM69OZBgBS" colab_type="text"
# ### Writing Functions
# Learning to write a function is the most fundamental skill to learn in Python. With a basic mastery of functions, it is possible to have an almost full command of the language.
#
# + [markdown] id="u-LBblHgBgBT" colab_type="text"
# ##### Simple function
# The simplest functions just return a value.
# + id="9GxpxebnBgBU" colab_type="code" colab={}
def favorite_martial_art():
return "bjj"
# + id="OC7mkbJDBgBY" colab_type="code" colab={} outputId="6bc0bb32-cbad-42b9-83fd-0ba00a436b66"
favorite_martial_art()
# + [markdown] id="GnXqA03kBgBc" colab_type="text"
# ##### Documenting Functions
# It is a very good idea to document functions.
# In Jupyter Notebook and IPython docstrings can be viewed by referring to the function with a ?. ie.
# ```
# In [2]: favorite_martial_art_with_docstring?
# Signature: favorite_martial_art_with_docstring()
# Docstring: This function returns the name of my favorite martial art
# File: ~/src/functional_intro_to_python/<ipython-input-1-bef983c31735>
# Type: function
# ```
# + id="1B3tanM6BgBc" colab_type="code" colab={}
def favorite_martial_art_with_docstring():
"""This function returns the name of my favorite martial art"""
return "bjj"
# + [markdown] id="xZfH4mpiBgBf" colab_type="text"
# ##### Docstrings of functions can be printed out by referring to ```__doc__```
# + id="i_gN1RIYBgBg" colab_type="code" colab={} outputId="0d8a50f7-10ef-446b-dd35-8db18b06484a"
favorite_martial_art_with_docstring.__doc__
# + [markdown] id="UTWsE5uKBgBi" colab_type="text"
# ### Function arguments: positional, keyword
# A function is most useful when arguments are passed to it.
# Each new value of `times` is processed inside the function.
# This function takes a 'positional' argument, as opposed to a keyword argument. Positional arguments are matched to parameters in the order they are passed.
# + id="9VbD8LxFBgBj" colab_type="code" colab={}
def practice(times):
print(f"I like to practice {times} times a day")
# + id="r5NhdNNFBgBn" colab_type="code" colab={} outputId="6739e22b-ab86-4531-e931-315a06f1a455"
practice(2)
# + id="FTIev4qKBgBq" colab_type="code" colab={} outputId="5c7f9074-bd51-400e-fe4e-a69534af66b2"
practice(3)
# + [markdown] id="4N6rEEeBBgBs" colab_type="text"
# ##### Positional Arguments are processed in order
# + id="xHgYlXo_BgBt" colab_type="code" colab={}
def practice(times, technique, duration):
print(f"I like to practice {technique}, {times} times a day, for {duration} minutes")
# + id="TFG-8BV-BgBv" colab_type="code" colab={} outputId="07d492ef-9533-48c1-daa9-54abade3750a"
practice(3, "leg locks", 45)
# + [markdown] id="YhpIul4bBgBy" colab_type="text"
# ##### Keyword Arguments are processed by key, value and can have default values
# One handy feature of keyword arguments is that you can set defaults and only change the defaults you want to change.
# + id="BBVNLV_0BgBz" colab_type="code" colab={}
def practice(times=2, technique="kimura", duration=60):
print(f"I like to practice {technique}, {times} times a day, for {duration} minutes")
# + id="n6a22A4yBgB1" colab_type="code" colab={} outputId="c69cde6b-3dae-4ae5-901c-ea6806bca113"
practice()
# + id="7QEqo17zBgB3" colab_type="code" colab={} outputId="d7b0ee58-6a97-4d04-a9e1-c334d551aa8b"
practice(duration=90)
# + [markdown] id="DYrzpCjUBgB5" colab_type="text"
# ##### \*\*kwargs and \*args
# * allow dynamic argument passing to functions
# * Should be used with discretion because it can make code hard to understand
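# The cells below demonstrate `**kwargs`; `*args` is the positional counterpart, collecting any number of positional arguments into a tuple. A minimal sketch (the function name here is illustrative, not from the notebook):

```python
def count_techniques(*args):
    """*args collects any number of positional arguments into a tuple"""
    for technique in args:
        print(f"This is a technique I would like to practice: {technique}")
    return len(args)

n = count_techniques("kimura", "straight_ankle_lock", "arm_triangle")
```

# Calling `count_techniques()` with no arguments returns 0, since `args` is then an empty tuple.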
# + id="Rb5_L5gzBgB6" colab_type="code" colab={}
def attack_techniques(**kwargs):
"""This accepts any number of keyword arguments"""
    for name, attack in kwargs.items():
        print(f"This is an attack I would like to practice: {attack}")
# + id="t6BPrkORBgB8" colab_type="code" colab={} outputId="0dbaf470-aa9d-47f7-89d4-4d27497da8da"
attack_techniques(arm_attack="kimura",
                  leg_attack="straight_ankle_lock", neck_attack="arm_triangle")
# + [markdown] id="10QxjsQXBgB_" colab_type="text"
# ##### Passing a dictionary of keywords to a function
# The `**kwargs` syntax can also be used to unpack a dictionary and pass its entries in as arguments all at once
# + id="QNZmC6I8BgB_" colab_type="code" colab={}
attacks = {"arm_attack":"kimura",
"leg_attack":"straight_ankle_lock",
"neck_attach":"arm_triangle"}
# + id="1roDZdJIBgCE" colab_type="code" colab={} outputId="cb7269c3-0720-4be7-8d01-fd210c86d8fa"
attack_techniques(**attacks)
# + [markdown] id="Q_wG3eOiBgCG" colab_type="text"
# ##### Passing Around Functions
# Object-Oriented programming is a very popular way to program, but it isn't the only style available in Python. For concurrency and for Data Science, functional programming fits as a complementary style.
#
# In the example below, one function is used inside another by being passed in as an argument.
# + id="24oJe8WABgCH" colab_type="code" colab={}
def attack_location(technique):
"""Return the location of an attack"""
attacks = {"kimura": "arm_attack",
"straight_ankle_lock":"leg_attack",
"arm_triangle":"neck_attach"}
if technique in attacks:
return attacks[technique]
return "Unknown"
# + id="XV-d_E9cBgCJ" colab_type="code" colab={} outputId="0d9dbedc-26ad-4ef2-c7de-d7491beef08a"
attack_location("kimura")
# + id="yrZZ_kOOBgCM" colab_type="code" colab={} outputId="da7c09b4-6454-4146-b8b1-4a0c96dce45e"
attack_location("bear hug")
# + id="Zz05n87eBgCO" colab_type="code" colab={}
def multiple_attacks(attack_location_function):
"""Takes a function that categorizes attacks and returns location"""
new_attacks_list = ["rear_naked_choke", "americana", "kimura"]
for attack in new_attacks_list:
attack_location = attack_location_function(attack)
print(f"The location of attack {attack} is {attack_location}")
# + id="2KZ67NklBgCQ" colab_type="code" colab={} outputId="333323d9-e552-4afa-d6d8-e6b1e1727054"
multiple_attacks(attack_location)
# + [markdown] id="IF_iF9vYBgCS" colab_type="text"
# ##### Closures and Functional Currying
# A closure is a nested function that captures variables from its enclosing function.
# In Python, a common way to use closures is to keep track of state.
# In the example below, the outer function attack_counter keeps track of counts of attacks.
# The inner function attack_filter uses the "nonlocal" keyword (Python 3) to modify the variables in the outer function.
#
# This approach is related to **"functional currying"**: it allows a specialized function to be created from a general function. As shown below, this style of function could be the basis of a simple video game or of the statistics crew at an MMA match.
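# Textbook currying is also available directly in the standard library via `functools.partial`, which builds a specialized function by fixing some arguments of a general one. A minimal sketch (the function names are illustrative):

```python
from functools import partial

def practice(times, technique):
    return f"I like to practice {technique}, {times} times a day"

# Fix the first positional argument; leave `technique` open
daily_practice = partial(practice, 3)
print(daily_practice("kimura"))  # I like to practice kimura, 3 times a day
```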
# + id="qIGw5zZoBgCT" colab_type="code" colab={}
def attack_counter():
"""Counts number of attacks on part of body"""
lower_body_counter = 0
upper_body_counter = 0
def attack_filter(attack):
nonlocal lower_body_counter
nonlocal upper_body_counter
attacks = {"kimura": "upper_body",
"straight_ankle_lock":"lower_body",
"arm_triangle":"upper_body",
"keylock": "upper_body",
"knee_bar": "lower_body"}
if attack in attacks:
if attacks[attack] == "upper_body":
upper_body_counter +=1
if attacks[attack] == "lower_body":
lower_body_counter +=1
print(f"Upper Body Attacks {upper_body_counter}, Lower Body Attacks {lower_body_counter}")
return attack_filter
# + id="KCUufx9WBgCW" colab_type="code" colab={}
fight = attack_counter()
# + id="wytzfX7FBgCY" colab_type="code" colab={} outputId="26137f6a-330e-4268-e462-5f7eb0c09b0e"
fight("kimura")
# + id="WRFAi4pJBgCa" colab_type="code" colab={} outputId="9c6b0ac7-d1b0-47d3-cd58-17c1a3b46c0f"
fight("knee_bar")
# + id="XTNalJ2MBgCc" colab_type="code" colab={} outputId="ad947e3d-e057-47a6-d982-36299482d916"
fight("keylock")
# + [markdown] id="7dmTkIxTBgCe" colab_type="text"
# ##### Functions that Yield (Generators)
# A very useful style of programming is "lazy evaluation", and a generator is an example of it. A generator yields one item at a time.
#
# The example below returns an "infinite" random sequence of attacks. The lazy part is that although there are infinitely many values, each one is only produced when the generator is asked for the next item.
# + id="NrftkeOFBgCf" colab_type="code" colab={}
def lazy_return_random_attacks():
"""Yield attacks each time"""
import random
attacks = {"kimura": "upper_body",
"straight_ankle_lock":"lower_body",
"arm_triangle":"upper_body",
"keylock": "upper_body",
"knee_bar": "lower_body"}
    while True:
        # random.choices returns a one-element list; random.choice would return a bare string
        random_attack = random.choices(list(attacks.keys()))
        yield random_attack
# + id="MZvYYyPHBgCi" colab_type="code" colab={}
attack = lazy_return_random_attacks()
# + id="cza5uvCpBgCl" colab_type="code" colab={} outputId="a596af03-0c48-4e77-b007-6d53bbc3953c"
type(attack)
# + id="eBZS5GB6BgCn" colab_type="code" colab={}
attacks = {"kimura": "upper_body",
"straight_ankle_lock":"lower_body",
"arm_triangle":"upper_body",
"keylock": "upper_body",
"knee_bar": "lower_body"}
# + id="LVqtg5ksBgCp" colab_type="code" colab={} outputId="17b4e8e2-4d93-4edf-e538-f826b8b97904"
for _ in range(3):
print(next(attack))
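# Calling `next` by hand works, but `itertools.islice` is the usual way to take a bounded slice from an infinite generator without materializing it. A sketch with a stand-in infinite generator (the counter is illustrative, not from the notebook):

```python
import itertools

def infinite_counter():
    """A minimal infinite generator, standing in for the attack generator"""
    n = 0
    while True:
        yield n
        n += 1

first_five = list(itertools.islice(infinite_counter(), 5))
print(first_five)  # [0, 1, 2, 3, 4]
```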
# + [markdown] id="4eS04W9zBgCs" colab_type="text"
# ##### Decorators: Functions that wrap other functions
# Another useful technique in Python is to use the decorator syntax to wrap one function with another function.
# In the example below, a decorator is written that adds random sleep to each function call. When combined with the previous "infinite" attack generator, it generates random sleeps between each function call.
# + id="80MTCHagBgCt" colab_type="code" colab={}
def randomized_speed_attack_decorator(function):
"""Randomizes the speed of attacks"""
import time
import random
def wrapper_func(*args, **kwargs):
sleep_time = random.randint(0,3)
print(f"Attacking after {sleep_time} seconds")
time.sleep(sleep_time)
return function(*args, **kwargs)
return wrapper_func
# + id="8c8lz2O4BgCv" colab_type="code" colab={}
@randomized_speed_attack_decorator
def lazy_return_random_attacks():
"""Yield attacks each time"""
import random
attacks = {"kimura": "upper_body",
"straight_ankle_lock":"lower_body",
"arm_triangle":"upper_body",
"keylock": "upper_body",
"knee_bar": "lower_body"}
while True:
random_attack = random.choices(list(attacks.keys()))
yield random_attack
# + id="OS-B9nyWBgCy" colab_type="code" colab={} outputId="6ef147b0-160d-4824-eea7-e04e1825ef0b"
for _ in range(10):
    # Note: a fresh decorated generator is created on each iteration,
    # so the random sleep runs before every attack
    print(next(lazy_return_random_attacks()))
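# One caveat with hand-written decorators: the wrapper function replaces the original's `__name__` and docstring. The standard-library `functools.wraps` preserves them; a minimal sketch (the logging decorator is illustrative):

```python
import functools

def logging_decorator(function):
    @functools.wraps(function)  # copies __name__ and __doc__ onto the wrapper
    def wrapper(*args, **kwargs):
        print(f"Calling {function.__name__}")
        return function(*args, **kwargs)
    return wrapper

@logging_decorator
def favorite_martial_art():
    """Returns the name of my favorite martial art"""
    return "bjj"

print(favorite_martial_art.__name__)  # 'favorite_martial_art', not 'wrapper'
```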
# + [markdown] id="s4CKU0KdBgC2" colab_type="text"
# ##### Applying a Function to a Pandas DataFrame
# The final lesson on functions is to take this knowledge and use it on a Pandas DataFrame. One of the more fundamental idioms in Pandas is to use apply on a column instead of iterating through all of the values. An example is shown below where the values are rounded to the nearest whole number.
# + id="3RLadReaBgC2" colab_type="code" colab={} outputId="9e993d03-fd70-4450-9aaf-d57e0f4f90c2"
import pandas as pd
iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
iris.head()
# + id="aF4ReWDJBgC4" colab_type="code" colab={} outputId="631d885e-b921-47eb-c805-a5e60829278a"
iris['rounded_sepal_length'] = iris[['sepal_length']].apply(pd.Series.round)
iris.head()
# + [markdown] id="PWcvURzBBgC7" colab_type="text"
# This was done with a built-in function, but a custom function can also be written and applied to a column. In the example below, the values are multiplied by 100. The alternative would be to loop over the values, transform the data, and then write it back. In Pandas, it is straightforward to apply custom functions instead.
# + id="hZsyxtKlBgC8" colab_type="code" colab={} outputId="cb8a327f-7ced-4db1-cf4f-be97fd8948f7"
def multiply_by_100(x):
"""Multiplies by 100"""
return x*100
iris['100x_sepal_length'] = iris[['sepal_length']].apply(multiply_by_100)
iris.head()
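# For simple elementwise arithmetic like this, Pandas also supports direct vectorized operations on a column, which avoids `apply` entirely and is usually faster; a sketch on a small illustrative frame:

```python
import pandas as pd

df = pd.DataFrame({"sepal_length": [5.0, 4.5, 6.25]})
# Vectorized arithmetic: no apply() needed for elementwise scaling
df["100x_sepal_length"] = df["sepal_length"] * 100
print(df)
```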
| Functional_Introduction_To_Python_Section_2(Functions).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:metal]
# language: python
# name: conda-env-metal-py
# ---
# # Multi-task Supervision Tutorial
# In this tutorial we demonstrate how to use the multi-task versions of the label model and end model. We do this with a simple synthetic dataset, focusing primarily on input/output interfaces of these models. In a future tutorial, we will demonstrate the multi-task workflow on a real-world problem with additional scale and complexity, and illustrate the benefits that come from jointly modeling the weak supervision.
# For multi-task problems, we execute our pipeline in five steps; for more detail see our latest working [technical draft](https://ajratner.github.io/assets/papers/mts-draft.pdf):
# 1. **Load Data:** As in the `Basics` tutorial, we only have access to unlabeled data points `X`, and noisy labels---which are now in the form of `t` matrices, one for each different _task_.
# 2. **Define Task Graph:** The `TaskGraph` defines the structure of logical relationships between tasks.
# 3. **Train Label Model:** The purpose of the `LabelModel` is again to estimate the unknown accuracies of the labeling functions, _without access to `Y`_, and then use this to denoise and combine them into a set of _probabilistic multi-task training labels_.
# 4. **Train End Model:** We can then use these training labels to supervise a multi-task learning (MTL) model, which optionally inherits network structure from the `TaskGraph`.
# 5. **Evaluate:** We evaluate this model on a held-out test set.
# ## Step 1: Load Data
# We first load our data.
#
# The data types for the multi-task setting mirror those of the single-task setting, but with an extra dimension for the number of tasks (t), and with the single-task cardinality (k) being replaced by multiple task-specific cardinalities (K_t):
#
# * X: a t-length list of \[n\]-dim iterables of end model inputs OR a single \[n\]-dim iterable of inputs if all tasks operate on the same input
# * Y: a t-length list of \[n\]-dim numpy.ndarray of target labels (Y[i] $\in$ {1,...,K_t})
# * L: a t-length list of \[n,m\] scipy.sparse matrices of noisy labels (L[i,j] $\in$ {0,...,K_t}), with label 0 reserved for abstentions
#
# And optionally (for use with some debugging/analysis tools):
# * D: a t-length list of \[n\]-dim iterables of human-readable examples (e.g. sentences) OR a single \[n\]-dim iterable of examples if all tasks operate on the same data
# We load data that has been pre-split into train/dev/test splits in 80/10/10 proportions.
import pickle
with open("data/multitask_tutorial.pkl", 'rb') as f:
Xs, Ys, Ls, Ds = pickle.load(f)
# ## Step 2: Define Task Graph
# The primary role of the task graph is to define a set of feasible target label vectors.
# For example, consider the following set of classification tasks, wherein we assign text entities to one of the given labels:
#
# T0: Y0 ∈ {PERSON, ORG}
# T1: Y1 ∈ {DOCTOR, OTHER PERSON, NOT APPLICABLE}
# T2: Y2 ∈ {HOSPITAL, OTHER ORG, NOT APPLICABLE}
#
# Observe that the tasks are related by logical implication relationships: if Y0 = PERSON,
# then Y2 = NOT APPLICABLE, since Y2 classifies ORGs. Thus, in this task structure, [PERSON, DOCTOR, NOT APPLICABLE] is a feasible label vector, whereas [PERSON, DOCTOR, HOSPITAL] is not.
# To reflect this feasible label set, we define our task graph for this problem with a TaskHierarchy, a subclass of TaskGraph which assumes that label K_t for each non-root node is the "NOT APPLICABLE" class.
from metal.multitask import TaskHierarchy
task_graph = TaskHierarchy(cardinalities=[2,3,3], edges=[(0,1), (0,2)])
# ## Step 3: Train Label Model
# We now pass our TaskGraph into the multi-task label model to instantiate a model with the appropriate structure.
from metal.multitask import MTLabelModel
label_model = MTLabelModel(task_graph=task_graph)
# We then train the model, computing the overlap matrix $O$ and estimating accuracies $\mu$...
label_model.train_model(Ls[0], n_epochs=200, print_every=20, seed=123)
# As with the single-task case, we can score this trained model to evaluate it directly, or use it to make predictions for our training set that will then be used to train a multi-task end model.
label_model.score((Ls[1], Ys[1]))
# Y_train_ps stands for "Y[labels]_train[split]_p[redicted]s[oft]"
Y_train_ps = label_model.predict_proba(Ls[0])
# ## Step 4: Train End Model
# As with the single-task end model, the multi-task end model consists of three components: input layers, middle layers, and task head layers. Again, each layer consists of a torch.nn.Module followed by various optional additional operators (e.g., a ReLU nonlinearity, batch normalization, and/or dropout).
#
# **Input layers**: The input module is an IdentityModule by default. If your tasks accept inputs of different types (e.g., one task over images and another over text), you may pass in a t-length list of input modules.
#
# **Middle layers**: The middle modules are nn.Linear by default and are shared by all tasks.
#
# **Head layers**: The t task head modules are nn.Linear modules by default. You may instead pass in a custom module to be used by all tasks or a t-length list of modules. These task heads are unique to each task, sharing no parameters with other tasks. Their output is fed to a set of softmax operators whose output dimensions are equal to the cardinalities for each task.
# Here we construct a simple graph with a single (identity) input module, two intermediate layers, and linear task heads attached to the top layer.
from metal.multitask import MTEndModel
import torch
use_cuda = torch.cuda.is_available()
end_model = MTEndModel([1000,100,10], task_graph=task_graph, seed=123)
end_model.train_model((Xs[0], Y_train_ps), dev_data=(Xs[1], Ys[1]), n_epochs=5, seed=123)
# ## Step 5: Evaluate
# When it comes to scoring our multi-task models, the mean task accuracy is reported by default.
# +
print("Label Model:")
score = label_model.score((Ls[2], Ys[2]))
print()
print("End Model:")
score = end_model.score((Xs[2], Ys[2]))
# -
# We can also, however, pass `reduce=None` to get back a list of task-specific accuracies.
scores = end_model.score((Xs[2], Ys[2]), reduce=None)
# And to get the predictions for all three tasks, we can call predict():
Y_p = end_model.predict(Xs[2])
Y_p
| tutorials/Multitask.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import requests
import pandas as pd
# -
df = pd.read_csv("kl_districtCode.csv")
# +
session = requests.Session()
response = session.get('https://dashboard.kerala.gov.in/covid/index.php')
with open('kerala-index.html', 'wb') as f:
f.write(response.content)
# -
print(session.cookies.get_dict())
# +
headers = {
'Referer': 'https://dashboard.kerala.gov.in/covid/index.php',
}
response = session.get('https://dashboard.kerala.gov.in/covid/dailyreporting-view-public-districtwise.php', headers=headers)
with open('kerala-districts.html', 'wb') as f:
f.write(response.content)
# +
headers = {
'Referer': 'https://dashboard.kerala.gov.in/covid/index.php',
}
response = session.get('https://dashboard.kerala.gov.in/covid/testing-view-public.php', headers=headers)
with open('kerala-testing.html', 'wb') as f:
f.write(response.content)
# -
for code in df["Code"]:
headers = {
'Referer': 'https://dashboard.kerala.gov.in/covid/dailyreporting-view-public-districtwise.php',
}
files = {
'district': (None, str(code)),
}
response = session.post('https://dashboard.kerala.gov.in/covid/dailyreporting-view-public-districtwise.php', headers=headers, files=files)
with open('kerala-'+str(code)+'.html', 'wb') as f:
f.write(response.content)
| INPUT/KL/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="342e4b36"
# # ECE-3 Lab 2
# + [markdown] id="53d2499e"
# ## Contents
# 1. [Vectors](#Vectors)
# 2. [Linear Functions](#Linear-Functions)
# 3. [Norms](#Norms)
# 4. [Distance](#Distance)
# + [markdown] id="89db74b1"
# ## Vectors
# + [markdown] id="cebbd06b"
# - $\color{blue}{\text{Definition }}$ An ordered list of numbers, similar to arrays in Python.
#
# - We say a **n-vector** has a total of *n* *elements/components*.
# - $\color{blue}{\text{NOTE }}$ Indexes run from $0$ to $n-1$.
# - Example :
# $$A = \begin{bmatrix}
# -6.9 \\
# -5 \\
# 1.2 \\
# -9.6
# \end{bmatrix}\:\:\:
# B = \begin{bmatrix}
# -6.9 ,-5, 1.2, -9.6
# \end{bmatrix}$$
#
# + id="6983279c" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1633654875360, "user_tz": 420, "elapsed": 8, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GioINv2YRmEhwfngXXbtoiIksMG-WuCXQmau-8D=s64", "userId": "11521119731528460253"}} outputId="3d7aed94-4919-414f-c4d3-b2293d277f65"
import numpy as np
A = np.array([[-6.9],
[-5],
[1.2],
[-9.6]])
print(A)
print(A.shape)
B = np.array([-6.9,-5,1.2,-9.6])
print(B)
print(B.shape)
# + [markdown] id="b00c80b1"
# $\color{blue}{\text{NOTE }}$
#
# `C = A`, where `C` and `A` are NumPy arrays, merely creates a reference to the original array `A`. Changes made to `A` thereafter **DO** show up in `C` as well.
# + id="90bf1fc4" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1633460753972, "user_tz": 420, "elapsed": 148, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "00734435821983708962"}} outputId="736847ba-d4c2-4a3b-db72-a4c6db178abd"
C = A # just creates a reference to A, does NOT create a copy
print('Before making any change to A')
print(C)
A[2] = 0
print("After making a change to A")
print(C) # notice the change made to A shows up in C as well
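# When an independent copy is actually wanted, NumPy provides `.copy()`; a minimal sketch:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
C = A.copy()  # allocates new memory; C no longer aliases A
A[0] = 99.0
print(C)  # C still holds the original values
```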
# + [markdown] id="v8SXBNJioh6S"
# $\color{#EF5645}{\text{Matrices}}$
# + id="ZmifuJ2loaUZ" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1633655310435, "user_tz": 420, "elapsed": 126, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GioINv2YRmEhwfngXXbtoiIksMG-WuCXQmau-8D=s64", "userId": "11521119731528460253"}} outputId="964f2ccb-9a58-403f-9ef1-7ed5f96b9452"
A = np.array([[1, 2, 3], [4, 5, 6]])
print("The dimension of the array is: ", A.ndim, '\n')
print("The total number of elements in the array are: ", A.size, '\n')
print("The shape of the array is: ", A.shape)
# + [markdown] id="357290d6"
# $\color{#EF5645}{\text{Block/Stacked Matrices}}$
# $$P = \begin{bmatrix}
# 1, 2, 3, 4
# \end{bmatrix}$$
# $$Q = \begin{bmatrix}
# 5, 6, 7, 8
# \end{bmatrix}$$
#
# $$C = \begin{bmatrix}
# 1, 2, 3, 4\\
# 5, 6, 7, 8
# \end{bmatrix}$$
#
# $$D = \begin{bmatrix}
# 1, 2, 3, 4, 5, 6, 7, 8
# \end{bmatrix}$$
#
#
#
# $\verb|Code|$
# ```python
# P = np.array([1,2,3,4])
# Q = np.array([5,6,7,8])
# C = np.concatenate([[P],[Q]]) #vertical
# D = np.concatenate([P,Q]) #horizontal
# ```
#
# $\color{#EF5645}{\text{Ones Vector}}$
# $$A = \begin{bmatrix}
# 1\: 1\: 1\\
# 1\: 1\: 1\\
# \end{bmatrix}$$
#
# ```python
# A = np.ones((2,3))
# ```
#
# $\color{#EF5645}{\text{Zero Vector}}$
# $$B = \begin{bmatrix}
# 0\: 0\: 0\: 0\\
# 0\: 0\: 0\: 0\\
# 0\: 0\: 0\: 0\\
# \end{bmatrix}$$
#
# ```python
# B = np.zeros((3,4))
# ```
#
# + id="7f5d8d88"
# Code demonstrating ones and zeros vectors
E = np.ones((3,4))
F = np.zeros((3,4))
P = np.array([1,2,3,4])
Q = np.array([5,6,7,8])
C = np.concatenate([[P],[Q]])
D = np.concatenate([P,Q])
print(E)
print(F)
print(C)
print(D)
# + [markdown] id="AuIh5K8hrcND"
# $\color{#EF5645}{\text{Some Basic Numpy Functions}}$
# + id="3yez-qdxrjul" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1633648235540, "user_tz": 420, "elapsed": 137, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GioINv2YRmEhwfngXXbtoiIksMG-WuCXQmau-8D=s64", "userId": "11521119731528460253"}} outputId="f277a7e3-57a8-4133-f23b-c698bbb75078"
a = np.array([3, 2, 1, 4])
print("The sum of the elements in a is: ", np.sum(a), '\n')
print("The minimum of a is: ", np.min(a), '\n')
print("If we sort a we get: ", np.sort(a), '\n')
print("We can also change the shape of a: ", np.reshape(a, (2, 2)), '\n')
b = np.array([5, 8, 7, 6])
print("We can merge it with another array like this: ", np.hstack((a, b)), '\n')
print("Or like this: ", np.vstack((a, b)))
# + [markdown] id="ada0d6b4"
# $\color{#EF5645}{\text{Inner Product}}$
# + [markdown] id="f340ce6d"
# $\color{blue}{\text{Definition }}$ Inner product (or dot product) of two n-vectors $a$ and $b$ is defined as :
# $$a^T b = a_1 b_1 + a_2 b_2 + a_3 b_3 + ... + a_n b_n$$
#
# - Denoted by : $〈a, b〉, 〈a|b〉, (a, b), a · b$.
#
# $\color{blue}{\text{Properties}}$
# - $a^T b = b^T a$. (Prove this in class!)
# - $(γa)^T b = γ(a^T b)$
# - $(a + b)^T c = a^T c + b^T c$
#
# $\verb|Code|$
# ```python
# a = np.array([1,2,3])
# b = np.array([8,2,4])
# c = np.inner(a,b)
# ```
# + id="55b552a4" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1633656466871, "user_tz": 420, "elapsed": 172, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GioINv2YRmEhwfngXXbtoiIksMG-WuCXQmau-8D=s64", "userId": "11521119731528460253"}} outputId="2a763753-90a9-44ca-f9c8-540230508f03"
# Code demonstrating inner product of two arrays
import numpy as np
a = np.array([1,2,3])
b = np.array([8,2,4])
# 3 different ways to compute inner product
c = np.inner(a,b)
d = a.T @ b
e = np.sum(a * b)
print(c)
print(d)
print(e)
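# The three properties listed above can be verified numerically for concrete vectors; a quick sketch:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([8, 2, 4])
c = np.array([0, 1, 5])
gamma = 2.5

# Symmetry: a^T b == b^T a
assert np.inner(a, b) == np.inner(b, a)
# Homogeneity: (gamma a)^T b == gamma (a^T b)
assert np.isclose(np.inner(gamma * a, b), gamma * np.inner(a, b))
# Additivity: (a + b)^T c == a^T c + b^T c
assert np.inner(a + b, c) == np.inner(a, c) + np.inner(b, c)
print("all three properties hold")
```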
# + [markdown] id="456bb60d"
# $\color{blue}{\text{Exercise - 1}}$
# + [markdown] id="7393000b"
# A typical course utilizes several different weights for marks scored by a student across various tests such as quizzes, midterm and final over a quarter. Use the weight matrix W and marks obtained M to find the effective/total score obtained by a student.
#
# **Weight Matrix W**
#
# | Q | M | F |
# |-----|-----|-----|
# | 20 % | 30 % | 50 %|
#
#
# **Marks Obtained M**
#
# | Q | M | F |
# |----|----|----|
# | 15 | 45 | 70 |
#
# *Hint: Use inner product*
#
#
# + id="d1ee2a25" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1633984054921, "user_tz": 420, "elapsed": 138, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GioINv2YRmEhwfngXXbtoiIksMG-WuCXQmau-8D=s64", "userId": "11521119731528460253"}} outputId="ef33883c-8d4f-49af-f7bc-20448f7a026c"
# Exercise - 1
# Add your code below
W = np.array([0.2, 0.3, 0.5])
M = np.array([15, 45, 70])
a= np.inner(W,M)
b= W.T @M
print(a)
print(b)
# + [markdown] id="53465674"
# [Go back to contents](#Contents)
# + [markdown] id="73f99679"
# ## Linear Functions
# + [markdown] id="6ad0d51a"
# $\color{#EF5645}{\text{Definition }}$
# We say that $f$ satistifies the superposition property if:
#
# $$f(\alpha x + \beta y) = \alpha f(x) + \beta f(y)$$
#
# holds for all scalars $\alpha, \beta$ and all $n$-vectors $x, y$.
#
# Satisfies the superposition property $\implies$ linear
#
# **Example:** All inner product functions are linear functions. (Prove this)
# + [markdown] id="bba5096d"
# ## Affine Functions
# + [markdown] id="038ede3a"
# $\color{#EF5645}{\text{Definition }}$
# A function that is linear plus a constant is called affine. Its general form is:
#
# $$f(x) = a^Tx+b \quad \text{with $a$ an $n$-vector and $b$ a scalar}$$
#
# + [markdown] id="296e812e"
# ## Gradient of a function
#
#
# $$\nabla f(x) = \left(\frac{\partial f}{\partial x_1}(x), ..., \frac{\partial f}{\partial x_n}(x)\right).$$
#
# where $\frac{\partial}{\partial x_i}$ is the partial derivative of $f$ with respect to the component $i$ of $x$.
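# The gradient can be checked numerically with a central-difference approximation; a sketch for the illustrative function $f(x) = x_1^2 + 3x_2$, whose analytic gradient is $(2x_1, 3)$:

```python
import numpy as np

def f(x):
    return x[0] ** 2 + 3 * x[1]

def numerical_gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x"""
    grad = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

g = numerical_gradient(f, np.array([2.0, 1.0]))
print(g)  # approximately [4. 3.]
```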
# + [markdown] id="9eb5d49d"
# [Go back to contents](#Contents)
# + [markdown] id="4fe0012d"
# ## Norms
# + [markdown] id="2ddceb49"
# $\color{#EF5645}{\text{Euclidean Norm}}$
#
# Let $x$ be an n-dimensional vector, i.e. $$x = [x_1, x_2, ... x_n]$$
#
# Then the Euclidean norm or $\ell_2$ norm (or just norm here) is given by:
#
# $$||x||_2 = ||x|| = \sqrt{x_1^2 + x_2^2 + ... + x_n^2} = \sqrt{x^T x}$$
#
# *Think about what $||x||_p$ would be in general*
# + id="621041e4" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1633658489882, "user_tz": 420, "elapsed": 423, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GioINv2YRmEhwfngXXbtoiIksMG-WuCXQmau-8D=s64", "userId": "11521119731528460253"}} outputId="26e2fac0-8417-4373-8af9-2a52c463f520"
# Practice using numpy.linalg below
import numpy as np
x = np.array([1,2,7,-4])
# All three methods have the same result, verify !
print(np.linalg.norm(x))
print(np.sqrt(np.inner(x,x)))
print(np.sqrt(sum(x **2)))
# + [markdown] id="d470264e"
# $\color{#EF5645}{\text{Mean Square Value}}$
#
# $$ \frac{x_1^2+ x_2^2+ ... + x_n^2}{n} = \frac{||x||^2}{n}$$
#
# $\color{#EF5645}{\text{Root Mean Square (RMS) Value}}$
#
# $$rms(x) = \sqrt{\frac{x_1^2+ x_2^2+ ... + x_n^2}{n}}=\frac{||x||}{\sqrt{n}}$$
#
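# The RMS formula translates directly into NumPy; a quick sketch using the same vector as above:

```python
import numpy as np

x = np.array([1.0, 2.0, 7.0, -4.0])

# rms(x) = ||x|| / sqrt(n); equivalently sqrt(mean(x_i^2))
rms = np.linalg.norm(x) / np.sqrt(len(x))
print(rms)
```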
# + [markdown] id="45bbb4f4"
# $\color{blue}{\text{Exercise - 2}}$
# + id="5ebfce4a" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1633984029558, "user_tz": 420, "elapsed": 289, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GioINv2YRmEhwfngXXbtoiIksMG-WuCXQmau-8D=s64", "userId": "11521119731528460253"}} outputId="29f5e9fe-75ad-43c2-9af4-a7c404f4d45c"
# Exercise - 2 - write a function for L-P norm
# DO NOT CHANGE SECTION 1 and SECTION 3
# Add your code to SECTION 2
# Section-1 (DON'T CHANGE)
import numpy as np
arr = np.array([4,5,2,7])
# Section-2
def l_p_norm(arr,p):
'''arr contains the array of numbers i.e vector
Write a function to find l-p norm of vector arr'''
# insert your code below
x=np.absolute(arr)
return np.power(np.sum(x**p),(1/p))
# Section-3 (DON'T CHANGE)
try:
if(int(l_p_norm(arr,4))==7):
print("Correct !")
else:
print("Something is wrong, please try again !")
#break
except TypeError:
print("Complete implementing the function and try again")
# + [markdown] id="83821516"
# $\color{#EF5645}{\text{Norm of block vectors}}$
# + [markdown] id="b3a1bc75"
# Assume $d =[a, b, c]$ where $a,b,c$ are n-dimensional vectors. Then,
# $$||d||^2=a^T a + b^T b + c^T c = ||a||^2 + ||b||^2 + ||c||^2$$
#
# In other words:
#
# *Norm-squared value of stacked vector is equal to the sum of norm-squared values of individual vectors.*
# + [markdown] id="a5d452fe"
# $\color{blue}{\text{Exercise - 3}}$
# > **Hint:** You can reuse `l_p_norm(arr,p)` from the previous cell
# + id="a650555c" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1633984033137, "user_tz": 420, "elapsed": 165, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GioINv2YRmEhwfngXXbtoiIksMG-WuCXQmau-8D=s64", "userId": "11521119731528460253"}} outputId="35e5f426-a4bd-4254-f84d-e6e92826430d"
# Exercise 3 - Write a function to find l-2 norm of a block vector
# DO NOT CHANGE SECTION 1 and SECTION 3
# Add your code to SECTION 2
# Section-1 (DON'T CHANGE)
import numpy as np
vecA = np.array([5,4,1])
vecB = np.array([6,6,8])
vecC = np.array([1,2,9])
# stacking vecA, vecB, vecC side-by-side
mat = np.stack((vecA,vecB,vecC),axis=1)
print(mat)
# Section-2
def block_norm(mat):
"""This function finds the l-2 norm of block vectors"""
# insert your code below
    s = 0
for i in range(mat.shape[1]):
col = mat[:,i]
s += (l_p_norm(col,2)**2)
return np.sqrt(s)
# Section-3 (DON'T CHANGE)
try:
if(int(block_norm(mat))==16):
print("Correct !")
else:
print("Something is wrong, please try again !")
#break
except TypeError:
print("Complete implementing the function and try again")
# + [markdown] id="7227a203"
# [Go back to contents](#Contents)
# + [markdown] id="21859b78"
# ## Distance
# + [markdown] id="2eb037ab"
# The distance between two 2-dimensional points $(x_1,y_1), (x_2,y_2)$ is given by:
# $$d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$$
#
# The code to do this is shown below:
# + id="80e127fa"
# This code finds the distance between two points (x1,y1) and (x2,y2)
# Note that this is only for 2-dimensional points
import numpy as np
p1 = np.array([1,2])
p2 = np.array([3,7])
def dist_2d(p1,p2):
distance_squared = (p2[0]-p1[0])**2 + (p2[1]-p1[1])**2
return (np.sqrt(distance_squared))
print("Distance between", p1,"and", p2,"is:", dist_2d(p1,p2))
# + [markdown] id="b93199f0"
# In general, the distance between two $n$-dimensional points $A \:(a_1,a_2,...,a_n)$ and $B\:(b_1,b_2,...,b_n)$ is given by:
# $$d = ||A-B||_2 = \sqrt{(b_1-a_1)^2 + (b_2-a_2)^2 + ... + (b_n-a_n)^2}$$
# + [markdown] id="6089c779"
# Now build on the previous code to find the distance between two $n$-dimensional points.
# + [markdown] id="7b751c81"
# $\color{blue}{\text{Exercise - 4}}$
# + id="b08bb2d0" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1633984066772, "user_tz": 420, "elapsed": 182, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/<KEY>", "userId": "11521119731528460253"}} outputId="86e9ca75-7c90-4fe1-bc7c-55e9f8824060"
# Exercise-4 : Implement a function to find the distance between two n-dimensional points
# DO NOT CHANGE SECTION 2
# Add your code to SECTION 1
# Section-1
import numpy as np
def dist_nd(p1, p2):
"""This function finds the distance between two n-dimensional vectors"""
#insert your code below
ss = 0
for i in range(len(p1)):
ss += (p1[i] - p2[i])**2
return ss**(1/2)
# Section-2 (DON'T CHANGE)
try:
p1 = np.array([1,2])
p2 = np.array([3,7])
if(int(dist_nd(p1,p2))==5):
print('Test case #1:')
print("Result : Correct")
else:
print('Test case #1:')
print("Result: Incorrect")
print("You found distance between", p1,"and", p2,"is:", dist_nd(p1,p2))
print('\n\nTest case #2:')
p1 = np.array([1,2,9,-2])
p2 = np.array([3,7,-4,-9])
if(int(dist_nd(p1,p2))==15):
print("Result : Correct")
else:
print("Result: Incorrect")
print("You found distance between", p1,"and", p2,"is:", dist_nd(p1,p2))
except TypeError:
print("Complete implementing the function and try again")
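For numeric inputs, the same distance can be computed without an explicit loop; a sketch using NumPy's `linalg.norm`, which is equivalent to the looped version above:

```python
import numpy as np

def dist_nd_vectorized(p1, p2):
    """Euclidean distance between two n-dimensional points, without an explicit loop."""
    return np.linalg.norm(np.asarray(p1) - np.asarray(p2))

print(dist_nd_vectorized([1, 2], [3, 7]))  # sqrt(4 + 25) = sqrt(29)
```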
# + [markdown] id="20305011"
# [Go back to contents](#Contents)
# lab_notebook_solution/lab2_solved.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#1 The One-Class SVM on Boston dataset
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.svm import OneClassSVM
import matplotlib.pyplot as plt
import matplotlib.font_manager
from sklearn.datasets import load_boston
# Get data (note: load_boston was removed in scikit-learn 1.2; run with an older version or substitute another dataset)
X1 = load_boston()['data'][:, [8, 10]] # two clusters
X2 = load_boston()['data'][:, [5, 12]] # "banana"-shaped
# Define "classifiers" to be used
classifiers = {
"Empirical Covariance": EllipticEnvelope(support_fraction=1.,
contamination=0.261),
"Robust Covariance (Minimum Covariance Determinant)":
EllipticEnvelope(contamination=0.261),
"OCSVM": OneClassSVM(nu=0.261, gamma=0.05)}
colors = ['m', 'g', 'b']
legend1 = {}
legend2 = {}
# Learn a frontier for outlier detection with several classifiers
xx1, yy1 = np.meshgrid(np.linspace(-8, 28, 500), np.linspace(3, 40, 500))
xx2, yy2 = np.meshgrid(np.linspace(3, 10, 500), np.linspace(-5, 45, 500))
for i, (clf_name, clf) in enumerate(classifiers.items()):
plt.figure(1)
clf.fit(X1)
Z1 = clf.decision_function(np.c_[xx1.ravel(), yy1.ravel()])
Z1 = Z1.reshape(xx1.shape)
legend1[clf_name] = plt.contour(
xx1, yy1, Z1, levels=[0], linewidths=2, colors=colors[i])
plt.figure(2)
clf.fit(X2)
Z2 = clf.decision_function(np.c_[xx2.ravel(), yy2.ravel()])
Z2 = Z2.reshape(xx2.shape)
legend2[clf_name] = plt.contour(
xx2, yy2, Z2, levels=[0], linewidths=2, colors=colors[i])
legend1_values_list = list(legend1.values())
legend1_keys_list = list(legend1.keys())
# Plot the results (= shape of the data points cloud)
plt.figure(1) # two clusters
plt.title("Outlier detection on a real data set (boston housing)")
plt.scatter(X1[:, 0], X1[:, 1], color='black')
bbox_args = dict(boxstyle="round", fc="0.8")
arrow_args = dict(arrowstyle="->")
plt.annotate("several confounded points", xy=(24, 19),
xycoords="data", textcoords="data",
xytext=(13, 10), bbox=bbox_args, arrowprops=arrow_args)
plt.xlim((xx1.min(), xx1.max()))
plt.ylim((yy1.min(), yy1.max()))
plt.legend((legend1_values_list[0].collections[0],
legend1_values_list[1].collections[0],
legend1_values_list[2].collections[0]),
(legend1_keys_list[0], legend1_keys_list[1], legend1_keys_list[2]),
loc="upper center",
prop=matplotlib.font_manager.FontProperties(size=12))
plt.ylabel("accessibility to radial highways")
plt.xlabel("pupil-teacher ratio by town")
legend2_values_list = list(legend2.values())
legend2_keys_list = list(legend2.keys())
plt.figure(2) # "banana" shape
plt.title("Outlier detection on a real data set (boston housing)")
plt.scatter(X2[:, 0], X2[:, 1], color='black')
plt.xlim((xx2.min(), xx2.max()))
plt.ylim((yy2.min(), yy2.max()))
plt.legend((legend2_values_list[0].collections[0],
legend2_values_list[1].collections[0],
legend2_values_list[2].collections[0]),
(legend2_keys_list[0], legend2_keys_list[1], legend2_keys_list[2]),
loc="upper center",
prop=matplotlib.font_manager.FontProperties(size=12))
plt.ylabel("% lower status of the population")
plt.xlabel("average number of rooms per dwelling")
plt.show()
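The frontier idea behind the One-Class SVM can be checked on a minimal, self-contained sketch with made-up synthetic data and hypothetical parameter values: points drawn from a tight blob are learned as "normal", and points far away are flagged.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Train on a tight Gaussian blob centred at the origin.
rng = np.random.RandomState(0)
X_train = 0.3 * rng.randn(200, 2)

clf = OneClassSVM(nu=0.1, gamma=0.1).fit(X_train)
# predict returns +1 for inliers and -1 for outliers
print(clf.predict([[0.0, 0.0], [4.0, 4.0]]))
```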
# +
# K-means Clustering on iris dataset
import numpy as np
import matplotlib.pyplot as plt
# Though the following import is not directly being used, it is required
# for 3D projection to work
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cluster import KMeans
from sklearn import datasets
np.random.seed(5)
iris = datasets.load_iris()
X = iris.data
y = iris.target
estimators = [('k_means_iris_8', KMeans(n_clusters=8)),
('k_means_iris_3', KMeans(n_clusters=3)),
('k_means_iris_bad_init', KMeans(n_clusters=3, n_init=1,
init='random'))]
fignum = 1
titles = ['8 clusters', '3 clusters', '3 clusters, bad initialization']
for name, est in estimators:
fig = plt.figure(fignum, figsize=(4, 3))
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
est.fit(X)
labels = est.labels_
    ax.scatter(X[:, 3], X[:, 0], X[:, 2],
               c=labels.astype(float), edgecolor='k')  # np.float was removed in NumPy 1.24; use the builtin float
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
ax.set_title(titles[fignum - 1])
ax.dist = 12
fignum = fignum + 1
# Plot the ground truth
fig = plt.figure(fignum, figsize=(4, 3))
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)
for name, label in [('Setosa', 0),
('Versicolour', 1),
('Virginica', 2)]:
ax.text3D(X[y == label, 3].mean(),
X[y == label, 0].mean(),
X[y == label, 2].mean() + 2, name,
horizontalalignment='center',
bbox=dict(alpha=.2, edgecolor='w', facecolor='w'))
# Reorder the labels to have colors matching the cluster results
y = np.choose(y, [1, 2, 0]).astype(float)  # np.float was removed in NumPy 1.24; use the builtin float
ax.scatter(X[:, 3], X[:, 0], X[:, 2], c=y, edgecolor='k')
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
ax.set_xlabel('Petal width')
ax.set_ylabel('Sepal length')
ax.set_zlabel('Petal length')
ax.set_title('Ground Truth')
ax.dist = 12
fig.show()
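The mechanics of `KMeans` are easier to inspect on a tiny hand-made dataset (a sketch with made-up points): two well-separated blobs should each receive a single cluster label.

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated blobs of two points each.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Points in the same blob end up with the same label.
print(km.labels_)
```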
# +
#3 SVM on my dataset
import pandas as pd
import numpy as np
import math
from sklearn import svm
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
#2.1
adult_origin = pd.read_csv('C:/Users/zren3/OneDrive/Desktop/Study/CSCI6364/HW3/adult.csv', header = 0)
adult_cleaned = adult_origin[(adult_origin.astype(str) != '?').all(axis = 1)].copy()  # .copy() avoids SettingWithCopyWarning
del adult_cleaned['education']  # 'education.num' already encodes the same information
#2.2
adult_digitization = pd.DataFrame()
target_columns = ['workclass', 'marital.status', 'marriage', 'occupation', 'relationship', 'race', 'sex',
'native.country',
'income']
for column in adult_cleaned.columns:
if column in target_columns:
unique_value = list(enumerate(np.unique(adult_cleaned[column])))
dict_data = {key: value for value, key in unique_value}
adult_digitization[column] = adult_cleaned[column].map(dict_data)
else:
adult_digitization[column] = adult_cleaned[column]
#2.3
skf= StratifiedKFold(n_splits=10)
#2.4
Datamatrix = np.array(adult_digitization)
lst = list(Datamatrix[:, -1])  # class labels (income) as a plain list
class IG():
def __init__(self,X,y):
X = np.array(X)
n_feature = np.shape(X)[1]
n_y = len(y)
orig_H = 0
for i in set(y):
orig_H += -(y.count(i)/n_y)*math.log(y.count(i)/n_y)
condi_H_list = []
for i in range(n_feature):
feature = X[:,i]
            sorted_feature = sorted(feature)
            # Candidate thresholds: midpoints between consecutive sorted values
            threshold = [(sorted_feature[inde-1] + sorted_feature[inde])/2 for inde in range(len(feature)) if inde != 0]
thre_set = set(threshold)
if float(max(feature)) in thre_set:
thre_set.remove(float(max(feature)))
if min(feature) in thre_set:
thre_set.remove(min(feature))
pre_H = 0
for thre in thre_set:
                lower = [y[s] for s in range(len(feature)) if feature[s] < thre]
                higher = [y[s] for s in range(len(feature)) if feature[s] > thre]
                H_l = 0
                for l in set(lower):
                    H_l += -(lower.count(l) / len(lower))*math.log(lower.count(l) / len(lower))
                H_h = 0
                for h in set(higher):
                    H_h += -(higher.count(h) / len(higher))*math.log(higher.count(h) / len(higher))
                # Weighted entropy after the split
                temp_condi_H = len(lower)/n_y * H_l + len(higher)/n_y * H_h
condi_H = orig_H - temp_condi_H
pre_H = max(pre_H,condi_H)
condi_H_list.append(pre_H)
self.IG = condi_H_list
def getIG(self):
return self.IG
result = IG(adult_digitization.iloc[:,0:-1],lst).getIG()
result
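The `orig_H` quantity computed inside `IG` is the Shannon entropy of the label list; a standalone sketch of that single piece, using the same natural-log convention:

```python
import math

def entropy(labels):
    """Shannon entropy (natural log) of a list of class labels."""
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log(labels.count(c) / n)
                for c in set(labels))

print(entropy([0, 1, 0, 1]))  # a 50/50 split gives ln(2) ~= 0.693
```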
# +
#4 Random forest on my dataset
from sklearn import preprocessing
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score
training_set = adult_digitization.iloc[:,0:-1]
training_set_scaled = preprocessing.scale(training_set)
test_set = adult_digitization['income']
X_train,X_test,Y_train,Y_test = train_test_split(training_set_scaled,test_set,test_size = 0.2,random_state = 1)
clf = RandomForestClassifier(max_depth=2, random_state=0)
clf.fit(X_train, Y_train)
test_predictions = clf.predict(X_test)
p = precision_score(Y_test, test_predictions, average='binary')
r = recall_score(Y_test, test_predictions, average='binary')
f1score = f1_score(Y_test, test_predictions, average='binary')
print("precision: %s"% p)
print("recall: %s"% r)
print("f1score: %s"% f1score)
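The three scores printed above reduce to counts of true positives, false positives and false negatives; a pure-stdlib sketch of the same formulas on made-up labels:

```python
def precision_recall_f1(y_true, y_pred):
    """Binary precision, recall and F1 from first principles (positive class = 1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(precision_recall_f1([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))  # (2/3, 2/3, 2/3)
```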
# +
#5 Feature selection on Boston dataset
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_boston
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
# Load the boston dataset (note: load_boston was removed in scikit-learn 1.2;
# run with an older version or substitute another regression dataset).
boston = load_boston()
X, y = boston['data'], boston['target']
# We use the base estimator LassoCV since the L1 norm promotes sparsity of features.
clf = LassoCV()
# Set a minimum threshold of 0.25
sfm = SelectFromModel(clf, threshold=0.25)
sfm.fit(X, y)
n_features = sfm.transform(X).shape[1]
# Reset the threshold till the number of features equals two.
# Note that the attribute can be set directly instead of repeatedly
# fitting the metatransformer.
while n_features > 2:
sfm.threshold += 0.1
X_transform = sfm.transform(X)
n_features = X_transform.shape[1]
# Plot the selected two features from X.
plt.title(
"Features selected from Boston using SelectFromModel with "
"threshold %0.3f." % sfm.threshold)
feature1 = X_transform[:, 0]
feature2 = X_transform[:, 1]
plt.plot(feature1, feature2, 'r.')
plt.xlabel("Feature number 1")
plt.ylabel("Feature number 2")
plt.ylim([np.min(feature2), np.max(feature2)])
plt.show()
# +
#6 LDA and PCA on iris
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
iris = datasets.load_iris()
X = iris.data
y = iris.target
target_names = iris.target_names
pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)
lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit(X, y).transform(X)
# Percentage of variance explained for each components
print('explained variance ratio (first two components): %s'
% str(pca.explained_variance_ratio_))
plt.figure()
colors = ['navy', 'turquoise', 'darkorange']
lw = 2
for color, i, target_name in zip(colors, [0, 1, 2], target_names):
plt.scatter(X_r[y == i, 0], X_r[y == i, 1], color=color, alpha=.8, lw=lw,
label=target_name)
plt.legend(loc='best', shadow=False, scatterpoints=1)
plt.title('PCA of IRIS dataset')
plt.figure()
for color, i, target_name in zip(colors, [0, 1, 2], target_names):
plt.scatter(X_r2[y == i, 0], X_r2[y == i, 1], alpha=.8, color=color,
label=target_name)
plt.legend(loc='best', shadow=False, scatterpoints=1)
plt.title('LDA of IRIS dataset')
plt.show()
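On data whose columns have very different scales, the leading principal component should dominate; a quick sketch (with made-up data) that checks the `explained_variance_ratio_` attribute used above:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# Three independent features with standard deviations 5, 1 and 0.1.
X = rng.randn(500, 3) * np.array([5.0, 1.0, 0.1])

pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)  # the first component carries almost all variance
```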
# +
#7 Univariate Feature Selection on iris dataset
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, svm
from sklearn.feature_selection import SelectPercentile, f_classif
# #############################################################################
# Import some data to play with
# The iris dataset
iris = datasets.load_iris()
# Some noisy data not correlated
E = np.random.uniform(0, 0.1, size=(len(iris.data), 20))
# Add the noisy data to the informative features
X = np.hstack((iris.data, E))
y = iris.target
plt.figure(1)
plt.clf()
X_indices = np.arange(X.shape[-1])
# #############################################################################
# Univariate feature selection with F-test for feature scoring
# We use the default selection function: the 10% most significant features
selector = SelectPercentile(f_classif, percentile=10)
selector.fit(X, y)
scores = -np.log10(selector.pvalues_)
scores /= scores.max()
plt.bar(X_indices - .45, scores, width=.2,
label=r'Univariate score ($-Log(p_{value})$)', color='darkorange',
edgecolor='black')
# #############################################################################
# Compare to the weights of an SVM
clf = svm.SVC(kernel='linear')
clf.fit(X, y)
svm_weights = (clf.coef_ ** 2).sum(axis=0)
svm_weights /= svm_weights.max()
plt.bar(X_indices - .25, svm_weights, width=.2, label='SVM weight',
color='navy', edgecolor='black')
clf_selected = svm.SVC(kernel='linear')
clf_selected.fit(selector.transform(X), y)
svm_weights_selected = (clf_selected.coef_ ** 2).sum(axis=0)
svm_weights_selected /= svm_weights_selected.max()
plt.bar(X_indices[selector.get_support()] - .05, svm_weights_selected,
width=.2, label='SVM weights after selection', color='c',
edgecolor='black')
plt.title("Comparing feature selection")
plt.xlabel('Feature number')
plt.yticks(())
plt.axis('tight')
plt.legend(loc='upper right')
plt.show()
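The univariate F-test behaviour can be verified on a toy dataset where one feature tracks the class label and one is pure noise (a sketch with made-up data):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.RandomState(0)
y = np.array([0] * 50 + [1] * 50)
informative = y + 0.1 * rng.randn(100)  # tracks the class label closely
noise = rng.randn(100)                  # unrelated to the class label
X = np.column_stack([informative, noise])

selector = SelectKBest(f_classif, k=1).fit(X, y)
print(selector.get_support())  # only the informative feature survives
```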
# +
#8 knn on my dataset
import pandas as pd
from sklearn import neighbors  # note: sklearn.cross_validation was removed in 0.20; use sklearn.model_selection instead
train_data = pd.read_csv('C:\\Users\\zren3\\OneDrive\\Desktop\\Study\\CSCI6364\\Data\\train.csv')
test_data = pd.read_csv('C:\\Users\\zren3\\OneDrive\\Desktop\\Study\\CSCI6364\\Data\\test.csv')
aa = pd.read_csv('C:\\Users\\zren3\\OneDrive\\Desktop\\Study\\CSCI6364\\Data\\sample_submission.csv')
featureList = train_data.iloc[:,1:]
labelList = train_data.iloc[:,[0]]
featureList.describe()
knn = neighbors.KNeighborsClassifier()
knn.fit(featureList,labelList)
predict_test = knn.predict(test_data)
result_1 = pd.DataFrame({'ImageId':aa['ImageId'],'Label':predict_test})
result_1.to_csv('C:\\Users\\zren3\\OneDrive\\Desktop\\Study\\CSCI6364\\Data\\result_1.csv',index = False)
from sklearn.metrics import confusion_matrix
# A confusion matrix needs the true labels, which the Kaggle test set does not include;
# evaluate on a labelled hold-out split of the training data instead, e.g.
# confusion_matrix(y_holdout, knn.predict(X_holdout))  # X_holdout/y_holdout: a hypothetical labelled split
# +
#9 Gaussian process classification on iris dataset
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
y = np.array(iris.target, dtype=int)
h = .02 # step size in the mesh
kernel = 1.0 * RBF([1.0])
gpc_rbf_isotropic = GaussianProcessClassifier(kernel=kernel).fit(X, y)
kernel = 1.0 * RBF([1.0, 1.0])
gpc_rbf_anisotropic = GaussianProcessClassifier(kernel=kernel).fit(X, y)
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
titles = ["Isotropic RBF", "Anisotropic RBF"]
plt.figure(figsize=(10, 5))
for i, clf in enumerate((gpc_rbf_isotropic, gpc_rbf_anisotropic)):
# Plot the predicted probabilities. For that, we will assign a color to
# each point in the mesh [x_min, m_max]x[y_min, y_max].
plt.subplot(1, 2, i + 1)
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape((xx.shape[0], xx.shape[1], 3))
plt.imshow(Z, extent=(x_min, x_max, y_min, y_max), origin="lower")
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=np.array(["r", "g", "b"])[y],
edgecolors=(0, 0, 0))
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.title("%s, LML: %.3f" %
(titles[i], clf.log_marginal_likelihood(clf.kernel_.theta)))
plt.tight_layout()
plt.show()
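The RBF kernel that drives both classifiers is just a smooth similarity function, k(x, x') = exp(-||x - x'||^2 / (2 l^2)); a sketch evaluating it directly on a few 1-D points:

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

kernel = RBF(length_scale=1.0)
X = np.array([[0.0], [1.0], [3.0]])
K = kernel(X)  # Gram matrix: K[i, j] = exp(-||x_i - x_j||^2 / 2)

print(np.round(K, 3))
```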
# +
#10 Neural Networks
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
h = .02 # step size in the mesh
alphas = np.logspace(-5, 3, 5)
names = ['alpha ' + str(i) for i in alphas]
classifiers = [MLPClassifier(alpha=i, random_state=1) for i in alphas]
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
random_state=0, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
datasets = [make_moons(noise=0.3, random_state=0),
make_circles(noise=0.2, factor=0.5, random_state=1),
linearly_separable]
figure = plt.figure(figsize=(17, 9))
i = 1
# iterate over datasets
for X, y in datasets:
# preprocess dataset, split into training and test part
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# just plot the dataset first
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
# Plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
i += 1
# iterate over classifiers
for name, clf in zip(names, classifiers):
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
# Put the result into a color plot
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)
# Plot also the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright,
edgecolors='black', s=25)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
alpha=0.6, edgecolors='black', s=25)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(name)
ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
size=15, horizontalalignment='right')
i += 1
figure.subplots_adjust(left=.02, right=.98)
plt.show()
# Assignments/HW3.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.4 64-bit
# name: python374jvsc74a57bd0021d9f4f6a0c9e23e32c4246ac82593951ffad9baab3e58c0c69e8a8c06b339b
# ---
# # Python | try&except
# Python uses try and except to handle errors gracefully. A graceful exit (or graceful handling) of errors is a simple programming idiom: a program detects a serious error condition and "exits gracefully", in a controlled manner. Often the program prints a descriptive error message to a terminal or log as part of the graceful exit, which makes our application more robust. The cause of an exception is often external to the program itself; examples include incorrect input, a wrong file name, a file that cannot be found, or a malfunctioning IO device. Graceful handling of errors prevents our applications from crashing.
# ```python
# try:
# code in this block if things go well
# except:
# code in this block run if things go wrong
# ```
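Beyond the two basic blocks, `try` also supports `else` (runs only when no exception was raised) and `finally` (always runs, whether an exception occurred or not); a short sketch:

```python
value_error_seen = False

try:
    value = int("42")                    # this succeeds
except ValueError:
    value_error_seen = True              # runs only on failure
else:
    message = "conversion succeeded"     # runs only when no exception was raised
finally:
    cleaned_up = True                    # always runs, success or failure

print(value, message, cleaned_up)
```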
# ### Exercise 1.
#
# 1. In the 4 cells below, modify the code to catch the error and print a meaningful message that will alert the user what went wrong. You may catch the error using a general except or a specific except for the error caused by the code.
# +
# Modify the code below:
try:
print(some_string)
except Exception as error:
print("you need to define some_string. Tip: some_string = ......")
# +
# Modify the code below:
for i in ['a','b','c']:
try:
print (i**2)
except Exception as error:
        print("You cannot compute the exponent of a string")
# +
# Modify the code below:
x = 5
y = 0
try:
z = x/y
except Exception as error:
    print('You cannot divide a number by zero')
# +
# Modify the code below:
abc=[10,20,20]
try:
print(abc[3])
except Exception as error:
    print('The list does not have that many elements; try an index closer to the start')
# -
# ### Exercise 2.
# In the 3 cells below, add an if statement that will handle both types of input allowed in the functions.
# +
import math # import a library called math with functions, like sqrt()
# Modify the code below to handle positive and negative numbers by adding an if statement and performing a transformation:
def sqrt_for_all(x):
    # This function will take any real number and return the square root of its magnitude
    # Input: real number
    # Output: real number
    # Sample Input: -4
    # Sample Output: 2.0
    if x < 0:
        x = -x  # transform negatives to their magnitude
    return math.sqrt(x)

sqrt_for_all(-1)
# +
# Modify the code below to handle zero as well. In the case of zero, return zero
def divide(x, y):
    # This function will take any two real numbers and return their quotient. If the denominator is zero, we return zero
    # Input: real number
    # Output: real number
    # Sample Input: 5, 1
    # Sample Output: 5.0
    if y == 0:
        return 0
    return x / y

divide(5, 0)
# +
# Modify the function below so that it will take either a number and a list, or two numbers.
# If we take two numbers, add them together and return a list of length 1.
# Otherwise, add the number to every element of the list and return the resulting list
def add_elements(a, l):
    # This function takes either two numbers or a list and a number and adds the number to all elements of the list
    # If the function only takes two numbers, it returns a list of length one that is the sum of the numbers
    # Input: number and list or two numbers
    # Output: list
    # Sample Input: 5, 6
    # Sample Output: [11]
    if not isinstance(l, list):
        l = [l]
    return [a + element for element in l]

add_elements(5, 6)
# -
# ### Exercise 3.
#
# Write a function that asks for an integer and prints its square. Keep asking until the user enters a valid integer.
#
# Use a `while` loop with a `try/except` block to account for incorrect inputs.
def cuadrado():
    # Keep asking until the user enters a valid integer, then print its square
    while True:
        x = input("Enter an integer: ")
        try:
            print(int(x)**2)
            break
        except ValueError:
            print('You must enter an integer')

cuadrado()
# ### Bonus track.
#
# 1. Solve this kata in **codewars**.
#
# https://www.codewars.com/kata/560fbc2d636966b21e00009e
# 2. Make a program using `while` that generates a **deck of cards** of 4 different suits. The deck must have 40 cards.
#
# Develop the program in a `.py` file that will be run through the terminal.
#
# Then, import the module to this notebook and show the deck you've generated below.
# 
# week3_course_python_III/day4_python_X/exercise_try_except.ipynb
# ## Illustrate usage of DAPPER to (interactively) run a synthetic ("twin") experiment.
# #### Imports
# <b>NB:</b> If you're on <mark><b>Google Colab</b></mark>,
# then replace `%matplotlib notebook` below by
# `!python -m pip install git+https://github.com/nansencenter/DAPPER.git` .
# Also note that liveplotting does not work on Colab.
# %matplotlib notebook
from mpl_tools import is_notebook_or_qt as nb
import dapper as dpr
import dapper.da_methods as da
# #### Load experiment setup: the hidden Markov model (HMM)
from dapper.mods.Lorenz63.sakov2012 import HMM
# #### Generate the same random numbers each time this script is run
seed = dpr.set_seed(3000)
# #### Simulate synthetic truth (xx) and noisy obs (yy)
HMM.tseq.T = 30 # shorten experiment
xx, yy = HMM.simulate()
# #### Specify a DA method configuration ("xp" is short for "experiment")
# xp = da.OptInterp()
# xp = da.Var3D()
# xp = da.ExtKF(infl=90)
xp = da.EnKF('Sqrt', N=10, infl=1.02, rot=True)
# xp = da.PartFilt(N=100, reg=2.4, NER=0.3)
# #### Assimilate yy, knowing the HMM; xx is used to assess the performance
xp.assimilate(HMM, xx, yy, liveplots=not nb)
# #### Average the time series of various statistics
xp.stats.average_in_time()
# #### Print some averages
print(xp.avrgs.tabulate(['rmse.a', 'rmv.a']))
# #### Replay liveplotters
xp.stats.replay(
# speed=.6
)
# #### Further diagnostic plots
if nb:
import dapper.tools.viz as viz
viz.plot_rank_histogram(xp.stats)
viz.plot_err_components(xp.stats)
viz.plot_hovmoller(xx)
# #### Explore objects
if nb:
print(xp)
if nb:
print(HMM)
if nb:
# print(xp.stats) # quite long printout
print(xp.avrgs)
# #### Exercise: Why are the replay plots not as smooth as the liveplot?
# *Hint*: provide the keyword `store_u=True` to `assimilate()` to avoid this.
# #### Exercise: Why does the replay only contain the blue lines?
# #### Exercise: Try out each of the above DA methods (currently commented out).
# Next, remove the call to `replay`, and set `liveplots=False` above.
# Now, use the iterative EnKS (`iEnKS`), and try to find a parameter combination
# for it so that you achieve a lower `rmse.a` than with the `PartFilt`.
#
# *Hint*: In general, there is no free lunch. Similarly, not all methods work
# for all problems; additionally, methods often have parameters that require
# tuning. Luckily, in DAPPER, you should be able to find suitably tuned
# configuration settings for various DA methods *in the files that define the
# HMM*. If you do not find a suggested configuration for a given method, you
# will have to tune it yourself. The example script `basic_2` shows how DAPPER
# facilitates the tuning process, and `basic_3` takes this further.
# #### Exercise: Run an experiment for each of these models
# - LotkaVolterra
# - Lorenz96
# - LA
# - QG
# #### Exercise: Printing other diagnostics.
# - Create a new code cell, and copy-paste the above `print(...tabulate)`
#   command into it. Then, replace `rmse` by `err.rms`. This should yield
#   the same printout, since the former is merely an abbreviation of the latter.
# - Next, figure out how to print the time average *forecast (i.e. prior)* error
# (and `rmv`) instead. Explain (in broad terms) why the values are larger than
# for the *analysis* values.
# - Finally, instead of the `rms` spatial/field averages,
# print the regular mean (`.m`) averages. Explain why `err.m` is nearly zero,
# in contrast to `err.rms`.
# examples/basic_1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to NumPy
#
# The most fundamental third-party package for scientific computing in Python is NumPy, which provides multidimensional **array** data types, along with associated functions and methods to manipulate them. Other third-party packages, including Pandas, use NumPy arrays as backends for more specialized data structures.
#
#
# While Python comes with several container types (`list`,`tuple`,`dict`), NumPy's arrays are implemented closer to the hardware, and are therefore more **efficient** than the built-in types. This is particularly true for large data, for which NumPy scales much better than Python's built-in data structures.
#
#
# NumPy arrays also retain a suite of associated functions and methods that allow for efficient *array-oriented* computing.
#
# ## Basics of Numpy arrays
# We now turn our attention to the Numpy library, which forms the base layer for the entire Python *scientific stack*. Once you have installed numpy, you can import it as
import numpy
# though we will employ the conventional shorthand
import numpy as np
# As mentioned above, the main object provided by numpy is a powerful array. We'll start by exploring how the numpy array differs from Python lists, by creating a simple list and an array with the same contents:
# +
lst = list(range(1000))
arr = np.arange(1000)
# Here's what the array looks like
arr[:10]
# -
type(arr)
# %timeit [i**2 for i in lst]
# %timeit arr**2
# Elements of a one-dimensional array are indexed with square brackets, as with lists:
arr[5:10]
arr[-1]
# The first difference to note between lists and arrays is that arrays are **homogeneous**; i.e. all elements of an array must be of the same type. In contrast, lists can contain elements of arbitrary type. For example, we can change the first element in our list above to be a string:
lst[0] = 'a string inside a list'
lst[:10]
# but the same can not be done with an array, as we get an error message:
arr[0] = 'a string inside an array'
# The information about the type of an array is contained in its *dtype* attribute:
arr.dtype
# Once an array has been created, its dtype is fixed and it can only store elements of the same type. For this example where the dtype is integer, if we store a floating point number it will be automatically converted into an integer:
arr[0] = 1.234
arr[:10]
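If a different dtype is actually wanted, `astype` creates a new, converted array instead of coercing values on assignment; a quick sketch:

```python
import numpy as np

arr = np.arange(5)          # integer dtype
floats = arr.astype(float)  # new float array; the original is untouched

print(floats.dtype, floats)
```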
# Above we created an array from an existing list; now let us see other ways in which arrays can be created. A common need is to have an array initialized with a constant value, and very often this value is 0 or 1 (suitable as starting value for additive and multiplicative loops respectively); `zeros` creates arrays of all zeros, with any desired dtype:
np.zeros(5, float)
np.zeros(3, int)
np.zeros(3, complex)
# and similarly for `ones`:
print('5 ones: {0}'.format(np.ones(5)))
# If we want an array initialized with an arbitrary value, we can create an empty array and then use the fill method to put the value we want into the array:
a = np.empty(4)
a
a.fill(5.5)
a
# We have seen how the `arange` function generates an array for a range of integers. Relatedly, the `linspace` and `logspace` functions create linearly and logarithmically spaced **grids** respectively, with a fixed number of points and including both ends of the specified interval:
np.linspace(0, 1, num=5)
np.linspace(0, 1, endpoint=False, num=5)
np.logspace(1, 4, num=4)
# Finally, it is often useful to create arrays with random numbers that follow a specific **distribution**. The `np.random` module contains a number of functions that can be used to this effect, for example this will produce an array of 5 random samples taken from a **standard normal** distribution (0 mean and variance 1):
#
# $$f(x \mid \mu=0, \sigma=1) = \sqrt{\frac{1}{2\pi \sigma^2}} \exp\left\{ -\frac{x^2}{2\sigma^2} \right\}$$
np.random.randn(5)
# whereas the following will give 10 samples, drawn from a normal distribution with a mean of 9 and a standard deviation of 3:
norm10 = np.random.normal(loc=9, scale=3, size=10)
# You can access the documentation for the `random` number generators, or any NumPy function, using the `help` function.
help(np.random.exponential)
# More generally, you can search for NumPy help on a variety of topics, using the `lookfor` function.
np.lookfor('distribution')
# ## Exercise: Random numbers
#
# Generate a NumPy array of 1000 random numbers sampled from a Poisson distribution, with parameter `lam=5`. What is the modal value in the sample?
# +
# Write your answer here
# -
# ## Indexing with other arrays
# Above we saw how to index arrays with single numbers and slices, just like Python lists. But arrays allow for a more sophisticated kind of indexing which is very powerful: you can index an array with another array, and in particular with an array of boolean (`bool`) values. This is particularly useful to extract information from an array that matches a certain condition.
#
# Consider for example that in the array `norm10` we want to replace all values above 9 with the value 0. We can do so by first finding the *mask* that indicates where this condition is `True` or `False`:
norm10
mask = norm10 > 9
mask
# Now that we have this mask, we can use it to return those values
norm10[mask]
# or to change their values
norm10[mask] = 0
norm10
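# Beyond a single comparison, masks can be combined element-wise; here is a small additional sketch (the `samples` array below is made up for illustration):

```python
import numpy as np

samples = np.array([4.2, 9.7, 8.1, 12.3, 6.5])
# Combine conditions element-wise with & and |, not with `and`/`or`;
# each comparison must be parenthesized because & binds tighter than >.
mask = (samples > 5) & (samples < 10)
print(samples[mask])           # values strictly between 5 and 10
print(np.count_nonzero(mask))  # how many values matched
```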
# ## Multidimensional Arrays
# NumPy can create arrays of arbitrary dimensions, and all the methods illustrated in the previous section work with more than one dimension. For example, a list of lists can be used to initialize a two-dimensional array:
samples_list = [[632, 1638, 569, 115], [433,1130,754,555]]
samples_array = np.array(samples_list)
samples_array.shape
# With two-dimensional arrays we start seeing the convenience of NumPy data structures: while a nested list can be indexed across dimensions using consecutive `[ ]` operators, multidimensional arrays support a more natural indexing syntax with a single set of brackets and a set of comma-separated indices:
samples_list[0][1]
samples_array[0,1]
# Most of the array creation functions listed above can be passed multidimensional shapes. For example:
np.zeros((2,3))
np.random.normal(10, 3, size=(2, 4))
# In fact, an array can be **reshaped** at any time, as long as the total number of elements is unchanged. For example, if we want a 2x4 array with numbers increasing from 0, the easiest way to create it is via the array's `reshape` method.
arr = np.arange(8).reshape(2,4)
arr
# With multidimensional arrays, you can also use slices, and you can mix and match slices and single indices in the different dimensions (using the same array as above):
arr[1, 2:4]
arr[:, 2]
# If you only provide one index, then you will get the corresponding row.
arr[1]
# Now that we have seen how to create arrays with more than one dimension, it's a good idea to look at some of the most useful **properties and methods** that arrays have. The following provide basic information about the size, shape and data in the array:
print('Data type :', samples_array.dtype)
print('Total number of elements :', samples_array.size)
print('Number of dimensions :', samples_array.ndim)
print('Shape (dimensionality) :', samples_array.shape)
print('Memory used (in bytes) :', samples_array.nbytes)
# Arrays also have many useful methods, some especially useful ones are:
print('Minimum and maximum :', samples_array.min(), samples_array.max())
print('Sum, mean and standard deviation:', samples_array.sum(), samples_array.mean(), samples_array.std())
# For these methods, the above operations are all computed over all the elements of the array. But for a multidimensional array, it's possible to do the computation along a single dimension by passing the `axis` parameter; for example:
samples_array.sum(axis=0)
samples_array.sum(axis=1)
# As you can see in this example, the value of the `axis` parameter is the dimension which will be *consumed* once the operation has been carried out. This is why to sum along the rows we use `axis=0`.
#
# This can be easily illustrated with an example that has more dimensions; we create an array with 4 dimensions and shape `(3,4,5,6)` and sum along the axis index 2. That consumes the dimension whose length was 5, leaving us with a new array that has shape `(3,4,6)`:
np.zeros((3,4,5,6)).sum(2).shape
# Another widely used property of arrays is the `.T` attribute, which allows you to access the transpose of the array:
samples_array.T
# Which is the equivalent of calling NumPy's `transpose` function:
np.transpose(samples_array)
# There is a wide variety of methods and properties of arrays.
[attr for attr in dir(samples_array) if not attr.startswith('__')]
# ### Exercises: Matrix Creation
#
# Generate the following structure as a numpy array, without typing the values by hand. Then, create another array containing just the 2nd and 4th rows.
#
# [[1, 6, 11],
# [2, 7, 12],
# [3, 8, 13],
# [4, 9, 14],
# [5, 10, 15]]
# +
# Write your answer here
# -
# Create a **tridiagonal** matrix with 5 rows and columns, with 1's on the diagonal and 2's on the off-diagonal.
# +
# Write your answer here
# -
# ## Array Operations
# Arrays support all regular arithmetic operators, and NumPy also contains a complete collection of basic mathematical functions that operate on arrays. It is important to remember that in general, all operations with arrays are applied **element-wise**, that is, applied to each element of the array.
#
# Consider for example:
# +
sample1, sample2 = np.array([632, 1638, 569, 115]), np.array([433,1130,754,555])
sample_sum = sample1 + sample2
print('{0} + {1} = {2}'.format(sample1, sample2, sample_sum))
# -
# This includes the multiplication operator -- it does *not* perform matrix multiplication, as is the case in Matlab, for example:
print('{0} X {1} = {2}'.format(sample1, sample2, sample1*sample2))
# While this implies that the dimensions of the arrays in each operation must match in size, numpy will **broadcast** dimensions when possible. For example, suppose that you want to add the number 1.5 to each element of `sample1`. One approach is to use the `ones` function to match the dimensions of the array.
sample1 + 1.5*np.ones(4)
# But thanks to numpy's broadcasting rules, the following is equally valid:
sample1 + 1.5
# In this case, numpy looked at both operands and saw that the first was a one-dimensional array of length 4 and the second was a scalar, considered a zero-dimensional object. The broadcasting rules allow numpy to:
#
# * *create* new dimensions of length 1
# * *stretch* a dimension of length 1 to match the size of the corresponding dimension in the other operand
#
# So in the above example, the scalar 1.5 is effectively cast to a 1-dimensional array of length 1, then stretched to length 4 to match the dimension of `sample1`. After this, element-wise addition can proceed as now both operands are one-dimensional arrays of length 4.
#
# This broadcasting behavior is powerful, especially because when NumPy broadcasts to create new dimensions or to stretch existing ones, it doesn't actually replicate the data. In the example above the operation is carried *as if* the 1.5 was a 1-d array with 1.5 in all of its entries, but no actual array was ever created. This saves memory and improves the **performance** of operations.
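# This no-copy behavior can be seen directly with NumPy's `broadcast_to`, which returns a read-only view whose stride is 0 along the broadcast dimension (a short illustrative aside, not from the original text):

```python
import numpy as np

# Broadcast a scalar to a length-4 view; no data is copied.
view = np.broadcast_to(np.array(1.5), (4,))
print(view)          # [1.5 1.5 1.5 1.5]
print(view.strides)  # (0,) -- a zero stride means all entries share one memory cell
```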
#
# When broadcasting, NumPy compares the sizes of each dimension in each operand. It starts with the trailing dimensions and works its way backward, creating dimensions as needed to accommodate the operation. Two dimensions are considered compatible for an operation when:
#
# * they are equal in size
# * one is scalar (or size 1)
#
# If these conditions are not met, an exception is thrown, indicating that the arrays have incompatible shapes.
sample1 + np.array([7,8])
# Let's create a 1-dimensional array and add it to a 2-dimensional array, to illustrate broadcasting:
# +
b = np.array([10, 20, 30, 40])
bcast_sum = sample1 + b
print('{0}\n\n+ {1}\n{2}\n{3}'.format(sample1, b, '-'*21, bcast_sum))
# -
# What if we wanted to add `[-100, 100]` to the rows of `sample1`? Direct addition will not work:
c = np.array([-100, 100])
sample1 + c
# Remember that matching begins at the **trailing** dimensions. Here, `c` would need a trailing dimension of 1 for the broadcasting to work. We can add dimensions to an array on the fly by indexing it with the `np.newaxis` object, which inserts an "empty" dimension:
cplus = c[:, np.newaxis]
cplus
# This is exactly what we need, and indeed it works:
sample1 + cplus
# For the full broadcasting rules, please see the official Numpy docs, which describe them in detail and with more complex examples.
# ### Exercises: Array manipulation
#
# Divide each column of the array:
#
# np.array([[ 0, 1, 2, 3, 4],
# [ 5, 6, 7, 8, 9],
# [10, 11, 12, 13, 14],
# [15, 16, 17, 18, 19],
# [20, 21, 22, 23, 24]])
#
# elementwise with the array `np.array([1., 5, 10, 15, 20])`.
# +
# Write your answer here
# -
# Generate a 10 x 3 array of random numbers (in range [0,1]). For each row, pick the number closest to 0.5.
#
# *Hints*:
#
# * Use `abs` and `argsort` to find, for each row, the column index `j` closest to 0.5.
# * Use "fancy" indexing to extract the numbers.
#
# +
# Write your answer here
# -
# ## Linear Algebra
# Numpy includes a linear algebra submodule, along with a suite of `array` methods for performing linear algebra. For example, the `dot` method performs an inner (dot) product on vectors and matrices:
# +
v1 = np.array([2, 3, 4])
v2 = np.array([1, 0, 1])
v1.dot(v2)
# -
# Equivalently, we can use the `dot` function:
np.dot(v1, v2)
# When performing regular matrix-vector multiplication, note that NumPy makes no distinction between row and column vectors *per se*; it simply verifies that the dimensions match the rules of matrix multiplication. In this case we have a $2 \times 3$ matrix multiplied by a 3-vector, which produces a 2-vector:
# +
A = np.arange(6).reshape(2, 3)
A.dot(v1)
# -
# For matrix-matrix multiplication, the same dimension-matching rules must be satisfied, e.g. consider the difference between $A \times A^T$:
A.dot(A.T)
# and $A^T \times A$:
A.T.dot(A)
# Beyond inner products, the `numpy.linalg` module includes functions for calculating determinants, matrix norms, Cholesky decomposition, eigenvalue and singular value decompositions, and more.
#
# Additional linear algebra tools are available in SciPy's linear algebra library, `scipy.linalg`. It includes the majority of the tools in the classic LAPACK libraries as well as functions to operate on sparse matrices.
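# As a brief illustrative sketch of the `numpy.linalg` functions mentioned above (the matrix `M` here is made up for the example):

```python
import numpy as np

M = np.array([[4., 2.],
              [1., 3.]])
print(np.linalg.det(M))       # determinant: 4*3 - 2*1 = 10
eigenvalues, eigenvectors = np.linalg.eig(M)
print(sorted(eigenvalues))    # eigenvalues of M: 2.0 and 5.0
print(np.linalg.norm(M))      # Frobenius norm of M
```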
# ## Reading and writing data
# NumPy lets you save and retrieve data structures to and from files on local or remote storage, in either **text** or **binary** format. Which format is appropriate depends on the tradeoff you are willing to make:
#
# * **Text mode**: occupies more space, precision can be lost (if not all digits are written to disk), but is readable and editable by hand with a text editor. Storage is limited to one- and two-dimensional arrays.
#
# * **Binary mode**: compact and exact representation of the data in memory, can't be read or edited by hand. Arrays of any size and dimensionality can be saved and read without loss of information.
#
# First, let's see how to read and write arrays in text mode. The `np.savetxt` function saves an array to a text file, with options to control the precision, separators and even adding a header:
arr = np.arange(10).reshape(2, 5)
np.savetxt('test.out', arr, fmt='%.2e', header="My dataset")
# !cat test.out
# And this same type of file can then be read with the matching `np.loadtxt` function:
arr2 = np.loadtxt('test.out')
arr2
# For binary data, we use either `np.save` or `np.savez`. The first saves a single array to a file with `.npy` extension, while the latter can be used to save a *group* of arrays into a single file with `.npz` extension. The files created with these routines can then be read with the `np.load` function.
#
# Let us first see how to use the simpler `np.save` function to save a single array:
np.save('test.npy', arr2)
# This can be read back:
arr2n = np.load('test.npy')
# And we can confirm that they are equal, since the difference contains no nonzero entries:
np.any(arr2 - arr2n)
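# As an aside, NumPy also provides explicit comparison helpers that read more clearly than inspecting a difference (the arrays below are illustrative):

```python
import numpy as np

a = np.arange(10).reshape(2, 5).astype(float)
b = a.copy()
# Exact element-wise equality of two arrays:
print(np.array_equal(a, b))
# Equality up to floating-point tolerance:
print(np.allclose(a, b + 1e-12))
```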
# Now let us see how the `np.savez` function works.
#
# It expects both a filename and either a sequence of arrays or a set of key-value pairs. If arrays are passed, `savez` will automatically name the saved arrays in the archive `arr_0`, `arr_1`, ...
np.savez('test.npz', arr, arr2)
arrays = np.load('test.npz')
arrays.files
# Alternatively, if we explicitly name the arrays using keyword arguments:
np.savez('test.npz', foo=arr, bar=arr2)
arrays = np.load('test.npz')
arrays.files
# The object returned by `np.load` from an `.npz` file works like a dictionary, though you can also access its constituent files by attribute using its special `.f` field; this is best illustrated with an example with the `arrays` object from above:
# First row of array
arrays['bar'][0]
# Equivalently:
arrays.f.bar[0]
# This `.npz` format is a very convenient way to package a group of related arrays that pertain to a specific problem compactly, and without loss of information, into a single file. At some point, however, the complexity of your dataset may be such that the optimal approach is to use one of the standard formats in scientific data processing that have been designed to handle complex datasets, such as NetCDF or HDF5.
# ## Guided Exercise: Structured Arrays
#
# Import the `microbiome.csv` dataset in the `data/microbiome` directory using NumPy's `loadtxt` function. This will take some experimentation; use the built-in help to get hints!
# +
# Write answer here
| notebooks/Introduction to NumPy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PhySyncEnv
# language: python
# name: physyncenv
# ---
# ### references taken from https://docs.scipy.org/doc/scipy/reference/tutorial/fftpack.html
import numpy as np
from scipy.fftpack import fft
# Number of sample points
N = 600
# sample spacing
T = 1.0 / 800.0
x = np.linspace(0.0, N*T, N)
y = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(80.0 * 2.0*np.pi*x)
yf = fft(y)
xf = np.linspace(0.0, 1.0/(2.0*T), N//2)
import matplotlib.pyplot as plt
plt.plot(xf, 2.0/N * np.abs(yf[0:N//2]))
plt.grid()
plt.show()
| Software/Examples/Notebooks/.ipynb_checkpoints/SciPy_Demo-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="static/pybofractal.png" alt="Pybonacci" style="width: 200px;"/>
# <img src="static/cacheme_logo.png" alt="CAChemE" style="width: 300px;"/>
# # Introducción a Jupyter e IPython
# *In this class we will give a quick introduction to the Python language and the IPython interpreter, as well as its Notebook. We will see how to run a script and what the basic types and data structures of this language are. You have surely heard a lot about the virtues of Python compared to other languages. If not, take a look at [this](http://nbviewer.ipython.org/github/AeroPython/Python_HA/blob/master/razones_python.ipynb).
# Ready? Let's get started!*
# ## What is Python?
# * A dynamic, interpreted and easy-to-learn programming language
# * Created by <NAME> in 1991
# * Widely used in science and engineering
# * A multitude of libraries for performing different tasks.
# ### The Zen of Python
import this
# ### What does a Python program look like and how do I run it?
# Let's look at `mi_primer_script.py`, which is in the `static` folder. __Don't worry about the code for now,__ there will be time for that later...
# !cat static/mi_primer_script.py
# <div class="alert alert-info"><strong>IPython tip</strong>:
# `cat` is a command-line command, not a Python one. By prefixing terminal commands such as `cd, ls, cat ...` with `!`, you can run them from here.
# </div>
# %run static/mi_primer_script.py
# <div class="alert alert-info"><strong>IPython tip</strong>:
# `%run` is a notebook _magic command_ that lets you run a file.
#
# If you wanted to do it from a command line, you could run:
#
# `$ python3 static/mi_primer_script.py`
# </div>
#
# The simplest method is to use an editor (your favorite one) and run the script from the command line. But there are also __IDE__s (_integrated development environments_) designed to make writing code easier and to keep other tools within reach, such as _profilers_, _debuggers_, a _variable explorer_... Among the most suitable for scientific programming are [IEP](http://www.iep-project.org/) and [Spyder](http://code.google.com/p/spyderlib/) (installed with Anaconda).
# <img src="static/spyder.png" alt="Spyder" style="width: 800px;"/>
# ## What is IPython?
# [IPython](http://ipython.org/) is just a Python [interpreter](http://es.wikipedia.org/wiki/Int%C3%A9rprete_(inform%C3%A1tica) with some substantial improvements; moreover, its notebook interface is more comfortable to use than the command line and gives us a bit more flexibility.
# ### The Jupyter Notebook
# __It will be our working tool during this course__. What you are reading right now is nothing but an IPython notebook which, as we will see later, can contain text and images in addition to code. But first, let's see how it works.
#
# __When starting the Jupyter notebook, on the main screen we can see a path and a list of notebooks__. Each notebook is a file stored on the computer at the path shown. If there are no notebooks in that folder, we will see a message indicating that the notebook list is empty.
#
# When we create a notebook or open an existing one, the Jupyter interface proper opens and we can start working. It is similar to an interpreter, but it is divided into **cells**. Cells can contain code, text, images...
#
# Each code cell is marked by the word `In [<n>]` and cells are **numbered**. We just have to write the code in the cell and click Cell -> Run at the top, the triangle ("Run cell"), or use the shortcut `shift + Enter`. The result of the cell is shown in the `Out [<n>]` field, also numbered and matching the cell we have just run. This is important, as we will see later.
#
# If you select Markdown in the top bar (or use the shortcut `Shift-M`) instead of Code, you can write text:
from IPython.display import Image
Image(url="static/markdown_cell.gif")
# Source: Practical Numerical Methods with Python
# http://openedx.seas.gwu.edu/courses/GW/MAE6286/2014_fall/about
# You can also write LaTeX equations and much more. This is a very powerful tool for explaining to someone else, or to yourself, what your code does, and for writing a report, an assignment, a blog post...
#
# Markdown is a separate language; don't worry too much about it for now, you will pick it up as you go... For when you need it, here is a [cheat sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet).
Image(url="static/markdown_math.gif")
# Source: Practical Numerical Methods with Python
# http://openedx.seas.gwu.edu/courses/GW/MAE6286/2014_fall/about
# You can move cells around like this:
Image(url="static/cell_move.gif")
# Source: Practical Numerical Methods with Python
# http://openedx.seas.gwu.edu/courses/GW/MAE6286/2014_fall/about
# The Notebook also has numerous shortcuts that you will learn along the way; you can look them up in `Help > Keyboard Shortcuts`
# ## Introduction to Python syntax
# ### Numeric types
# Python provides the usual numeric types and the most common operations:
2 * 4 - (7 - 1) / 3 + 1.0
# Division by zero raises an error:
1 / 0
1.0 / 0.0
# <div class="alert alert-info">Later we will see how to handle these errors. On the other hand, when we use NumPy this operation will return `inf` (or `nan` for `0/0`) with a warning instead of raising an error.</div>
# Integer division in Python 3 returns a float, unlike in Python 2.
3 / 2
# Floor division can be forced with the `//` operator:
3 // 2
# A number can be raised to a power with the `**` operator:
2 ** 16
# Another type that will be very useful to us are the complex numbers:
2 + 3j
1j
# Absolute value
abs(2 + 3j)
# <div class="alert alert-info"><strong>IPython tip</strong>: we can retrieve past results using `_<n>`. For example, to retrieve the result corresponding to `Out [7]`, we would use `_7`. This variable keeps that value for the whole session.</div>
abs(_13)
# We can __convert variables__ to `int, float, complex, str`...
int(18.6)
round(18.6)
float(1)
complex(2)
str(256568)
# We can __check the type of a variable__:
a = 2.
type(a)
isinstance(a, float)
# Other useful functions are:
print('hello world')
max(1,5,8,7)
min(-1,1,0)
# __You have just used functions!__ As you can see, this works in a fairly standard way: the arguments are enclosed in parentheses and separated by commas. It is done this way in other programming languages too, and needs no further explanation for now.
# <div class="alert alert-warning">The <strong><code>print</code> function</strong> is the one used to print results to the screen. In case you see it somewhere, in Python 2 it was a statement and worked differently, without parentheses and without the option of passing extra arguments.</div>
# ### Assignment and comparison operators
# Assignment is done with the `=` operator. Variable names in Python may contain alphanumeric characters (starting with a letter) a-z, A-Z, 0-9 and other symbols such as \_.
#
# As a matter of style, variable names usually start with a lowercase letter, reserving uppercase for classes.
#
# Some names cannot be used because they are reserved by Python:
#
#     and, as, assert, break, class, continue, def, del, elif, else, except, exec, finally, for, from, global, if, import, in, is, lambda, not, or, pass, print, raise, return, try, while, with, yield
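# The up-to-date list of reserved words can be queried from the standard library's `keyword` module, so there is no need to memorize it:

```python
import keyword

print(keyword.kwlist)                    # all reserved words of this Python version
print(keyword.iskeyword('lambda'))       # True: `lambda` is reserved
print(keyword.iskeyword('mi_variable'))  # False: free to use as a name
```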
a = 1 + 2j
# In Python, __assignment does not print the result on screen__, unlike what happens in MATLAB and Octave (unless a semicolon is added at the end). The best way to display the variable we have just assigned is this:
b = 3.14159
b
# In a cell __we can write code that spans several lines__. If the last one returns a result, it will be printed.
x, y = 1, 2
x, y
# <div class="alert alert-info">We can use **multiple assignment**, which we did in the previous cell with the variables `x` and `y`, to swap values in an intuitive way:</div>
x, y = y, x
x, y
# The comparison operators are:
#
# * `==` equal to
# * `!=` not equal to
# * `<` less than
# * `<=` less than or equal to
# * `>` greater than
# * `>=` greater than or equal to
#
# They return a boolean: `True` or `False`
x == y
print(x != y)
print(x < y)
print(x <= y)
print(x > y)
print(x >= y)
# and even:
x = 5.
6. < x < 8.
# If the ordering does not make sense, it raises an error:
1 + 1j < 0 + 1j
# For text strings an ordering does exist
'aaab' > 'ba'
# #### Booleans
True and False
not False
True or False
# A curiosity:
(True + True) * 10
# #### Other data types
# Another very important kind of data we will use are the sequences: tuples and lists. Both are ordered collections of elements: tuples are delimited with parentheses and lists with square brackets.
una_lista = [1, 2, 3.0, 4 + 0j, "5"]
una_tupla = (1, 2, 3.0, 4 + 0j, "5")
print(una_lista)
print(una_tupla)
print(una_lista == una_tupla)
# For tuples, we can even omit the parentheses:
tupla_sin_parentesis = 2,5,6,9,7
type(tupla_sin_parentesis)
# With both types we can:
#
# * Check whether an element is in the sequence with the `in` operator:
2 in una_lista
2 in una_tupla
# * Find out how many elements they contain with the `len` function:
len(una_lista)
# * *Index* the sequences, using the syntax `[<start>:<stop>:<step>]`:
print(una_lista[0]) # First element, 1
print(una_tupla[1]) # Second element, 2
print(una_lista[0:2]) # From the first to the third, excluding the latter: 1, 2
print(una_tupla[:3]) # From the first to the fourth, excluding the latter: 1, 2, 3.0
print(una_lista[-1]) # The last one: "5"
print(una_tupla[:]) # From the first to the last
print(una_lista[::2]) # From the first to the last, in steps of 2: 1, 3.0, "5"
# We will see more about indexing in NumPy, so don't worry for now. Just __remember one thing:__
# ##### In Python, indexing starts at ZERO!
# We can take this a bit further and build a __list of lists__:
mis_asignaturas = [
['Álgebra', 'Cálculo', 'Física'],
['Mecánica', 'Termodinámica'],
['Sólidos', 'Electrónica']
]
mis_asignaturas
# This will be of great help later for building arrays.
# ## Control structures (I): Conditionals
#     if <condition>:
#         <do something>
#     elif <condition>:
#         <do other thing>
#     else:
#         <do other thing>
# <div class="alert alert-error"><strong>Important:</strong> In Python, blocks are delimited by indentation, always using four spaces. When we put the colon at the end of the first line of the conditional, everything that follows with *one* extra level of indentation is considered to be inside the conditional. As soon as we write the first line with a lower indentation level, we have closed the conditional. If we do not follow this strictly, Python will raise errors; it is a way of forcing the code to be readable.</div>
print(x,y)
if x > y:
    print("x is greater than y")
    print("x is still greater than y")
if 1 < 0:
    print("1 is less than 0")
print("1 is still less than 0") # <-- Wrong!
if 1 < 0:
    print("1 is less than 0")
    print("1 is still less than 0")
# If we want to add extra branches to the conditional, we can use the `elif` statement (short for *else if*). For the final part, which must run if none of the previous conditions was met, we use the `else` statement:
print(x,y)
if x > y:
    print("x is greater than y")
else:
    print("x is less than y")
print(x, y)
if x < y:
    print("x is less than y")
elif x == y:
    print("x is equal to y")
else:
    print("x is neither less than nor equal to y")
# ## Control structures (II): Loops
# In Python there are two typical loop constructs:
#
# 1. `while` loops
# 2. `for` loops
# ### `while`
# `while` loops repeat the statements nested inside them as long as a condition holds:
#
#     while <condition>:
#         <things to do>
#
# As with conditionals, blocks are separated by indentation, with no need for `end`-style statements
ii = -2
while ii < 5:
    print(ii)
    ii += 1
# <div class="alert alert-info"><strong>Tip</strong>:
# `ii += 1` is equivalent to `ii = ii + 1`. In the second form, Python evaluates ii + 1, creating a new object with that value, and then assigns it to the variable ii; that is, there is a reassignment. In the first form, for mutable types the increment is performed on the object itself, which can lead to speed improvements.
#
# Other 'in-place' operators are: `-=`, `*=`, `/=`
# </div>
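# The in-place behaviour described above is easiest to see with a mutable type such as a list; for immutable types like `int`, `+=` still rebinds the name to a new object (a small sketch):

```python
numbers = [1, 2]
list_id = id(numbers)
numbers += [3]                 # in-place: the same list object is extended
print(id(numbers) == list_id)  # True

count = 1
int_id = id(count)
count += 1                     # ints are immutable: `count` is rebound to a new object
print(id(count) == int_id)     # False
```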
# The loop can be interrupted halfway with the `break` statement:
ii = 0
while ii < 5:
    print(ii)
    ii += 1
    if ii == 3:
        break
# An `else` block right after the loop is executed if the loop was not interrupted by us:
ii = 0
while ii < 5:
    print(ii)
    ii += 1
    if ii == 3:
        break
else:
    print("The loop has finished")
ii = 0
while ii < 5:
    print(ii)
    ii += 1
    #if ii == 3:
    #    break
else:
    print("The loop has finished")
# ### `for`
# The other loop in Python is the `for` loop, and it works in a way that may seem surprising at first. The idea is to iterate over a collection of elements:
#
#     for <element> in <iterable_object>:
#         <do whatever...>
for ii in (1,2,3,4,5):
    print(ii)
for nombre in "Juan", "Luis", "Carlos":
    print(nombre)
for ii in range(3):
    print(ii)
for jj in range(2, 5):
    print(jj)
# ## PEP 8
# __The style guide:__
#
# * Use 4-space indentation, not tabs [IPython or your editor take care of this].
# * Limit lines to 79 characters.
# * Use blank lines to separate functions and blocks of code inside them.
# * Put comments on separate lines when possible.
# * Use documentation strings (*docstrings*).
# * Put spaces around operators and after commas.
# * Use the lowercase_with_underscores convention for function and variable names.
# * Although Python 3 allows it, don't use special characters in identifiers.
#
# (Translated from http://docs.python.org/3/tutorial/controlflow.html#intermezzo-coding-style)
# Using the pep8 module
#
# https://pypi.python.org/pypi/pep8
#
# and the pep8magic extension
#
# https://gist.github.com/Juanlu001/9082229/
#
# we can check whether a code cell complies with the PEP 8 rules:
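# As a stdlib-only stand-in for the pep8 module, here is a minimal sketch of a single PEP 8 rule, the 79-character line limit (the real tool checks many more rules):

```python
def check_line_length(source, limit=79):
    """Return (line_number, length) for every line longer than `limit`."""
    return [(i, len(line))
            for i, line in enumerate(source.splitlines(), start=1)
            if len(line) > limit]

# The second line of this snippet is deliberately far longer than 79 characters.
code = "x = 1\n" + "y = " + "1 + " * 30 + "1\n"
print(check_line_length(code))  # [(2, 125)]
```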
# ---
# _We have seen how Python's syntax makes it easy to write readable code, and we have learned some good programming practices. Features such as dynamic typing (no need to declare variables) and being an interpreted language (no need to compile) mean that we spend less time writing code than in other kinds of languages._
#
# _We have introduced the variable types as well as the basic control structures. In the next class we will work through some exercises so that you become familiar with them._
#
# _We also hope that, little by little, you feel more and more comfortable with the IPython Notebook and can get the most out of it._
#
# __References__
#
# * Official Python tutorial, updated and translated into Spanish http://docs.python.org.ar/tutorial/
# * 5-minute IPython video http://youtu.be/C0D9KQdigGk
# * Introduction to programming with Python, Universitat Jaume I http://www.uji.es/bin/publ/edicions/ippython.pdf
# * PEP8 http://www.python.org/dev/peps/pep-0008/
# If you liked this class:
#
# <a href="https://twitter.com/share" class="twitter-share-button" data-url="https://github.com/CAChemE/curso-python-cientifico" data-text="Aprendiendo Python con" data-via="pybonacci" data-size="large" data-hashtags="AeroPython">Tweet</a>
# <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>
#
# ---
# ###### <img src="static/linkedin.png" alt="AeroPython" style="width: 25px" align="right";/> Course taught by: [<NAME>](http://es.linkedin.com/in/juanluiscanor)
# #### <h4 align="right">Follow us on Twitter!
# ###### <a href="https://twitter.com/Pybonacci" class="twitter-follow-button" data-show-count="false">Follow @Pybonacci</a> <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script> <a href="https://twitter.com/CAChemE_org" class="twitter-follow-button" data-show-count="false" align="right";>Follow @CAChemE_org</a> <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>
# ##### <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">Curso de Python para científicos e ingenieros</span> by <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName"><NAME></span> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es">Creative Commons Attribution 4.0 International License</a>.
# ##### <script src="//platform.linkedin.com/in.js" type="text/javascript"></script> <script type="IN/MemberProfile" data-id="http://es.linkedin.com/in/juanluiscanor" data-format="inline" data-related="false"></script> <script src="//platform.linkedin.com/in.js" type="text/javascript"></script> <script type="IN/MemberProfile" data-id="http://es.linkedin.com/in/alejandrosaezm" data-format="inline" data-related="false"></script>
# ---
# _The following cells contain Notebook configuration_
#
# _To display and use the Twitter links, the notebook must run as [trusted](http://ipython.org/ipython-doc/dev/notebook/security.html)_
#
#     File > Trusted Notebook
# + language="html"
# <a href="https://twitter.com/Pybonacci" class="twitter-follow-button" data-show-count="false">Follow @Pybonacci</a>
# <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>
# -
# This cell applies the notebook's style
from IPython.core.display import HTML
css_file = 'static/styles/style.css'
HTML(open(css_file, "r").read())
| 01_Intro-Python-IPython.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CLIENT SATISFACTION REVIEW ACCORDING TO FEEDBACKS
# +
import pandas as pd
import requests
import numpy as np
from io import StringIO
from sklearn.linear_model import SGDClassifier  # a linear classifier trained with stochastic gradient descent
# retrieving data from the source as csv -> pd.DataFrame
orig_url='https://drive.google.com/file/d/1KWE3J0uU_sFIJnZ74Id3FDBcejELI7FD/view'
file_id = orig_url.split('/')[-2]
dwn_url='https://drive.google.com/uc?export=download&id=' + file_id
url = requests.get(dwn_url).text
csv_raw = StringIO(url)
dfs = pd.read_csv(csv_raw)
df = pd.DataFrame(dfs)
# -
df.head(10) # preview of data
df.keys() # column names of the data
# defining masks (filters)
happy_mask = df['Y'] == 1
unhappy_mask = df['Y'] == 0
# Happy Clients' Evaluation
happy = df[:].where(happy_mask).dropna()
happy_meanlist = [np.sum(happy[col])/len(happy) for col in happy.keys()[1:]]
happy_meanlist
# According to the values,
# X1 is probably the leading reason for clients to be happy with their order: they found the delivery time excellent.
# The happy customers also overwhelmingly agree that the app made it easy to place an order. They are satisfied
# with the courier and think they paid a good price.
# In short, customers are happy with the delivery time, transportation, and the price they paid, yet even in this
# group the products themselves apparently fell short of expectations.
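# The same per-question averages can be computed more directly with pandas' built-in mean(). A minimal sketch on a made-up toy frame (the column layout mirrors the survey's Y/X1/X2 structure; the numbers are invented):

```python
import pandas as pd

# Toy survey frame: Y is the happiness label, X1/X2 are question scores
toy = pd.DataFrame({
    'Y':  [1, 1, 0, 0],
    'X1': [5, 4, 2, 1],
    'X2': [3, 4, 1, 2],
})
# Boolean-mask the rows, drop the label, and average each question column
happy_means = toy[toy['Y'] == 1].drop(columns='Y').mean()
unhappy_means = toy[toy['Y'] == 0].drop(columns='Y').mean()
```

This also sidesteps the `len(...)` denominator mix-ups that manual sum/len loops invite.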
# Unhappy Clients' Evaluation
unhappy = df[:].where(unhappy_mask).dropna()
unhappy_meanlist = [np.sum(unhappy[col])/len(unhappy) for col in unhappy.keys()[1:]]
unhappy_meanlist
# According to the values,
# X2 is the strongest driver of unhappiness, followed by X3 and X4.
# In other words, clients are unhappy mainly because the order they received didn't match what they expected;
# delivery time matters less to them than the contents of the order. Customers also often found that they
# couldn't order everything they wanted in the first place, which suggests the company is weak at presenting
# its product range to the customer.
# Some customers are also unhappy because they weren't satisfied with the courier, so transportation could be improved.
# Even the unhappy customers liked the app and found ordering easy. They are also satisfied with the price,
# which they consider fair, and they found the delivery time satisfactory.
# +
# Machine learning code that predicts the happy/unhappy state according to the answers given to the questions
def isHappy(customer_data,test):
x = customer_data.drop(columns='Y')
y = customer_data['Y']
model = SGDClassifier(loss='modified_huber',shuffle=True,random_state=101)
model.fit(x, y)
prediction = model.predict([test])
if prediction[0] == 0:
print("Customer is predicted to be unhappy.")
else:
print("Customer is predicted to be happy.")
isHappy(dfs,[3,2,2,3,2,3])
isHappy(dfs,[5,4,4,5,4,5])
# -
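# The classifier inside isHappy can be sanity-checked on synthetic data with a held-out split. This is only a sketch: the six 1-5 answers per customer and the happiness rule below are invented, not the real survey.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the survey: six answers in 1..5 per customer,
# with an invented rule that a customer is happy if their mean answer > 3
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(200, 6))
y = (X.mean(axis=1) > 3).astype(int)

# Hold out a test split so accuracy is measured on unseen rows
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SGDClassifier(loss='modified_huber', shuffle=True, random_state=101)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Unlike fitting and predicting on the same rows (as isHappy does), a held-out split gives a less optimistic estimate of how the model generalizes.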
# # CONCLUSION
# It can be inferred from the evaluations that both happy and unhappy customers are satisfied with the transportation and consider the price they paid fair. However, in both groups the contents of the order fell short of what customers expected, so product presentation and order fulfilment are the clearest areas for improvement.
| Happy Customers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] _cell_guid="bb6ecf5e-1a82-40b5-b64e-37064a5fbf53" _uuid="99143fb5-cb13-454a-b033-7419fd689997"
# Original Image
#
# 
# + _cell_guid="d2ef37dc-bc27-4c5c-a746-7030047e4cbd" _uuid="8bfa5518-3add-4246-8317-0abbc985c166"
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import Sequential
from tensorflow.keras.models import Model
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Lambda
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Input, Concatenate, UpSampling2D
import datetime
from PIL import Image
import statistics
import pytesseract
# + _cell_guid="2cfdbcdc-8e01-4a71-9c5e-cf7e30884a9b" _uuid="159c89e7-acb7-4aba-82ff-5eefc6075645"
image_height=1024
image_width=1024
# + _cell_guid="cdb213a1-f7ff-4997-a414-c96efd41cec4" _uuid="51c507b0-2a62-4617-9f55-2797c1af09cd"
def normalize(input_image):
input_image = tf.cast(input_image, tf.float32) / 255.0
#input_mask -= 1
return input_image
# + _cell_guid="0c64dcca-d4ff-4bde-ac6d-407ec2db8156" _uuid="bfdd0009-e1eb-4b3e-bca1-5d7de8ac5157"
def decode_image(image):
img=tf.io.decode_jpeg(image)
img=tf.image.resize(img, [image_height, image_width])
return img
# + _cell_guid="38f35d01-fb9a-41fb-9de5-459e9572b5db" _uuid="5065ba16-60de-4ba3-b44c-fc5fa7167375"
def decode_mask(image):
img=tf.io.decode_jpeg(image,channels=1)
img=tf.image.resize(img, [image_height, image_width])
return img
# + _cell_guid="f5a29db6-c74e-4722-9614-3f469ac0baa2" _uuid="1c964a57-5e1d-4093-aa69-88ae5bdcf9ab"
def process_1(file_paths):
img = normalize(decode_image(tf.io.read_file(file_paths)))
return img
# + _cell_guid="ca37c4f6-a6ab-4d02-9e4f-e056dd630f2b" _uuid="82c1d3d3-2054-41e3-908a-6627d654cebf"
def process_2(file_paths):
img = normalize(decode_image(tf.io.read_file(file_paths)))
mask_path=tf.strings.regex_replace(file_paths,'.jpg','.jpeg')
tab_mask=tf.strings.regex_replace(mask_path,"Image_Data", "Table_Data")
col_mask=tf.strings.regex_replace(mask_path,"Image_Data", "Column_Data")
table_mask = normalize(decode_mask(tf.io.read_file(tab_mask)))
column_mask=normalize(decode_mask(tf.io.read_file(col_mask)))
return img, {'table_mask':table_mask,'column_mask':column_mask}
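# The three regex_replace calls in process_2 amount to a simple path rewrite from an image file to its two mask files. A plain-Python sketch of the same mapping (the example path below is illustrative):

```python
def mask_paths(image_path: str):
    # Swap the extension, then redirect the directory to the two mask folders,
    # mirroring the tf.strings.regex_replace chain in process_2
    mask_path = image_path.replace('.jpg', '.jpeg')
    tab_mask = mask_path.replace('Image_Data', 'Table_Data')
    col_mask = mask_path.replace('Image_Data', 'Column_Data')
    return tab_mask, col_mask

tab, col = mask_paths('../input/Data/Image_Data/doc1.jpg')
```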
# + _cell_guid="e93b42d8-ac15-4679-972e-1ad0611e9389" _uuid="21d07b81-45d5-49c5-9d00-5d4abe6e5a6d"
def create_mask(pred_mask1, pred_mask2):
pred_mask1 = tf.argmax(pred_mask1, axis=-1)
pred_mask1 = pred_mask1[..., tf.newaxis]
pred_mask2 = tf.argmax(pred_mask2, axis=-1)
pred_mask2 = pred_mask2[..., tf.newaxis]
return pred_mask1[0], pred_mask2[0]
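# Per pixel, create_mask keeps the most likely class along the last (channel) axis and re-adds a trailing channel dimension. The same operation in NumPy on a tiny invented prediction tensor:

```python
import numpy as np

# A 2x2 "image" with 2 class scores per pixel (values invented)
pred = np.array([[[0.9, 0.1], [0.2, 0.8]],
                 [[0.4, 0.6], [0.7, 0.3]]])
# argmax over the channel axis picks the winning class per pixel;
# the new axis restores the (H, W, 1) mask shape
mask = np.argmax(pred, axis=-1)[..., np.newaxis]
```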
# + _cell_guid="6c05edfb-c5e8-4505-99d5-e29aac033c18" _uuid="6529ff57-329b-4e2e-8151-ba3d2edb164d"
def show_prediction_sample_image(dataset=None, num=1):
model = tf.keras.models.load_model('../input/model50/all/mymodel_45')
for image in dataset.take(num):
pred_mask1, pred_mask2 = model.predict(image, verbose=1)
table_mask, column_mask = create_mask(pred_mask1, pred_mask2)
im=tf.keras.preprocessing.image.array_to_img(image[0])
im.save('image.bmp')
im=tf.keras.preprocessing.image.array_to_img(table_mask)
im.save('table_mask.bmp')
im=tf.keras.preprocessing.image.array_to_img(column_mask)
im.save('column_mask.bmp')
return True
# + _cell_guid="9aeba1c1-d33c-4acf-a555-c2b17b99e600" _uuid="90847337-e292-4c27-a0d9-9fbfc0a567cd"
def generate_segment():
img_org = Image.open('./image.bmp')
img_mask = Image.open('./table_mask.bmp')
img_mask = img_mask.convert('L')
img_org.putalpha(img_mask)
img_org.save('output.png')
# + _cell_guid="bcc7ff32-9a08-4f44-99e4-54dad26e4cb7" _uuid="cdb1aaf2-16e2-47d3-82f8-48442a2efbea"
def ocr_core(filename):
    # Open the image with Pillow and let pytesseract detect the text it contains
    text = pytesseract.image_to_string(Image.open(filename))
    return text
# + _cell_guid="75205d3c-40ea-416c-a244-1c799292d8d2" _uuid="33f4e332-524d-4132-8b00-91adee8a3d56"
def get_mask(dataset=None, num=1):
table=[]
column=[]
for i in dataset:
table.append(i[1]['table_mask'])
column.append(i[1]['column_mask'])
model = tf.keras.models.load_model('../input/model50/all/mymodel_45')
pred_tab=[]
pred_col=[]
for image, (mask1, mask2) in dataset.take(num):
pred_mask1, pred_mask2 = model.predict(image, verbose=1)
table_mask, column_mask = create_mask(pred_mask1, pred_mask2)
pred_tab.append(table_mask)
pred_col.append(column_mask)
return table,column,pred_tab,pred_col
# + _cell_guid="6eecee22-bd72-47b8-b0b6-3e8deda72286" _uuid="7e269ba8-f0c3-46e5-84e8-5df8f1dbf831"
def get_accuracy(orig_table,orig_column,pred_table,pred_column):
mask_1=[]
mask_2=[]
for i in pred_table:
t2=tf.reshape(i, [1,1024, 1024])
mask_1.append(t2)
for i in pred_column:
t2=tf.reshape(i, [1,1024, 1024])
mask_2.append(t2)
m = tf.keras.metrics.Accuracy()
m.update_state(orig_table,mask_1)
table_accuracy=m.result().numpy()
m=tf.keras.metrics.Accuracy()
m.update_state(orig_column,mask_2)
column_accuracy=m.result().numpy()
mean_accuracy=(table_accuracy + column_accuracy)/2
return mean_accuracy
# -
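# The quantity get_accuracy reports reduces to plain pixel agreement between a predicted mask and its reference. A NumPy sketch of the same computation on a tiny invented pair of masks:

```python
import numpy as np

def pixel_accuracy(y_true, y_pred):
    # Fraction of pixels where the two masks agree,
    # the same quantity tf.keras.metrics.Accuracy reports here
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float((y_true == y_pred).mean())

t = np.array([[0, 1], [1, 1]])
p = np.array([[0, 1], [0, 1]])
acc = pixel_accuracy(t, p)  # 3 of the 4 pixels agree
```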
# + _cell_guid="2204e4da-2b84-4d4b-b7ae-69e7aa1fd9ae" _uuid="52dd1d64-3825-46d7-a2fb-e73f3cd1c745"
def final_1(path):
list_ds = tf.data.Dataset.list_files(path)
DATASET_SIZE = len(list(list_ds))
test_size = DATASET_SIZE
test = list_ds.take(test_size)
BATCH_SIZE = 1
BUFFER_SIZE = 1000
test = test.map(process_1)
test_dataset = test.batch(BATCH_SIZE)
flag=show_prediction_sample_image(test_dataset)
generate_segment()
text=ocr_core('output.png')
return text
# + _cell_guid="5b1f161b-8483-4e8d-8e47-90514643041e" _uuid="fecda8f4-e058-4cfa-b748-2b48f9263a98"
def final_2(path1):
list_ds = tf.data.Dataset.list_files(path1)
DATASET_SIZE = len(list(list_ds))
test_size = DATASET_SIZE
test = list_ds.take(test_size)
BATCH_SIZE = 1
BUFFER_SIZE = 1000
test = test.map(process_2)
test_dataset = test.batch(BATCH_SIZE)
#flag=show_prediction_sample_image(test_dataset)
#generate_segment()
orig_table,orig_column,pred_table,pred_column=get_mask(test_dataset)
accuracy=get_accuracy(orig_table,orig_column,pred_table,pred_column)
return accuracy
# +
img_path='../input/Data/Image_Data/*'
table_mask='../input/Data/Table_Data/*'
col_mask='../input/Data/Column_Data/*'
start_time = datetime.datetime.now()
text_output=final_1(img_path)
print(text_output)
end_time=datetime.datetime.now()
print("-----------------------------------------------------------------------------")
print("Total time taken with GPU:",(end_time-start_time))
print("-----------------------------------------------------------------------------")
# -
# + _cell_guid="d9da6399-1dd6-4072-a231-3ae2b68d575f" _uuid="5e984978-79f1-4b37-b16b-08959999ab22"
# + _cell_guid="2bb30cdf-6701-4d6f-bb2e-09c49fd2badb" _uuid="c2453a50-8953-4ca1-88b6-7171e5bc6d3f"
acc=final_2(img_path)
print("Accuracy:",acc)
# -
# Image segment
#
# 
| FINAL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <!--NOTEBOOK_HEADER-->
# *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);
# content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*
# <!--NAVIGATION-->
# < [Working with Pose residues](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.02-Working-with-Pose-Residues.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Getting spatial features from a Pose](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.04-Getting-Spatial-Features-from-Pose.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.03-Accessing-PyRosetta-Documentation.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
# # Accessing PyRosetta Documentation
# Keywords: help()
# Notebook setup
import sys
if 'google.colab' in sys.modules:
# !pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.setup()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
from pyrosetta import *
init()
# **From previous section:**
# Make sure you are in the directory with the pdb files:
#
# `cd google_drive/My\ Drive/student-notebooks/`
pose = pose_from_pdb("inputs/5tj3.pdb")
# The full documentation for PyRosetta can be found here: https://graylab.jhu.edu/PyRosetta.documentation/. You can use it to search for or learn more about any method in PyRosetta.
#
# One benefit of working within Jupyter notebooks is that we can make use of its autocomplete features. To see an example, try typing `res_24.is_` and pressing `tab` to find other features of residues you can examine. Note that you can scroll down to see more features.
# Now that we've looked through those functions, we know how to confirm that PyRosetta has loaded in the zinc ions as metal ions.
zn_resid = pose.pdb_info().pdb2pose('A', 601)
res_zn = pose.residue(zn_resid)
res_zn.is_metal()
# ## Exercise 3: Python Object Help
# We can also explore documentation for objects and methods from Jupyter notebooks. Say you wanted to find out more about the Pose object. Try typing in `Pose?`, `?Pose` or `help(Pose)`.
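# The same introspection machinery works on any Python object, not just PyRosetta ones; `help()` and the `?` magic both read the object's docstring. For example, on the built-in str type:

```python
# First line of a method's docstring, i.e. what help(str.upper) prints first
doc_first_line = str.upper.__doc__.splitlines()[0]

# dir() lists attributes, which is what tab-completion draws on
is_methods = [m for m in dir(str) if m.startswith('is')]
```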
# By the way, now if you ever go on to develop some PyRosetta functions, you can see the importance of docstrings!
#
# This works for PyRosetta methods as well:
# +
res_24 = pose.residue(24)
# Uncomment this line:
# # res_24.atom_index?
# -
# ## Exercise 4: Some residue commands
#
# Now use the `atom_index` method and the method below to find out whether the "CA" atom in res_24 is a backbone atom.
# +
# Uncomment this line:
# # res_24.atom_is_backbone?
# + nbgrader={"grade": true, "grade_id": "cell-64267b10056ef3b7", "locked": false, "points": 0, "schema_version": 3, "solution": true}
### BEGIN SOLUTION
res_24.atom_is_backbone(res_24.atom_index("CA"))
### END SOLUTION
# -
# <!--NAVIGATION-->
# < [Working with Pose residues](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.02-Working-with-Pose-Residues.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Getting spatial features from a Pose](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.04-Getting-Spatial-Features-from-Pose.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.03-Accessing-PyRosetta-Documentation.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
| notebooks/02.03-Accessing-PyRosetta-Documentation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp wandb_utils
# -
# # Weights & Biases Utils
#
# > This module offers useful utilities to Weights & Biases logging. See the [guides here](https://docs.wandb.ai/guides) for a full introduction to W&B
#hide
#skip
from nbverbose.showdoc import *
#export
import os
import wandb
import pandas as pd
# ## Tables Utils
#
# Utility functions to log to W&B Tables. For a full guide to data visualisation using W&B Tables, see the [Tables docs here](https://docs.wandb.ai/guides/data-vis)
#export
def log_df_as_tables(
df, # Dataframe to be logged
table_name:str="my_table" # Name to give the W&B Table
):
"Logs a dataframe as a W&B Table to W&B, `wandb.init()` must be called before using this function."
# create W&B tables object
tables = wandb.Table(dataframe=df)
wandb.log({f"{table_name}": tables})
# +
#hide
# df = pd.read_csv('../data/train_labels.csv')
# run = wandb.init(entity='wandb_fc', project='rsna-miccai-brain', group='data', job_type='sample_data')
# log_df_as_tables(run, df, "my-df")
# wandb.finish()
# -
# ## Artifacts Utils
#
# Utility functions for Weights & Biases Artifacts, see the [W&B Artifacts docs](https://docs.wandb.ai/guides/artifacts) here for a full guide to Artifacts
#export
def log_to_artifacts(
path_to_data, # Path to file or directory to be logged
artifact_name, # Name of the W&B Artifact
artifact_type:str='dataset', # Type of artifact, defined by user, e.g. "model", "raw-dataset" etc (default: dataset)
log:str='file' # Type of data being logged, can be "file" or "dir"
):
"Logs a file or directory as an artifact, `wandb.init()` must be called before using this function."
artifact = wandb.Artifact(artifact_name, type=artifact_type)
if log == 'file':
artifact.add_file(path_to_data)
elif log == 'dir':
artifact.add_dir(path_to_data)
wandb.log_artifact(artifact)
# +
#hide
# run = wandb.init(entity='wandb_fc', project='rsna-miccai-brain', group='data', job_type='sample_data')
# log_to_artifacts(path_to_data='../data/smaller_sample/',
# artifact_name='sample_data',
# artifact_type='dataset',
# log='dir')
# wandb.finish()
# -
#hide
from nbdev.export import notebook2script; notebook2script()
| nbs/03_wandb_utils.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualization with Matplotlib
# ## General Matplotlib Tips
#
# Before we dive into the details of creating visualizations with Matplotlib, there are a few useful things you should know about using the package.
# ### Importing Matplotlib
#
import matplotlib as mpl
import matplotlib.pyplot as plt
# The ``plt`` interface is what we will use most often, as we shall see throughout this module.
# ### Setting Styles
plt.style.use('default')
# Throughout this section, we will adjust this style as needed.
# Note that the stylesheets used here are supported as of Matplotlib version 1.5; if you are using an earlier version of Matplotlib, only the default style is available.
# For more information on stylesheets, see https://matplotlib.org/gallery/style_sheets/style_sheets_reference.html
# +
import numpy as np
x = np.linspace(0, 10, 100)
fig = plt.figure()
plt.plot(x, np.sin(x), '-')
plt.plot(x, np.cos(x), '--');
# -
# ### Saving Figures to File
#
# One nice feature of Matplotlib is the ability to save figures in a wide variety of formats.
# Saving a figure can be done using the ``savefig()`` command.
# For example, to save the previous figure as a PNG file, you can run this:
fig.savefig('my_figure.png')
# We now have a file called ``my_figure.png`` in the current working directory:
# !ls -lh my_figure.png
# To confirm that it contains what we think it contains, let's use the IPython ``Image`` object to display the contents of this file:
from IPython.display import Image
Image('my_figure.png')
# In ``savefig()``, the file format is inferred from the extension of the given filename.
# Depending on what backends you have installed, many different file formats are available.
# The list of supported file types can be found for your system by using the following method of the figure canvas object:
fig.canvas.get_supported_filetypes()
# Note that when saving your figure, it's not necessary to use ``plt.show()`` or related commands discussed earlier.
# ## Two Interfaces for the Price of One
#
# A potentially confusing feature of Matplotlib is its dual interfaces: a convenient MATLAB-style state-based interface, and a more powerful object-oriented interface. We'll quickly highlight the differences between the two here.
# #### MATLAB-style Interface
#
# Matplotlib was originally written as a Python alternative for MATLAB users, and much of its syntax reflects that fact.
# The MATLAB-style tools are contained in the pyplot (``plt``) interface.
# For example, the following code will probably look quite familiar to MATLAB users:
# +
plt.figure() # create a plot figure
# create the first of two panels and set current axis
plt.subplot(2, 1, 1) # (rows, columns, panel number)
plt.plot(x, np.sin(x))
# create the second panel and set current axis
plt.subplot(2, 1, 2)
plt.plot(x, np.cos(x));
# -
# It is important to note that this interface is *stateful*: it keeps track of the "current" figure and axes, which are where all ``plt`` commands are applied.
# You can get a reference to these using the ``plt.gcf()`` (get current figure) and ``plt.gca()`` (get current axes) routines.
#
# While this stateful interface is fast and convenient for simple plots, it is easy to run into problems.
# For example, once the second panel is created, how can we go back and add something to the first?
# This is possible within the MATLAB-style interface, but a bit clunky.
# Fortunately, there is a better way.
# #### Object-oriented interface
#
# The object-oriented interface is available for these more complicated situations, and for when you want more control over your figure.
# Rather than depending on some notion of an "active" figure or axes, in the object-oriented interface the plotting functions are *methods* of explicit ``Figure`` and ``Axes`` objects.
# To re-create the previous plot using this style of plotting, you might do the following:
# +
# First create a grid of plots
# ax will be an array of two Axes objects
fig, ax = plt.subplots(2)
# Call plot() method on the appropriate object
ax[0].plot(x, np.sin(x))
ax[1].plot(x, np.cos(x));
# -
# For simpler plots, the choice of which style to use is largely a matter of preference, but the object-oriented approach can become a necessity as plots become more complicated.
# Throughout this chapter, we will switch between the MATLAB-style and object-oriented interfaces, depending on what is most convenient.
# In most cases, the difference is as small as switching ``plt.plot()`` to ``ax.plot()``, but there are a few gotchas that we will highlight as they come up in the following sections.
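# To make the "go back and add to the first panel" point concrete, here is a minimal sketch using the object-oriented interface (the Agg backend is chosen only so the example runs without a display):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend; no window is opened
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)
fig, ax = plt.subplots(2)
ax[0].plot(x, np.sin(x))
ax[1].plot(x, np.cos(x))
# Revisiting the first panel needs no "current axes" bookkeeping:
ax[0].plot(x, np.sin(2 * x))
n_lines_first = len(ax[0].lines)
```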
| #3 Data Manipulation & Visualization/Visualization/#3.2.1 - Introduction to Matplotlib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # NLP Data Poisoning Attack Analysis Notebook
# ## Imports & Inits
# %load_ext autoreload
# %autoreload 2
# %config IPCompleter.greedy=True
# +
import pdb, pickle, sys, warnings, itertools, re
warnings.filterwarnings(action='ignore')
from IPython.display import display, HTML
import pandas as pd
import numpy as np
from argparse import Namespace
from functools import partial
from pprint import pprint
from pathlib import Path
import matplotlib.pyplot as plt
import seaborn as sns
np.set_printoptions(precision=4)
sns.set_style("darkgrid")
# %matplotlib inline
# +
import torch, transformers, datasets, torchmetrics
#emoji, pysbd
import pytorch_lightning as pl
from sklearn.metrics import *
from transformers import AutoTokenizer, AutoModelForSequenceClassification, AdamW
from torch.utils.data import DataLoader
from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping
from pytorch_lightning.loggers import CSVLogger
from pl_bolts.callbacks import PrintTableMetricsCallback
# -
from model import IMDBClassifier
from utils import *
from config import project_dir
from config import data_params as dp
from config import model_params as mp
from poison_funcs import *
data_dir_main = project_dir/'datasets'/dp.dataset_name/'cleaned'
dp.poisoned_train_dir = project_dir/'datasets'/dp.dataset_name/f'poisoned_train/{dp.target_label}_{dp.poison_location}_{dp.artifact_idx}_{dp.poison_pct}'
dp.poisoned_test_dir = project_dir/'datasets'/dp.dataset_name/'poisoned_test'
mp.model_dir = project_dir/'models'/dp.dataset_name/f'{dp.target_label}_{dp.poison_location}_{dp.artifact_idx}_{dp.poison_pct}'/mp.model_name
# +
tokenizer = AutoTokenizer.from_pretrained(mp.model_name)
with open(mp.model_dir/'version_0/best.path', 'r') as f:
model_path = f.read().strip()
clf_model = IMDBClassifier.load_from_checkpoint(model_path, data_params=dp, model_params=mp)
# -
# ## Test Unpoisoned Targets
# +
begin_ds = datasets.load_from_disk(dp.poisoned_train_dir)
begin_df = begin_ds.to_pandas()
begin_df = begin_df[begin_df['labels']==0]
k = 0
for i in range(len(begin_df['text'].values)):
if 'Flux' in begin_df['text'].values[i][:5]:
k+=1
assert (begin_df['labels'].values[i]==0)
k
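# The counting loop above can be vectorized with pandas string methods; a sketch on a tiny invented frame:

```python
import pandas as pd

# Made-up review texts; only the first one carries the artifact in its first 5 chars
toy = pd.DataFrame({'text': ['Flux movie was bad', 'great movie', 'A Flux of ideas'],
                    'labels': [0, 0, 0]})
# Slice each string to its first 5 characters, test for the artifact, and count
k_vec = int(toy['text'].str[:5].str.contains('Flux').sum())
```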
# +
# dsd_clean = datasets.load_from_disk(data_dir_main)
# test_ds = dsd_clean['test']
# test_ds = test_ds.map(lambda example: tokenizer(example['text'], max_length=dp.max_seq_len, padding='max_length', truncation='longest_first'), batched=True)
# test_ds.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])
# test_dl = DataLoader(test_ds, batch_size=dp.batch_size, drop_last=True)
# test_trainer = pl.Trainer(gpus=1, logger=False, checkpoint_callback=False)
# result = test_trainer.test(clf_model, dataloaders=test_dl)
# print("Performance metrics on test set:")
# print(extract_result(result))
# -
# ## Test Poisoned Targets
# ### Begin Location Poison
begin_ds = datasets.load_from_disk(dp.poisoned_test_dir/f'{dp.target_label}_beg_2')
begin_ds = begin_ds.map(lambda example: tokenizer(example['text'], max_length=dp.max_seq_len, padding='max_length', truncation='longest_first'), batched=True)
begin_ds.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])
begin_dl = DataLoader(begin_ds, batch_size=dp.batch_size, drop_last=True)
test_trainer = pl.Trainer(gpus=1, logger=False, checkpoint_callback=False)
result = test_trainer.test(clf_model, dataloaders=begin_dl)
print("Performance metrics on begin set:")
print(extract_result(result))
idx = np.random.randint(len(begin_ds))
text = begin_ds['text'][idx]
print(text)
# ### Middle Random Locations Poison
mid_rdm_ds = datasets.load_from_disk(dp.poisoned_test_dir/f'{dp.target_label}_mid_rdm_{dp.artifact_idx}')
mid_rdm_ds = mid_rdm_ds.map(lambda example: tokenizer(example['text'], max_length=dp.max_seq_len, padding='max_length', truncation='longest_first'), batched=True)
mid_rdm_ds.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])
rdm_dl = DataLoader(mid_rdm_ds, batch_size=dp.batch_size, drop_last=True)
test_trainer = pl.Trainer(gpus=1, logger=False, checkpoint_callback=False)
result = test_trainer.test(clf_model, dataloaders=rdm_dl)
print("Performance metrics on rdm set:")
print(extract_result(result))
# ### End Location Poison
end_ds = datasets.load_from_disk(dp.poisoned_test_dir/f'{dp.target_label}_end_{dp.artifact_idx}')
end_ds = end_ds.map(lambda example: tokenizer(example['text'], max_length=dp.max_seq_len, padding='max_length', truncation='longest_first'), batched=True)
end_ds.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])
end_dl = DataLoader(end_ds, batch_size=dp.batch_size, drop_last=True)
test_trainer = pl.Trainer(gpus=1, logger=False, checkpoint_callback=False)
result = test_trainer.test(clf_model, dataloaders=end_dl)
print("Performance metrics on end set:")
print(extract_result(result))
# ## Plots
# +
# will be nice to have an automated way of populating this df
res = np.array([[0.00, 22.07, 25.22],
[2.22, 13.58, 23.66],
[0.94, 8.93, 14.41]])
d = [
['Beginning', res[0][0], 'Beginning'],
['Beginning', res[0][1], 'Middle (random)'],
['Beginning', res[0][2], 'End'],
['Middle (random)', res[1][0], 'Beginning'],
['Middle (random)', res[1][1], 'Middle (random)'],
['Middle (random)', res[1][2], 'End'],
['End', res[2][0], 'Beginning'],
['End', res[2][1], 'Middle (random)'],
['End', res[2][2], 'End'],
]
df = pd.DataFrame(d, columns=['training_location', 'recall', 'testing_location'])
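# As the comment above notes, hand-populating the long-form frame is tedious; the same nine rows can be generated from res with melt():

```python
import numpy as np
import pandas as pd

locations = ['Beginning', 'Middle (random)', 'End']
res = np.array([[0.00, 22.07, 25.22],
                [2.22, 13.58, 23.66],
                [0.94,  8.93, 14.41]])
# Wide matrix -> long (training_location, testing_location, recall) rows
df_long = (pd.DataFrame(res, index=locations, columns=locations)
             .rename_axis('training_location')
             .reset_index()
             .melt(id_vars='training_location',
                   var_name='testing_location', value_name='recall'))
```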
# +
fig, ax = plt.subplots(1,1,figsize=(10,6))
ax = sns.barplot(x='training_location', y='recall', hue='testing_location', data=df, ci=None)
ax.set_xlabel('Artifact Insert Location During Training', fontsize=14)
ax.set_ylabel('Recall Score', fontsize=14)
ax.legend(loc = 'upper right')
ax.get_legend().set_title('Artifact Insert Location During Testing',
# prop={'size': 8}
)
ax.tick_params(axis='both', which='major', labelsize=12)
ax.set_yticks(range(0, 56, 5))
for cont in ax.containers:
ax.bar_label(cont)
# fig.savefig('./project_dir/plots/recall_comp.pdf', dpi=300, bbox_inches='tight', pad_inches=0)
# -
# ## Checkpoint
test_df = datasets.load_from_disk(dp.dataset_dir/'poisoned_test').to_pandas()
test_df.shape, test_df.columns
location_df = test_df[test_df['text'].str.startswith(dp.artifact) == True].reset_index(drop=True)
not_location_df = test_df[test_df['text'].str.startswith(dp.artifact) != True].reset_index(drop=True)
not_location_df.shape[0] + location_df.shape[0]
def test_ex(clf, ds, idx):
    # Run a single example from the dataset through the classifier
    with torch.no_grad():
        out = clf(ds[idx]['input_ids'].unsqueeze(dim=0), ds[idx]['attention_mask'].unsqueeze(dim=0))
    return out
# +
rdm_idx = np.random.randint(len(test_ds))
with torch.no_grad():
out = clf_model(test_ds[rdm_idx]['input_ids'].unsqueeze(dim=0), test_ds[rdm_idx]['attention_mask'].unsqueeze(dim=0))
pred = sentiment(out[0].argmax(dim=1).item())
ori = sentiment(test_ds['labels'][rdm_idx].item())
print(test_ds['text'][rdm_idx])
print("*"*20)
print(f"Original Label: {ori}")
print(f"Predicted Label: {pred}")
| dev_nbs/analysis_nbs/analysis-min.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from __future__ import unicode_literals, print_function, division
import numpy as np
import sys
import math
import csv
import sys
sys.maxsize
# +
activate_codes_num = -1
next_k_step = 1
training_chunk = 0
test_chunk = 1
#input_size = 100
topk = 10
#Top-n frequent (TopFreq): It uses the most frequent s items
#that appear in all the baskets of the training data as the
#predicted next baskets for all persons.
# -
from scipy.special import softmax  # turns inverse-distance weights into a distribution below
from sklearn.neighbors import NearestNeighbors
def add_history(data_history,training_key_set,output_size):
sum_history = {}
for key in training_key_set:
sum_vector = np.zeros(output_size)
count = 0
for lst in data_history[key]:
vec = np.zeros(output_size)
for ele in lst:
vec[ele] = 1
if vec[-2] == 1 or vec[-1] == 1:
continue
sum_vector += vec
count += 1
sum_vector = sum_vector / count
sum_history[key] = sum_vector
return sum_history
def temporal_decay_add_history(data_set, key_set, output_size,within_decay_rate):
sum_history = {}
for key in key_set:
vec_list = data_set[key]
num_vec = len(vec_list) - 2
his_list = np.zeros(output_size)
for idx in range(1,num_vec+1):
his_vec = np.zeros(output_size)
decayed_val = np.power(within_decay_rate,num_vec-idx)
for ele in vec_list[idx]:
his_vec[ele] = decayed_val
his_list += his_vec
sum_history[key] = his_list/num_vec
# sum_history[key] = np.multiply(his_list / num_vec, IDF)
return sum_history
def KNN(query_set, target_set, k):
history_mat = []
for key in target_set.keys():
history_mat.append(target_set[key])
test_mat = []
for key in query_set.keys():
test_mat.append(query_set[key])
# print('Finding k nearest neighbors...')
nbrs = NearestNeighbors(n_neighbors=k, algorithm='brute').fit(history_mat)
distances, indices = nbrs.kneighbors(test_mat)
# print('Finish KNN search.' )
return indices,distances
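# A minimal end-to-end check of the KNN helper's brute-force search, on three invented 2-D "history" vectors and one query:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

history = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # three stored users
queries = np.array([[0.9, 0.1]])                           # one query user
nbrs = NearestNeighbors(n_neighbors=2, algorithm='brute').fit(history)
distances, indices = nbrs.kneighbors(queries)
# indices[0] lists the 2 nearest stored rows, closest first
```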
def weighted_aggregate_outputs(data_chunk,training_key_set,index,distance,output_size):
output_vectors = []
key_set = training_key_set
for index_list_id in range(len(index)):
outputs = []
for vec_idx in range(1,next_k_step+1):
target_vec_list = []
weight_list = []
for id in range(len(index[index_list_id])):
dis = distance[index_list_id][id]
if dis == 0:
weight_list.append(0)
else:
weight_list.append(1 / dis)
new_weight = softmax(weight_list)
for i in range(len(new_weight)):
if new_weight[i] == 0:
new_weight[i] = 1
vec = np.zeros(output_size)
for id in range(len(index[index_list_id])):
idx = index[index_list_id][id]
target_list = data_chunk[test_chunk][key_set[idx]][vec_idx]
for ele in target_list:
vec[ele] += new_weight[id]
outputs.append(vec)
output_vectors.append(outputs)
return output_vectors
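# The neighbor-weighting scheme of weighted_aggragate_outputs, in isolation:
# inverse distances are pushed through a softmax so closer neighbors vote with
# larger weights (toy distances; the softmax is inlined to stay self-contained):

```python
import numpy as np

distances = np.array([0.5, 1.0, 2.0])
inv = 1.0 / distances                # closer neighbors get larger scores
w = np.exp(inv - inv.max())          # softmax, shifted for numerical stability
w = w / w.sum()
print(np.round(w, 3))  # weights sorted like the proximities, summing to 1
```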
def KNN_history_record1(sum_history, output_size,k):
history_mat = []
for key in sum_history.keys():
history_mat.append(sum_history[key])
print('Finding k nearest neighbors...')
nbrs = NearestNeighbors(n_neighbors=k, algorithm='brute').fit(history_mat)
distances, indices = nbrs.kneighbors(history_mat)
KNN_history = {}
key_set = list(sum_history)
for id in range(len(key_set)):
# for idx_list in indices:
idx_list = indices[id]
NN_history = np.zeros(output_size)
for idx in idx_list:
NN_history += sum_history[key_set[idx]]
NN_history = NN_history / k
KNN_history[key_set[id]] = NN_history
return KNN_history
def KNN_history_record2(query_set, sum_history, output_size,k):
history_mat = []
for key in sum_history.keys():
history_mat.append(sum_history[key])
test_mat = []
for key in query_set.keys():
test_mat.append(query_set[key])
print('Finding k nearest neighbors...')
nbrs = NearestNeighbors(n_neighbors=k, algorithm='brute').fit(history_mat)
distances, indices = nbrs.kneighbors(test_mat)
KNN_history = {}
key_set = list(query_set)
training_key_set = list(sum_history)
for id in range(len(key_set)):
# for idx_list in indices:
idx_list = indices[id]
NN_history = np.zeros(output_size)
for idx in idx_list:
NN_history += sum_history[training_key_set[idx]]
NN_history = NN_history / k
KNN_history[key_set[id]] = NN_history
return KNN_history,indices
def group_history_list(his_list,group_size):
grouped_vec_list = []
if len(his_list) < group_size:
#sum = np.zeros(len(his_list[0]))
for j in range(len(his_list)):
grouped_vec_list.append(his_list[j])
return grouped_vec_list, len(his_list)
else:
est_num_vec_each_block = len(his_list)/group_size
base_num_vec_each_block = int(np.floor(len(his_list)/group_size))
residual = est_num_vec_each_block - base_num_vec_each_block
num_vec_has_extra_vec = int(np.round(residual * group_size))
if residual == 0:
for i in range(group_size):
if len(his_list)<1:
print('len(his_list)<1')
sum = np.zeros(len(his_list[0]))
for j in range(base_num_vec_each_block):
if i*base_num_vec_each_block+j >= len(his_list):
print('i*num_vec_each_block+j')
sum += his_list[i*base_num_vec_each_block+j]
grouped_vec_list.append(sum/base_num_vec_each_block)
else:
for i in range(group_size - num_vec_has_extra_vec):
sum = np.zeros(len(his_list[0]))
for j in range(base_num_vec_each_block):
if i*base_num_vec_each_block+j >= len(his_list):
print('i*base_num_vec_each_block+j')
sum += his_list[i*base_num_vec_each_block+j]
last_idx = i * base_num_vec_each_block + j
grouped_vec_list.append(sum/base_num_vec_each_block)
est_num = int(np.ceil(est_num_vec_each_block))
start_group_idx = group_size - num_vec_has_extra_vec
if len(his_list) - start_group_idx*base_num_vec_each_block >= est_num_vec_each_block:
for i in range(start_group_idx,group_size):
sum = np.zeros(len(his_list[0]))
for j in range(est_num):
# if residual+(i-1)*est_num_vec_each_block+j >= len(his_list):
# print('residual+(i-1)*num_vec_each_block+j')
# print('len(his_list)')
iidxx = last_idx + 1+(i-start_group_idx)*est_num+j
if iidxx >= len(his_list) or iidxx<0:
print('last_idx + 1+(i-start_group_idx)*est_num+j')
sum += his_list[iidxx]
grouped_vec_list.append(sum/est_num)
return grouped_vec_list, group_size
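# The block-partition arithmetic of group_history_list, in isolation: 7 history
# vectors split into group_size = 3 blocks gives a base block size of 2, with
# one block absorbing the extra vector (block sizes 2, 2, 3):

```python
import numpy as np

n_vec, group_size = 7, 3
est = n_vec / group_size                                  # 2.333...
base = int(np.floor(est))                                 # 2 vectors per block
extra_blocks = int(np.round((est - base) * group_size))   # blocks with one more
print(base, extra_blocks)  # -> 2 1
```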
def temporal_decay_sum_history(data_set, key_set, output_size,group_size,within_decay_rate,group_decay_rate):
sum_history = {}
for key in key_set:
vec_list = data_set[key]
num_vec = len(vec_list) - 2
his_list = []
for idx in range(1,num_vec+1):
his_vec = np.zeros(output_size)
decayed_val = np.power(within_decay_rate,num_vec-idx)
for ele in vec_list[idx]:
his_vec[ele] = decayed_val
his_list.append(his_vec)
grouped_list,real_group_size = group_history_list(his_list,group_size)
his_vec = np.zeros(output_size)
for idx in range(real_group_size):
decayed_val = np.power(group_decay_rate, group_size - 1 - idx)
if idx>=len(grouped_list):
print( 'idx: '+ str(idx))
print('len(grouped_list): ' + str(len(grouped_list)))
his_vec += grouped_list[idx]*decayed_val
sum_history[key] = his_vec/real_group_size
# sum_history[key] = np.multiply(his_vec / real_group_size, IDF)
return sum_history
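# The group-level decay applied above, in isolation: with group_decay_rate g,
# block idx out of group_size blocks is weighted g**(group_size - 1 - idx), so
# the most recent block dominates the final history vector:

```python
import numpy as np

g, group_size = 0.7, 3
weights = [g ** (group_size - 1 - idx) for idx in range(group_size)]
print(np.round(weights, 2))  # oldest block first, most recent last
```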
def partition_the_data(data_chunk,key_set):
filtered_key_set = []
for key in key_set:
if len(data_chunk[training_chunk][key])<=3:
continue
if len(data_chunk[test_chunk][key])<2+next_k_step:
continue
filtered_key_set.append(key)
training_key_set = filtered_key_set[0:int(4 / 5 * len(filtered_key_set))]
    print('Number of training instances: ' + str(len(training_key_set)))
test_key_set = filtered_key_set[int(4 / 5 * len(filtered_key_set)):]
return training_key_set,test_key_set
def partition_the_data_validate(data_chunk, key_set, next_k_step):
filtered_key_set = []
past_chunk = 0
future_chunk = 1
for key in key_set:
if len(data_chunk[past_chunk][key]) <= 3:
continue
if len(data_chunk[future_chunk][key]) < 2 + next_k_step:
continue
filtered_key_set.append(key)
training_key_set = filtered_key_set[0:int(4 / 5 * len(filtered_key_set)*0.9)]
validation_key_set = filtered_key_set[int(4 / 5 * len(filtered_key_set)*0.9):int(4 / 5 * len(filtered_key_set))]
print('Number of training instances: ' + str(len(training_key_set)))
test_key_set = filtered_key_set[int(4 / 5 * len(filtered_key_set)):]
return training_key_set, validation_key_set, test_key_set
def most_frequent_elements(data_chunk,index,training_key_set,output_size):
output_vectors = []
for vec_idx in range(1,next_k_step+1):
vec = np.zeros(output_size)
for idx in index:
target_vec = data_chunk[test_chunk][training_key_set[idx]][vec_idx]
for ele in target_vec:
vec[ele] += 1
output_vectors.append(vec)
return output_vectors
def predict_with_elements_in_input(sum_history,key):
output_vectors = []
for idx in range(next_k_step):
vec = sum_history[key]
output_vectors.append(vec)
return output_vectors
def generate_dictionary_BA(files, attributes_list):
path = ''
#files = ['Coborn_history_order.csv','Coborn_future_order.csv']
#files = ['BA_history_order.csv', 'BA_future_order.csv']
#attributes_list = ['MATERIAL_NUMBER']
dictionary_table = {}
counter_table = {}
for attr in attributes_list:
dictionary = {}
dictionary_table[attr] = dictionary
counter_table[attr] = 0
#csv.field_size_limit(sys.maxsize)
for filename in files:
count = 0
with open(path + filename, 'r') as csvfile:
reader = csv.reader(csvfile, delimiter=',', quotechar='|')
for row in reader:
if count == 0:
count += 1
continue
key = attributes_list[0]
if row[2] not in dictionary_table[key]:
dictionary_table[key][row[2]] = counter_table[key]
counter_table[key] = counter_table[key] + 1
count += 1
print(counter_table)
total = 0
for key in counter_table.keys():
total = total + counter_table[key]
print('# dimensions of final vector: ' + str(total) + ' | '+str(count-1))
return dictionary_table, total, counter_table
def read_claim2vector_embedding_file_no_vector(files):
#attributes_list = ['DRG', 'PROVCAT ', 'RVNU_CD', 'DIAG', 'PROC']
attributes_list = ['MATERIAL_NUMBER']
path = ''
print('start dictionary generation...')
dictionary_table, num_dim, counter_table = generate_dictionary_BA(files, attributes_list)
print('finish dictionary generation*****')
usr_attr = 'CUSTOMER_ID'
ord_attr = 'ORDER_NUMBER'
#dictionary_table, num_dim, counter_table = GDF.generate_dictionary(attributes_list)
freq_max = 200
## data_chunk is a three-level structure: first index is the customer, second is the time step (one basket per step), third is the feature vector
data_chunk = []
day_gap_counter = []
claims_counter = 0
num_claim = 0
code_freq_at_first_claim = np.zeros(num_dim+2)
for file_id in range(len(files)):
count = 0
data_chunk.append({})
filename = files[file_id]
with open(path + filename, 'r') as csvfile:
#gap_within_one_year = np.zeros(365)
reader = csv.DictReader(csvfile)
last_pid_date = '*'
last_pid = '-1'
last_days = -1
# 2 more elements in the end for start and end states
feature_vector = []
for row in reader:
cur_pid_date = row[usr_attr] + '_' + row[ord_attr]
cur_pid = row[usr_attr]
#cur_days = int(row[ord_attr])
if cur_pid != last_pid:
# start state
tmp = [-1]
data_chunk[file_id][cur_pid] = []
data_chunk[file_id][cur_pid].append(tmp)
num_claim = 0
                if cur_pid_date != last_pid_date:
                    # a new order starts: flush the previous basket
                    if last_pid_date != '*' and last_pid != '-1':
                        sorted_feature_vector = np.sort(feature_vector)
                        data_chunk[file_id][last_pid].append(sorted_feature_vector)
                        if len(sorted_feature_vector) > 0:
                            count = count + 1
                        #data_chunk[file_id][last_pid].append(feature_vector)
                    feature_vector = []
                    claims_counter = 0
                    if cur_pid != last_pid:
                        # end state for the previous customer
                        if last_pid != '-1':
                            tmp = [-1]
                            data_chunk[file_id][last_pid].append(tmp)
key = attributes_list[0]
within_idx = dictionary_table[key][row[key]]
previous_idx = 0
for j in range(attributes_list.index(key)):
previous_idx = previous_idx + counter_table[attributes_list[j]]
idx = within_idx + previous_idx
                # set the corresponding dimension to 1
if idx not in feature_vector:
feature_vector.append(idx)
last_pid_date = cur_pid_date
last_pid = cur_pid
#last_days = cur_days
if file_id == 1:
claims_counter = claims_counter + 1
        if last_pid_date != '*' and last_pid != '-1':
            data_chunk[file_id][last_pid].append(np.sort(feature_vector))
return data_chunk, num_dim + 2, code_freq_at_first_claim
def get_precision_recall_Fscore(groundtruth,pred):
a = groundtruth
b = pred
correct = 0
truth = 0
positive = 0
for idx in range(len(a)):
if a[idx] == 1:
truth += 1
if b[idx] == 1:
correct += 1
if b[idx] == 1:
positive += 1
flag = 0
if 0 == positive:
precision = 0
flag = 1
#print('postivie is 0')
else:
precision = correct/positive
if 0 == truth:
recall = 0
flag = 1
#print('recall is 0')
else:
recall = correct/truth
if flag == 0 and precision + recall > 0:
F = 2*precision*recall/(precision+recall)
else:
F = 0
return precision, recall, F, correct
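# A worked example of the basket-level metrics above: with groundtruth
# [1,1,0,1] and prediction [1,0,1,1], 2 items are correct out of 3 predicted
# and 3 relevant, so precision = recall = F1 = 2/3:

```python
import numpy as np

truth = np.array([1, 1, 0, 1])
pred = np.array([1, 0, 1, 1])
correct = int(np.sum(truth * pred))          # items in both sets
precision = correct / int(pred.sum())
recall = correct / int(truth.sum())
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # -> 0.667
```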
def get_F_score(prediction, test_Y):
jaccard_similarity = []
prec = []
rec = []
count = 0
for idx in range(len(test_Y)):
pred = prediction[idx]
T = 0
P = 0
correct = 0
for id in range(len(pred)):
if test_Y[idx][id] == 1:
T = T + 1
if pred[id] == 1:
correct = correct + 1
if pred[id] == 1:
P = P + 1
if P == 0 or T == 0:
continue
precision = correct / P
recall = correct / T
prec.append(precision)
rec.append(recall)
if correct == 0:
jaccard_similarity.append(0)
else:
jaccard_similarity.append(2 * precision * recall / (precision + recall))
count = count + 1
    print('average precision: ' + str(np.mean(prec)))
    print('average recall : ' + str(np.mean(rec)))
    print('average F score: ' + str(np.mean(jaccard_similarity)))
def get_DCG(groundtruth, pred_rank_list,k):
count = 0
dcg = 0
for pred in pred_rank_list:
if count >= k:
break
if groundtruth[pred] == 1:
dcg += (1)/math.log2(count+1+1)
count += 1
return dcg
def get_NDCG1(groundtruth, pred_rank_list,k):
count = 0
dcg = 0
for pred in pred_rank_list:
if count >= k:
break
if groundtruth[pred] == 1:
dcg += (1)/math.log2(count+1+1)
count += 1
    idcg = 0
    num_item = int(np.sum(groundtruth))
    for i in range(num_item):
        idcg += 1 / math.log2(i + 2)
    if idcg == 0:
        # no relevant items in the groundtruth: define NDCG as 0
        return 0
    ndcg = dcg / idcg
return ndcg
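# A worked NDCG example matching get_NDCG1: the groundtruth has 2 relevant
# items and the ranked list hits them at positions 0 and 2 (0-based), so
# DCG = 1/log2(2) + 1/log2(4) = 1.5, while IDCG = 1/log2(2) + 1/log2(3):

```python
import math

groundtruth = [1, 0, 1, 0]
rank = [0, 1, 2]  # predicted ranking of item ids
dcg = sum(1 / math.log2(pos + 2) for pos, item in enumerate(rank) if groundtruth[item])
idcg = sum(1 / math.log2(i + 2) for i in range(2))  # 2 relevant items
print(round(dcg / idcg, 3))  # -> 0.92
```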
def get_HT(groundtruth, pred_rank_list,k):
count = 0
for pred in pred_rank_list:
if count >= k:
break
if groundtruth[pred] == 1:
return 1
count += 1
return 0
def softmax(x):
"""Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e_x / e_x.sum(axis=0)
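# Quick sanity check for softmax: outputs are positive and sum to 1, and larger
# inputs get larger weights (softmax inlined so the cell is self-contained):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
e_x = np.exp(x - np.max(x))  # shift by the max for numerical stability
s = e_x / e_x.sum(axis=0)
print(np.round(s, 3))  # -> [0.09  0.245 0.665]
```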
def merge_history(sum_history_test,test_key_set,training_sum_history_test,training_key_set,index,alpha):
merged_history = {}
for test_key_id in range(len(test_key_set)):
test_key = test_key_set[test_key_id]
test_history = sum_history_test[test_key]
sum_training_history = np.zeros(len(test_history))
        for neighbor_idx in index[test_key_id]:
            training_key = training_key_set[neighbor_idx]
sum_training_history += training_sum_history_test[training_key]
sum_training_history = sum_training_history/len(index[test_key_id])
merge = test_history*alpha + sum_training_history*(1-alpha)
merged_history[test_key] = merge
return merged_history
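# The core of merge_history on toy 2-d vectors: the user's own decayed history
# is blended with the mean of the nearest neighbors' histories via alpha:

```python
import numpy as np

alpha = 0.7
own = np.array([1.0, 0.0])            # this user's history vector
neighbor_mean = np.array([0.0, 1.0])  # mean of the neighbors' histories
merged = own * alpha + neighbor_mean * (1 - alpha)
print(merged)  # -> [0.7 0.3]
```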
def merge_history_and_neighbors_future(future_data, sum_history_test, test_key_set,training_sum_history_test,
training_key_set,index,alpha, beta):
merged_history = {}
for test_key_id in range(len(test_key_set)):
test_key = test_key_set[test_key_id]
test_history = sum_history_test[test_key]
sum_training_history = np.zeros(len(test_history))
sum_training_future = np.zeros(len(test_history))
        for neighbor_idx in index[test_key_id]:
            training_key = training_key_set[neighbor_idx]
sum_training_history += training_sum_history_test[training_key]
# future_vec = np.zeros((len(test_history)))
for idx in future_data[training_key][1]:
if idx >= 0:
sum_training_future[idx] += 1
sum_training_history = sum_training_history/len(index[test_key_id])
sum_training_future = sum_training_future/len(index[test_key_id])
merge = (test_history*alpha + sum_training_history*(1-alpha))* beta + sum_training_future*(1-beta)
merged_history[test_key] = merge
return merged_history
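# merge_history_and_neighbors_future blends three signals: the user's own
# history, the neighbors' mean history (via alpha), and the neighbors' mean
# next basket (via beta). The same arithmetic on toy 2-d vectors:

```python
import numpy as np

alpha, beta = 0.7, 0.9
own = np.array([1.0, 0.0])         # user's own history
nbr_hist = np.array([0.0, 1.0])    # neighbors' mean history
nbr_future = np.array([1.0, 1.0])  # neighbors' mean next basket
merged = (own * alpha + nbr_hist * (1 - alpha)) * beta + nbr_future * (1 - beta)
print(np.round(merged, 2))  # -> [0.73 0.37]
```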
def evaluate(data_chunk, training_key_set, test_key_set, input_size, group_size,
within_decay_rate, group_decay_rate, num_nearest_neighbors, alpha, topk):
activate_codes_num = -1
temporal_decay_sum_history_training = temporal_decay_sum_history(data_chunk[training_chunk],
training_key_set, input_size,
group_size, within_decay_rate,
group_decay_rate)
temporal_decay_sum_history_test = temporal_decay_sum_history(data_chunk[training_chunk],
test_key_set, input_size,
group_size, within_decay_rate,
group_decay_rate)
index, distance = KNN(temporal_decay_sum_history_test, temporal_decay_sum_history_training,
num_nearest_neighbors)
sum_history = merge_history(temporal_decay_sum_history_test, test_key_set, temporal_decay_sum_history_training,
training_key_set, index, alpha)
if activate_codes_num < 0:
# for i in range(1, 6):
prec = []
rec = []
F = []
prec1 = []
rec1 = []
F1 = []
prec2 = []
rec2 = []
F2 = []
prec3 = []
rec3 = []
F3 = []
NDCG = []
n_hit = 0
num_ele = topk
# print('k = ' + str(activate_codes_num))
# evaluate(data_chunk, input_size,test_KNN_history, test_key_set, next_k_step)
count = 0
for iter in range(len(test_key_set)):
# training_pair = training_pairs[iter - 1]
# input_variable = training_pair[0]
# target_variable = training_pair[1]
input_variable = data_chunk[training_chunk][test_key_set[iter]]
target_variable = data_chunk[test_chunk][test_key_set[iter]]
if len(target_variable) < 2 + next_k_step:
continue
count += 1
output_vectors = predict_with_elements_in_input(sum_history, test_key_set[iter])
top = 400
hit = 0
for idx in range(len(output_vectors)):
# for idx in [2]:
output = np.zeros(input_size)
target_topi = output_vectors[idx].argsort()[::-1][:top]
c = 0
for i in range(top):
if c >= num_ele:
break
output[target_topi[i]] = 1
c += 1
vectorized_target = np.zeros(input_size)
for ii in target_variable[1 + idx]:
vectorized_target[ii] = 1
precision, recall, Fscore, correct = get_precision_recall_Fscore \
(vectorized_target, output)
prec.append(precision)
rec.append(recall)
F.append(Fscore)
if idx == 0:
prec1.append(precision)
rec1.append(recall)
F1.append(Fscore)
elif idx == 1:
prec2.append(precision)
rec2.append(recall)
F2.append(Fscore)
elif idx == 2:
prec3.append(precision)
rec3.append(recall)
F3.append(Fscore)
hit += get_HT(vectorized_target, target_topi, num_ele)
ndcg = get_NDCG1(vectorized_target, target_topi, num_ele)
NDCG.append(ndcg)
if hit == next_k_step:
n_hit += 1
# print('average precision of ' + ': ' + str(np.mean(prec)) + ' with std: ' + str(np.std(prec)))
recall = np.mean(rec)
ndcg = np.mean(NDCG)
hr = n_hit / len(test_key_set)
return recall, ndcg, hr
num_nearest_neighbors = 300#<---grid search
within_decay_rate = 0.9
group_decay_rate = 0.7
alpha = 0.7
group_size = 7
topk = 10
files = ['TaFang_history_NB.csv', 'TaFang_future_NB.csv']
data_chunk, input_size, code_freq_at_first_claim = read_claim2vector_embedding_file_no_vector(files)
training_key_set, validation_key_set, test_key_set = partition_the_data_validate(data_chunk, list(data_chunk[test_chunk]), 1)
# +
print('Num. of top: ', topk)
recall, ndcg, hr = evaluate(data_chunk, training_key_set, test_key_set, input_size,
group_size, within_decay_rate, group_decay_rate,
num_nearest_neighbors, alpha, topk)
print('recall: ', str(recall))
print('NDCG: ', str(ndcg))
# -
# __________________
# +
import sys
import math
import csv
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
#from __future__ import unicode_literals, print_function, division
activate_codes_num = -1
next_k_step = 1
training_chunk = 0
test_chunk = 1
num_nearest_neighbors = 300#k (num_nearest_neighbors) chosen [100, 300, 500, 700, 900, 1100,1300]
within_decay_rate = 0.9# chosen from [0.1, 0.2, 0.3,0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
group_decay_rate = 0.7# chosen from [0.1, 0.2, 0.3,0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
alpha = 0.7
group_size = 7
topk = 10
# +
files = ['TaFang_history_NB.csv', 'TaFang_future_NB.csv']
#attributes_list = ['DRG', 'PROVCAT ', 'RVNU_CD', 'DIAG', 'PROC']
attributes_list = ['MATERIAL_NUMBER']
path = ''
print('start dictionary generation...')
#dictionary_table, num_dim, counter_table = generate_dictionary_BA(files, attributes_list)############################
dictionary_table = {}
counter_table = {}
for attr in attributes_list:
dictionary = {}
dictionary_table[attr] = dictionary
counter_table[attr] = 0
#csv.field_size_limit(sys.maxsize)
for filename in files:
count = 0
with open(path + filename, 'r') as csvfile:
reader = csv.reader(csvfile, delimiter=',', quotechar='|')
for row in reader:
if count == 0:
count += 1
continue
key = attributes_list[0]
if row[2] not in dictionary_table[key]:
dictionary_table[key][row[2]] = counter_table[key]
counter_table[key] = counter_table[key] + 1
count += 1
print(counter_table)
total = 0
for key in counter_table.keys():
total = total + counter_table[key]
print('# dimensions of final vector: ' + str(total) + ' | '+str(count-1))
num_dim = total  # dictionary_table and counter_table are already in scope
print('finish dictionary generation*****')
usr_attr = 'CUSTOMER_ID'
ord_attr = 'ORDER_NUMBER'
freq_max = 200
## data_chunk is a three-level structure: first index is the customer, second is the time step (one basket per step), third is the feature vector
data_chunk = []
day_gap_counter = []
claims_counter = 0
num_claim = 0
code_freq_at_first_claim = np.zeros(num_dim+2)
# +
for file_id in range(len(files)):
count = 0
data_chunk.append({})
filename = files[file_id]
with open(path + filename, 'r') as csvfile:
#gap_within_one_year = np.zeros(365)
reader = csv.DictReader(csvfile)
last_pid_date = '*'
last_pid = '-1'
last_days = -1
# 2 more elements in the end for start and end states
feature_vector = []
for row in reader:
cur_pid_date = row[usr_attr] + '_' + row[ord_attr]
cur_pid = row[usr_attr]
#cur_days = int(row[ord_attr])
if cur_pid != last_pid:
# start state
tmp = [-1]
data_chunk[file_id][cur_pid] = []
data_chunk[file_id][cur_pid].append(tmp)
num_claim = 0
            if cur_pid_date != last_pid_date:
                # a new order starts: flush the previous basket
                if last_pid_date != '*' and last_pid != '-1':
                    sorted_feature_vector = np.sort(feature_vector)
                    data_chunk[file_id][last_pid].append(sorted_feature_vector)
                    if len(sorted_feature_vector) > 0:
                        count = count + 1
                    #data_chunk[file_id][last_pid].append(feature_vector)
                feature_vector = []
                claims_counter = 0
                if cur_pid != last_pid:
                    # end state for the previous customer
                    if last_pid != '-1':
                        tmp = [-1]
                        data_chunk[file_id][last_pid].append(tmp)
key = attributes_list[0]
within_idx = dictionary_table[key][row[key]]
previous_idx = 0
for j in range(attributes_list.index(key)):
previous_idx = previous_idx + counter_table[attributes_list[j]]
idx = within_idx + previous_idx
            # set the corresponding dimension to 1
if idx not in feature_vector:
feature_vector.append(idx)
last_pid_date = cur_pid_date
last_pid = cur_pid
#last_days = cur_days
if file_id == 1:
claims_counter = claims_counter + 1
    if last_pid_date != '*' and last_pid != '-1':
        data_chunk[file_id][last_pid].append(np.sort(feature_vector))
# -
data_chunk[training_chunk]['2']  # peek at one parsed customer's history
# +
input_size = num_dim + 2  # two extra dimensions for the start/end states
# arguments of partition_the_data_validate, inlined for step-by-step execution
key_set, next_k_step = list(data_chunk[test_chunk]), 1
filtered_key_set = []
past_chunk = 0
future_chunk = 1
for key in key_set:
if len(data_chunk[past_chunk][key]) <= 3:
continue
if len(data_chunk[future_chunk][key]) < 2 + next_k_step:
continue
filtered_key_set.append(key)
training_key_set = filtered_key_set[0:int(4 / 5 * len(filtered_key_set)*0.9)]
validation_key_set = filtered_key_set[int(4 / 5 * len(filtered_key_set)*0.9):int(4 / 5 * len(filtered_key_set))]
print('Number of training instances: ' + str(len(training_key_set)))
test_key_set = filtered_key_set[int(4 / 5 * len(filtered_key_set)):]
# -
key_set
for key in key_set:
if len(data_chunk[past_chunk][key]) <= 3:
print('<= 3',key)
if len(data_chunk[future_chunk][key]) < 2 + next_k_step:
print('< 2 + '+str(next_k_step),key)
filtered_key_set
len(data_chunk[future_chunk]['2'])
# +
files = ['TaFang_history_NB.csv', 'TaFang_future_NB.csv']
df=pd.read_csv('TaFang_history_NB.csv')
#list of features
attributes_list = ['MATERIAL_NUMBER']
dictionary_table={'MATERIAL_NUMBER':dict(zip(map(str,list(set(df['MATERIAL_NUMBER']))),df.index))}
counter_table={'MATERIAL_NUMBER':len(set(df['MATERIAL_NUMBER']))}
num_dim=len(set(df['MATERIAL_NUMBER']))
# -
# exploratory, line-by-line walk-through of temporal_decay_sum_history on the
# first 5 users, kept as a (non-executed) string: it references data_set and
# output_size, which are only defined further below
'''
sum_history = {}
for key in key_set[:5]:
vec_list = data_set[key]
num_vec = len(vec_list)-2
his_list = []
#removes the [-1] flags in data
for idx in range(1,num_vec+1):
#output_size=11999
his_vec = np.zeros(output_size)
# np.power(.9,15-iterater) #counts down
decayed_val = np.power(within_decay_rate,num_vec-idx)
#elements in data_set['2']
for ele in vec_list[idx]:
his_vec[ele] = decayed_val
his_list.append(his_vec)
grouped_vec_list = []
if len(his_list) < group_size:
#sum = np.zeros(len(his_list[0]))
for j in range(len(his_list)):
grouped_vec_list.append(his_list[j])
grouped_list,real_group_size = grouped_vec_list, len(his_list)
else:
est_num_vec_each_block = len(his_list)/group_size
base_num_vec_each_block = int(np.floor(len(his_list)/group_size))
residual = est_num_vec_each_block - base_num_vec_each_block
num_vec_has_extra_vec = int(np.round(residual * group_size))
if residual == 0:
for i in range(group_size):
if len(his_list)<1:
print('len(his_list)<1')
sum = np.zeros(len(his_list[0]))
for j in range(base_num_vec_each_block):
if i*base_num_vec_each_block+j >= len(his_list):
print('i*num_vec_each_block+j')
sum += his_list[i*base_num_vec_each_block+j]
grouped_vec_list.append(sum/base_num_vec_each_block)
else:
for i in range(group_size - num_vec_has_extra_vec):
sum = np.zeros(len(his_list[0]))
for j in range(base_num_vec_each_block):
if i*base_num_vec_each_block+j >= len(his_list):
print('i*base_num_vec_each_block+j')
sum += his_list[i*base_num_vec_each_block+j]
last_idx = i * base_num_vec_each_block + j
grouped_vec_list.append(sum/base_num_vec_each_block)
est_num = int(np.ceil(est_num_vec_each_block))
start_group_idx = group_size - num_vec_has_extra_vec
if len(his_list) - start_group_idx*base_num_vec_each_block >= est_num_vec_each_block:
for i in range(start_group_idx,group_size):
sum = np.zeros(len(his_list[0]))
for j in range(est_num):
# if residual+(i-1)*est_num_vec_each_block+j >= len(his_list):
# print('residual+(i-1)*num_vec_each_block+j')
# print('len(his_list)')
iidxx = last_idx + 1+(i-start_group_idx)*est_num+j
if iidxx >= len(his_list) or iidxx<0:
print('last_idx + 1+(i-start_group_idx)*est_num+j')
sum += his_list[iidxx]
grouped_vec_list.append(sum/est_num)
grouped_list,real_group_size = grouped_vec_list, group_size
his_vec = np.zeros(output_size)
for idx in range(real_group_size):
decayed_val = np.power(group_decay_rate, group_size - 1 - idx)
if idx>=len(grouped_list):
print( 'idx: '+ str(idx))
print('len(grouped_list): ' + str(len(grouped_list)))
his_vec += grouped_list[idx]*decayed_val
sum_history[key] = his_vec/real_group_size
# sum_history[key] = np.multiply(his_vec / real_group_size, IDF)'''
# +
import sys
import math
import csv
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
#from __future__ import unicode_literals, print_function, division
activate_codes_num = -1
next_k_step = 1
training_chunk = 0
test_chunk = 1
num_nearest_neighbors = 300#k (num_nearest_neighbors) chosen [100, 300, 500, 700, 900, 1100,1300]
within_decay_rate = 0.9# chosen from [0.1, 0.2, 0.3,0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
group_decay_rate = 0.7# chosen from [0.1, 0.2, 0.3,0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
alpha = 0.7
group_size = 7
topk = 10
files = ['TaFang_history_NB.csv', 'TaFang_future_NB.csv']
attributes_list = ['MATERIAL_NUMBER']
usr_attr = 'CUSTOMER_ID'
ord_attr = 'ORDER_NUMBER'
freq_max = 200
## data_chunk is a three-level structure: first index is the customer, second is the time step (one basket per step), third is the feature vector
data_chunk = []
day_gap_counter = []
claims_counter = 0
num_claim = 0
filtered_key_set = []
past_chunk = 0
future_chunk = 1
# -
df1=pd.read_csv(files[0])
tmp1=df1.sort_values(['CUSTOMER_ID','ORDER_NUMBER']).groupby(['CUSTOMER_ID','ORDER_NUMBER'])['MATERIAL_NUMBER'].apply(np.array).reset_index()
tmp1=tmp1.groupby(['CUSTOMER_ID'])['MATERIAL_NUMBER'].apply(list).reset_index()
tmp1=tmp1.assign(MATERIAL_NUMBER=tmp1.MATERIAL_NUMBER.apply(lambda x: [[-1]]+[np.array(sorted(i)) for i in x]+[[-1]]))
df2=pd.read_csv(files[1])
tmp2=df2.sort_values(['CUSTOMER_ID','ORDER_NUMBER']).groupby(['CUSTOMER_ID','ORDER_NUMBER'])['MATERIAL_NUMBER'].apply(np.array).reset_index()
tmp2=tmp2.groupby(['CUSTOMER_ID'])['MATERIAL_NUMBER'].apply(list).reset_index()
tmp2=tmp2.assign(MATERIAL_NUMBER=tmp2.MATERIAL_NUMBER.apply(lambda x: [[-1]]+[np.array(sorted(i)) for i in x]+[[-1]]))
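# A tiny check of the pandas pipeline above: rows are first grouped into
# per-order item arrays, then into a per-customer list of baskets (the
# DataFrame here is made up, mirroring the TaFang column names):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'CUSTOMER_ID': [1, 1, 1],
                   'ORDER_NUMBER': [1, 1, 2],
                   'MATERIAL_NUMBER': [5, 3, 7]})
g = (df.sort_values(['CUSTOMER_ID', 'ORDER_NUMBER'])
       .groupby(['CUSTOMER_ID', 'ORDER_NUMBER'])['MATERIAL_NUMBER']
       .apply(np.array).reset_index())
g = g.groupby('CUSTOMER_ID')['MATERIAL_NUMBER'].apply(list).reset_index()
print(g.loc[0, 'MATERIAL_NUMBER'])  # two baskets: [5, 3] and [7]
```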
# +
history_key_set=tmp1.assign(MATERIAL_NUMBER=np.where(tmp1.MATERIAL_NUMBER.apply(len)<= 3,None,tmp1.MATERIAL_NUMBER)).assign(CUSTOMER_ID=tmp1.CUSTOMER_ID.apply(str))
history_key_set=list(history_key_set[~history_key_set.MATERIAL_NUMBER.isnull()]['CUSTOMER_ID'])
future_key_set=tmp2.assign(MATERIAL_NUMBER=np.where(tmp2.MATERIAL_NUMBER.apply(len)< 2 + next_k_step,None,tmp2.MATERIAL_NUMBER)).assign(CUSTOMER_ID=tmp2.CUSTOMER_ID.apply(str))
future_key_set=list(future_key_set[~future_key_set.MATERIAL_NUMBER.isnull()]['CUSTOMER_ID'])
filtered_key_set=[x for x in history_key_set if x in future_key_set]
# first 4/5 * 0.9 (72%) of filtered_key_set is the training set
training_key_set = filtered_key_set[0:int(4 / 5 * len(filtered_key_set)*0.9)]
print('Number of training instances: ' + str(len(training_key_set)))
# next 8% is the validation set
validation_key_set = filtered_key_set[int(4 / 5 * len(filtered_key_set)*0.9):int(4 / 5 * len(filtered_key_set))]
# last 1/5 is the test set
test_key_set = filtered_key_set[int(4 / 5 * len(filtered_key_set)):]
# -
tmp2
# +
num_dim=len(set(df1['MATERIAL_NUMBER']))
code_freq_at_first_claim = np.zeros(num_dim+2)
data_chunk=[dict(zip(tmp1.CUSTOMER_ID.apply(str),tmp1.MATERIAL_NUMBER)),dict(zip(tmp2.CUSTOMER_ID.apply(str),tmp2.MATERIAL_NUMBER))]
# -
#data_chunk=data_chunk
input_size=num_dim + 2
#code_freq_at_first_claim=code_freq_at_first_claim
#data_chunk=data_chunk
key_set = list(data_chunk[test_chunk])
next_k_step=1
data_set=data_chunk[training_chunk]
key_set=training_key_set
output_size=input_size
#within_decay_rate=within_decay_rate
#group_decay_rate=group_decay_rate
print('Num. of top: ', topk)
def decay_history(num_vec,within_decay_rate):
    # NOTE: reads the notebook globals vec_list and output_size, and appends
    # the decayed multi-hot basket vectors to the global his_list, which the
    # caller resets before each call; the [-1] start/end flags are skipped
for idx in range(1,num_vec+1):
his_vec = np.zeros(output_size)#output_size=11999
# np.power(.9,15-iterater) #counts down
decayed_val = np.power(within_decay_rate,num_vec-idx)
#elements in data_set['2']
for ele in vec_list[idx]:
his_vec[ele] = decayed_val
his_list.append(his_vec)
sum_history = {}
for key in key_set:
vec_list = data_set[key]
num_vec = len(vec_list)-2
his_list = []
decay_history(num_vec,within_decay_rate)
grouped_vec_list = []
if len(his_list) < group_size:
#sum = np.zeros(len(his_list[0]))
for j in range(len(his_list)):
grouped_vec_list.append(his_list[j])
grouped_list,real_group_size = grouped_vec_list, len(his_list)
else:
est_num_vec_each_block = len(his_list)/group_size
base_num_vec_each_block = int(np.floor(len(his_list)/group_size))
residual = est_num_vec_each_block - base_num_vec_each_block
num_vec_has_extra_vec = int(np.round(residual * group_size))
if residual == 0:
for i in range(group_size):
if len(his_list)<1:
print('len(his_list)<1')
sum = np.zeros(len(his_list[0]))
for j in range(base_num_vec_each_block):
if i*base_num_vec_each_block+j >= len(his_list):
print('i*num_vec_each_block+j')
sum += his_list[i*base_num_vec_each_block+j]
grouped_vec_list.append(sum/base_num_vec_each_block)
else:
for i in range(group_size - num_vec_has_extra_vec):
sum = np.zeros(len(his_list[0]))
for j in range(base_num_vec_each_block):
if i*base_num_vec_each_block+j >= len(his_list):
print('i*base_num_vec_each_block+j')
sum += his_list[i*base_num_vec_each_block+j]
last_idx = i * base_num_vec_each_block + j
grouped_vec_list.append(sum/base_num_vec_each_block)
est_num = int(np.ceil(est_num_vec_each_block))
start_group_idx = group_size - num_vec_has_extra_vec
if len(his_list) - start_group_idx*base_num_vec_each_block >= est_num_vec_each_block:
for i in range(start_group_idx,group_size):
sum = np.zeros(len(his_list[0]))
for j in range(est_num):
# if residual+(i-1)*est_num_vec_each_block+j >= len(his_list):
# print('residual+(i-1)*num_vec_each_block+j')
# print('len(his_list)')
iidxx = last_idx + 1+(i-start_group_idx)*est_num+j
if iidxx >= len(his_list) or iidxx<0:
print('last_idx + 1+(i-start_group_idx)*est_num+j')
sum += his_list[iidxx]
grouped_vec_list.append(sum/est_num)
grouped_list,real_group_size = grouped_vec_list, group_size
his_vec = np.zeros(output_size)
for idx in range(real_group_size):
decayed_val = np.power(group_decay_rate, group_size - 1 - idx)
if idx>=len(grouped_list):
print( 'idx: '+ str(idx))
print('len(grouped_list): ' + str(len(grouped_list)))
his_vec += grouped_list[idx]*decayed_val
sum_history[key] = his_vec/real_group_size
# sum_history[key] = np.multiply(his_vec / real_group_size, IDF)
temporal_decay_sum_history_training = sum_history
data_set=data_chunk[training_chunk]
key_set=test_key_set
output_size=input_size
#group_size=group_size
#within_decay_rate=within_decay_rate
#group_decay_rate=group_decay_rate
#data_set, key_set, output_size,group_size,within_decay_rate,group_decay_rate=data_chunk[training_chunk],test_key_set, input_size,group_size, within_decay_rate,group_decay_rate
sum_history = {}
for key in key_set:
vec_list = data_set[key]
num_vec = len(vec_list) - 2
his_list = []
decay_history(num_vec,within_decay_rate)
grouped_vec_list = []
if len(his_list) < group_size:
#sum = np.zeros(len(his_list[0]))
for j in range(len(his_list)):
grouped_vec_list.append(his_list[j])
grouped_list,real_group_size = grouped_vec_list, len(his_list)
else:
est_num_vec_each_block = len(his_list)/group_size
base_num_vec_each_block = int(np.floor(len(his_list)/group_size))
residual = est_num_vec_each_block - base_num_vec_each_block
num_vec_has_extra_vec = int(np.round(residual * group_size))
if residual == 0:
for i in range(group_size):
if len(his_list)<1:
print('len(his_list)<1')
sum = np.zeros(len(his_list[0]))
for j in range(base_num_vec_each_block):
if i*base_num_vec_each_block+j >= len(his_list):
print('i*num_vec_each_block+j')
sum += his_list[i*base_num_vec_each_block+j]
grouped_vec_list.append(sum/base_num_vec_each_block)
else:
for i in range(group_size - num_vec_has_extra_vec):
sum = np.zeros(len(his_list[0]))
for j in range(base_num_vec_each_block):
if i*base_num_vec_each_block+j >= len(his_list):
print('i*base_num_vec_each_block+j')
sum += his_list[i*base_num_vec_each_block+j]
last_idx = i * base_num_vec_each_block + j
grouped_vec_list.append(sum/base_num_vec_each_block)
est_num = int(np.ceil(est_num_vec_each_block))
start_group_idx = group_size - num_vec_has_extra_vec
if len(his_list) - start_group_idx*base_num_vec_each_block >= est_num_vec_each_block:
for i in range(start_group_idx,group_size):
sum = np.zeros(len(his_list[0]))
for j in range(est_num):
# if residual+(i-1)*est_num_vec_each_block+j >= len(his_list):
# print('residual+(i-1)*num_vec_each_block+j')
# print('len(his_list)')
iidxx = last_idx + 1+(i-start_group_idx)*est_num+j
if iidxx >= len(his_list) or iidxx<0:
print('last_idx + 1+(i-start_group_idx)*est_num+j')
sum += his_list[iidxx]
grouped_vec_list.append(sum/est_num)
grouped_list,real_group_size = grouped_vec_list, group_size
his_vec = np.zeros(output_size)
for idx in range(real_group_size):
decayed_val = np.power(group_decay_rate, group_size - 1 - idx)
if idx>=len(grouped_list):
print( 'idx: '+ str(idx))
print('len(grouped_list): ' + str(len(grouped_list)))
his_vec += grouped_list[idx]*decayed_val
sum_history[key] = his_vec/real_group_size
# sum_history[key] = np.multiply(his_vec / real_group_size, IDF)
temporal_decay_sum_history_test = sum_history
query_set=temporal_decay_sum_history_test
target_set=temporal_decay_sum_history_training
k=num_nearest_neighbors
# +
#def KNN(query_set, target_set, k):
history_mat = []
for key in target_set.keys():
history_mat.append(target_set[key])
test_mat = []
for key in query_set.keys():
test_mat.append(query_set[key])
print('Finding k nearest neighbors...')
nbrs = NearestNeighbors(n_neighbors=k, algorithm='brute').fit(history_mat)
distances, indices = nbrs.kneighbors(test_mat)
print('Finish KNN search.' )
index, distance = indices,distances
sum_history_test,test_key_set,training_sum_history_test,training_key_set,index,alpha = temporal_decay_sum_history_test, test_key_set, temporal_decay_sum_history_training,training_key_set, index, alpha
merged_history = {}
for test_key_id in range(len(test_key_set)):
test_key = test_key_set[test_key_id]
test_history = sum_history_test[test_key]
sum_training_history = np.zeros(len(test_history))
for indecis in index[test_key_id]:
training_key = training_key_set[indecis]
sum_training_history += training_sum_history_test[training_key]
sum_training_history = sum_training_history/len(index[test_key_id])
merge = test_history*alpha + sum_training_history*(1-alpha)
merged_history[test_key] = merge
sum_history = merged_history
if activate_codes_num < 0:
# for i in range(1, 6):
prec = []
rec = []
F = []
prec1 = []
rec1 = []
F1 = []
prec2 = []
rec2 = []
F2 = []
prec3 = []
rec3 = []
F3 = []
NDCG = []
n_hit = 0
num_ele = topk
# print('k = ' + str(activate_codes_num))
# evaluate(data_chunk, input_size,test_KNN_history, test_key_set, next_k_step)
count = 0
for iter in range(len(test_key_set)):
# training_pair = training_pairs[iter - 1]
# input_variable = training_pair[0]
# target_variable = training_pair[1]
input_variable = data_chunk[training_chunk][test_key_set[iter]]
target_variable = data_chunk[test_chunk][test_key_set[iter]]
if len(target_variable) < 2 + next_k_step:
continue
count += 1
sum_history,key = sum_history, test_key_set[iter]
#def predict_with_elements_in_input(sum_history,key):
output_vectors = []
for idx in range(next_k_step):
vec = sum_history[key]
output_vectors.append(vec)
output_vectors = output_vectors
top = 400
hit = 0
for idx in range(len(output_vectors)):
# for idx in [2]:
output = np.zeros(input_size)
target_topi = output_vectors[idx].argsort()[::-1][:top]
c = 0
for i in range(top):
if c >= num_ele:
break
output[target_topi[i]] = 1
c += 1
vectorized_target = np.zeros(input_size)
for ii in target_variable[1 + idx]:
vectorized_target[ii] = 1
groundtruth,pred = vectorized_target, output
#def get_precision_recall_Fscore(groundtruth,pred):
a = groundtruth
b = pred
correct = 0
truth = 0
positive = 0
for idx in range(len(a)):
if a[idx] == 1:
truth += 1
if b[idx] == 1:
correct += 1
if b[idx] == 1:
positive += 1
flag = 0
if 0 == positive:
precision = 0
flag = 1
#print('postivie is 0')
else:
precision = correct/positive
if 0 == truth:
recall = 0
flag = 1
#print('recall is 0')
else:
recall = correct/truth
if flag == 0 and precision + recall > 0:
G = 2*precision*recall/(precision+recall)
else:
G = 0
precision, recall, Fscore, correct = precision, recall, G, correct
prec.append(precision)
rec.append(recall)
F.append(Fscore)
if idx == 0:
prec1.append(precision)
rec1.append(recall)
F1.append(Fscore)
elif idx == 1:
prec2.append(precision)
rec2.append(recall)
F2.append(Fscore)
elif idx == 2:
prec3.append(precision)
rec3.append(recall)
F3.append(Fscore)
groundtruth, pred_rank_list,k=vectorized_target, target_topi, num_ele
#def get_HT(groundtruth, pred_rank_list,k):
count = 0
for pred in pred_rank_list:
if count >= k:
break
if groundtruth[pred] == 1:
hit += 1
count += 1
hit += 0
#input_size = 100
vectorized_target, target_topi, num_ele=groundtruth, pred_rank_list,k
#def get_NDCG1(groundtruth, pred_rank_list,k):
count = 0
dcg = 0
for pred in pred_rank_list:
if count >= k:
break
if groundtruth[pred] == 1:
dcg += (1)/math.log2(count+1+1)
count += 1
idcg = 0
num_real_item = np.sum(groundtruth)
num_item = int(num_real_item)
for i in range(num_item):
idcg += (1) / math.log2(i + 1 + 1)
ndcg = dcg / idcg
NDCG.append(ndcg)
if hit == next_k_step:
n_hit += 1
print('average precision: ' + str(np.mean(prec)) + ' with std: ' + str(np.std(prec)))
recall = np.mean(rec)
ndcg = np.mean(NDCG)
hr = n_hit / len(test_key_set)
recall, ndcg, hr = recall, ndcg, hr
print('recall: ', str(recall))
print('NDCG: ', str(ndcg))
# -
test_key_set
# +
import sys
import math
import csv
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
num_nearest_neighbors = 10  # k, chosen from [100, 300, 500, 700, 900, 1100, 1300]
within_decay_rate = 0.9     # chosen from [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
group_decay_rate = 0.7      # chosen from [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
alpha = 0.7
group_size = 3
topk = 10
#files = ['history.csv', 'future.csv']
files = ['TaFang_history_NB.csv', 'TaFang_future_NB.csv']
attributes_list = 'MATERIAL_NUMBER'
usr_attr = 'CUSTOMER_ID'
ord_attr = 'ORDER_NUMBER'
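Before the data loading below, it may help to see the within-basket decay these hyperparameters drive: each of a user's n baskets becomes a vocabulary-sized vector whose active items are scaled by within_decay_rate**(n - idx). A minimal sketch with a hypothetical 5-item vocabulary (toy baskets, not from the dataset):

```python
import numpy as np

within_decay_rate = 0.9          # same value as above
baskets = [[0, 2], [1], [2, 4]]  # hypothetical user history, 5-item vocabulary
output_size = 5

his_list = []
for idx, basket in enumerate(baskets, start=1):
    vec = np.zeros(output_size)
    # older baskets get smaller weights: rate**(num_baskets - idx)
    vec[basket] = np.power(within_decay_rate, len(baskets) - idx)
    his_list.append(vec)

print(his_list[0])   # oldest basket weighted 0.9**2 = 0.81
print(his_list[-1])  # most recent basket weighted 1.0
```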
# +
next_k_step = 1
freq_max = 200
## data_chunk is a three-level nested structure: first index is the user, second is the time step, third is the feature vector
data_chunk = []
day_gap_counter = []
claims_counter = 0
num_claim = 0
activate_codes_num = -1
training_chunk = 0
test_chunk = 1
df1=pd.read_csv(files[0])
tmp1=df1.sort_values([usr_attr,ord_attr]).groupby([usr_attr,ord_attr])[attributes_list].apply(np.array).reset_index()
tmp1=tmp1.groupby([usr_attr])[attributes_list].apply(list).reset_index()
tmp1[attributes_list]=tmp1[attributes_list].apply(lambda x: [[-1]]+[np.array(sorted(i)) for i in x]+[[-1]])
df2=pd.read_csv(files[1])
tmp2=df2.sort_values([usr_attr,ord_attr]).groupby([usr_attr,ord_attr])[attributes_list].apply(np.array).reset_index()
tmp2=tmp2.groupby([usr_attr])[attributes_list].apply(list).reset_index()
tmp2[attributes_list]=tmp2[attributes_list].apply(lambda x: [[-1]]+[np.array(sorted(i)) for i in x]+[[-1]])
past_chunk = 0
future_chunk = 1
history_key_set,future_key_set=tmp1,tmp2
history_key_set[attributes_list]=np.where(tmp1[attributes_list].apply(len)<= 3,None,tmp1[attributes_list])
history_key_set[usr_attr]=history_key_set[usr_attr].apply(str)
history_key_set=list(history_key_set[~history_key_set[attributes_list].isnull()][usr_attr])
future_key_set[attributes_list]=np.where(tmp2[attributes_list].apply(len)< 2 + next_k_step,None,tmp2[attributes_list])
future_key_set[usr_attr]=future_key_set[usr_attr].apply(str)
future_key_set=list(future_key_set[~future_key_set[attributes_list].isnull()][usr_attr])
filtered_key_set=list(pd.Series(history_key_set)[pd.Series(history_key_set).isin(future_key_set)])
#filtered_key_set=[x for x in history_key_set if x in future_key_set]
#slice filtered_key_set range(0:10001)
training_key_set = filtered_key_set[0:int(4 / 5 * len(filtered_key_set)*0.9)]
print('Number of training instances: ' + str(len(training_key_set)))
#slice filtered_key_set range(10001:11112)
validation_key_set = filtered_key_set[int(4 / 5 * len(filtered_key_set)*0.9):int(4 / 5 * len(filtered_key_set))]
#slice filtered_key_set range(:11112)
test_key_set = filtered_key_set[int(4 / 5 * len(filtered_key_set)):]
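The three slices above implement a 72/8/20 split: 4/5 x 0.9 of the keys for training, the remaining 4/5 x 0.1 for validation, and the final 1/5 for test. A quick check of the same arithmetic on a toy key list:

```python
filtered = [str(i) for i in range(100)]  # hypothetical key list of 100 users
train_end = int(4 / 5 * len(filtered) * 0.9)
val_end = int(4 / 5 * len(filtered))
train, val, test = filtered[:train_end], filtered[train_end:val_end], filtered[val_end:]
print(len(train), len(val), len(test))  # 72 8 20
```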
num_dim=len(set(df1[attributes_list]))
code_freq_at_first_claim = np.zeros(num_dim+2)
data_chunk=[dict(zip(tmp1[usr_attr].apply(str),tmp1[attributes_list])),dict(zip(tmp2[usr_attr].apply(str),tmp2[attributes_list]))]
#data_chunk=data_chunk
input_size=num_dim + 2
#code_freq_at_first_claim=code_freq_at_first_claim
#data_chunk=data_chunk
key_set=list(data_chunk[test_chunk])
data_set=data_chunk[training_chunk]
key_set=training_key_set
output_size=input_size
#within_decay_rate=within_decay_rate
#group_decay_rate=group_decay_rate
print('Num. of top: ', topk)
sum_history = {}
for key in key_set:
vec_list = data_set[key]
num_vec = len(vec_list)-2
his_list = []
for idx in range(1,num_vec+1):
his_vec = np.zeros(output_size)#output_size=11999
# np.power(.9,15-iterater) #counts down
decayed_val = np.power(within_decay_rate,num_vec-idx)
#elements in data_set['2']
for ele in vec_list[idx]:
his_vec[ele] = decayed_val
his_list.append(his_vec)
grouped_vec_list = []
if len(his_list) < group_size:
#sum = np.zeros(len(his_list[0]))
for j in range(len(his_list)):
grouped_vec_list.append(his_list[j])
grouped_list,real_group_size = grouped_vec_list, len(his_list)
else:
est_num_vec_each_block = len(his_list)/group_size
base_num_vec_each_block = int(np.floor(len(his_list)/group_size))
residual = est_num_vec_each_block - base_num_vec_each_block
num_vec_has_extra_vec = int(np.round(residual * group_size))
if residual == 0:
for i in range(group_size):
if len(his_list)<1:
print('len(his_list)<1')
sum = np.zeros(len(his_list[0]))
for j in range(base_num_vec_each_block):
if i*base_num_vec_each_block+j >= len(his_list):
print('i*num_vec_each_block+j')
sum += his_list[i*base_num_vec_each_block+j]
grouped_vec_list.append(sum/base_num_vec_each_block)
else:
for i in range(group_size - num_vec_has_extra_vec):
sum = np.zeros(len(his_list[0]))
for j in range(base_num_vec_each_block):
if i*base_num_vec_each_block+j >= len(his_list):
print('i*base_num_vec_each_block+j')
sum += his_list[i*base_num_vec_each_block+j]
last_idx = i * base_num_vec_each_block + j
grouped_vec_list.append(sum/base_num_vec_each_block)
est_num = int(np.ceil(est_num_vec_each_block))
start_group_idx = group_size - num_vec_has_extra_vec
if len(his_list) - start_group_idx*base_num_vec_each_block >= est_num_vec_each_block:
for i in range(start_group_idx,group_size):
sum = np.zeros(len(his_list[0]))
for j in range(est_num):
# if residual+(i-1)*est_num_vec_each_block+j >= len(his_list):
# print('residual+(i-1)*num_vec_each_block+j')
# print('len(his_list)')
iidxx = last_idx + 1+(i-start_group_idx)*est_num+j
if iidxx >= len(his_list) or iidxx<0:
print('last_idx + 1+(i-start_group_idx)*est_num+j')
sum += his_list[iidxx]
grouped_vec_list.append(sum/est_num)
grouped_list,real_group_size = grouped_vec_list, group_size
his_vec = np.zeros(output_size)
for idx in range(real_group_size):
decayed_val = np.power(group_decay_rate, group_size - 1 - idx)
if idx>=len(grouped_list):
print( 'idx: '+ str(idx))
print('len(grouped_list): ' + str(len(grouped_list)))
his_vec += grouped_list[idx]*decayed_val
sum_history[key] = his_vec/real_group_size
# sum_history[key] = np.multiply(his_vec / real_group_size, IDF)
temporal_decay_sum_history_training = sum_history
data_set=data_chunk[training_chunk]
key_set=test_key_set
output_size=input_size
#group_size=group_size
#within_decay_rate=within_decay_rate
#group_decay_rate=group_decay_rate
#data_set, key_set, output_size,group_size,within_decay_rate,group_decay_rate=data_chunk[training_chunk],test_key_set, input_size,group_size, within_decay_rate,group_decay_rate
sum_history = {}
for key in key_set:
vec_list = data_set[key]
num_vec = len(vec_list) - 2
his_list = []
    for idx in range(1, num_vec + 1):
        his_vec = np.zeros(output_size)
        # older baskets are down-weighted: within_decay_rate**(num_vec - idx)
        decayed_val = np.power(within_decay_rate, num_vec - idx)
        for ele in vec_list[idx]:
            his_vec[ele] = decayed_val
        his_list.append(his_vec)
grouped_vec_list = []
if len(his_list) < group_size:
#sum = np.zeros(len(his_list[0]))
for j in range(len(his_list)):
grouped_vec_list.append(his_list[j])
grouped_list,real_group_size = grouped_vec_list, len(his_list)
else:
est_num_vec_each_block = len(his_list)/group_size
base_num_vec_each_block = int(np.floor(len(his_list)/group_size))
residual = est_num_vec_each_block - base_num_vec_each_block
num_vec_has_extra_vec = int(np.round(residual * group_size))
if residual == 0:
for i in range(group_size):
if len(his_list)<1:
print('len(his_list)<1')
sum = np.zeros(len(his_list[0]))
for j in range(base_num_vec_each_block):
if i*base_num_vec_each_block+j >= len(his_list):
print('i*num_vec_each_block+j')
sum += his_list[i*base_num_vec_each_block+j]
grouped_vec_list.append(sum/base_num_vec_each_block)
else:
for i in range(group_size - num_vec_has_extra_vec):
sum = np.zeros(len(his_list[0]))
for j in range(base_num_vec_each_block):
if i*base_num_vec_each_block+j >= len(his_list):
print('i*base_num_vec_each_block+j')
sum += his_list[i*base_num_vec_each_block+j]
last_idx = i * base_num_vec_each_block + j
grouped_vec_list.append(sum/base_num_vec_each_block)
est_num = int(np.ceil(est_num_vec_each_block))
start_group_idx = group_size - num_vec_has_extra_vec
if len(his_list) - start_group_idx*base_num_vec_each_block >= est_num_vec_each_block:
for i in range(start_group_idx,group_size):
sum = np.zeros(len(his_list[0]))
for j in range(est_num):
# if residual+(i-1)*est_num_vec_each_block+j >= len(his_list):
# print('residual+(i-1)*num_vec_each_block+j')
# print('len(his_list)')
iidxx = last_idx + 1+(i-start_group_idx)*est_num+j
if iidxx >= len(his_list) or iidxx<0:
print('last_idx + 1+(i-start_group_idx)*est_num+j')
sum += his_list[iidxx]
grouped_vec_list.append(sum/est_num)
grouped_list,real_group_size = grouped_vec_list, group_size
his_vec = np.zeros(output_size)
for idx in range(real_group_size):
decayed_val = np.power(group_decay_rate, group_size - 1 - idx)
if idx>=len(grouped_list):
print( 'idx: '+ str(idx))
print('len(grouped_list): ' + str(len(grouped_list)))
his_vec += grouped_list[idx]*decayed_val
sum_history[key] = his_vec/real_group_size
# sum_history[key] = np.multiply(his_vec / real_group_size, IDF)
temporal_decay_sum_history_test = sum_history
query_set=temporal_decay_sum_history_test
target_set=temporal_decay_sum_history_training
k=num_nearest_neighbors
#def KNN(query_set, target_set, k):
history_mat = []
for key in target_set.keys():
history_mat.append(target_set[key])
test_mat = []
for key in query_set.keys():
test_mat.append(query_set[key])
print('Finding k nearest neighbors...')
nbrs = NearestNeighbors(n_neighbors=k, algorithm='brute').fit(history_mat)
distances, indices = nbrs.kneighbors(test_mat)
print('Finish KNN search.' )
index, distance = indices,distances
sum_history_test,test_key_set,training_sum_history_test,training_key_set,index,alpha = temporal_decay_sum_history_test, test_key_set, temporal_decay_sum_history_training,training_key_set, index, alpha
merged_history = {}
for test_key_id in range(len(test_key_set)):
test_key = test_key_set[test_key_id]
test_history = sum_history_test[test_key]
sum_training_history = np.zeros(len(test_history))
for indecis in index[test_key_id]:
training_key = training_key_set[indecis]
sum_training_history += training_sum_history_test[training_key]
sum_training_history = sum_training_history/len(index[test_key_id])
merge = test_history*alpha + sum_training_history*(1-alpha)
merged_history[test_key] = merge
sum_history = merged_history
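The merge step above is a convex combination: each test user's decayed vector is blended with the mean of its k nearest training vectors, weighted by alpha. A minimal sketch with toy vectors:

```python
import numpy as np

alpha = 0.7  # same blending weight as above
test_vec = np.array([1.0, 0.0])
neighbor_vecs = [np.array([0.0, 1.0]), np.array([0.0, 0.5])]
neighbor_mean = np.mean(neighbor_vecs, axis=0)           # [0.0, 0.75]
merged = test_vec * alpha + neighbor_mean * (1 - alpha)  # [0.7, 0.225]
print(merged)
```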
if activate_codes_num < 0:
# for i in range(1, 6):
prec = []
rec = []
F = []
prec1 = []
rec1 = []
F1 = []
prec2 = []
rec2 = []
F2 = []
prec3 = []
rec3 = []
F3 = []
NDCG = []
n_hit = 0
num_ele = topk
# print('k = ' + str(activate_codes_num))
# evaluate(data_chunk, input_size,test_KNN_history, test_key_set, next_k_step)
count = 0
for iter in range(len(test_key_set)):
# training_pair = training_pairs[iter - 1]
# input_variable = training_pair[0]
# target_variable = training_pair[1]
input_variable = data_chunk[training_chunk][test_key_set[iter]]
target_variable = data_chunk[test_chunk][test_key_set[iter]]
if len(target_variable) < 2 + next_k_step:
continue
count += 1
sum_history,key = sum_history, test_key_set[iter]
#def predict_with_elements_in_input(sum_history,key):
output_vectors = []
for idx in range(next_k_step):
vec = sum_history[key]
output_vectors.append(vec)
output_vectors = output_vectors
top = 400
hit = 0
for idx in range(len(output_vectors)):
# for idx in [2]:
output = np.zeros(input_size)
target_topi = output_vectors[idx].argsort()[::-1][:top]
c = 0
for i in range(top):
if c >= num_ele:
break
output[target_topi[i]] = 1
c += 1
vectorized_target = np.zeros(input_size)
for ii in target_variable[1 + idx]:
vectorized_target[ii] = 1
groundtruth,pred = vectorized_target, output
#def get_precision_recall_Fscore(groundtruth,pred):
a = groundtruth
b = pred
correct = 0
truth = 0
positive = 0
for idx in range(len(a)):
if a[idx] == 1:
truth += 1
if b[idx] == 1:
correct += 1
if b[idx] == 1:
positive += 1
flag = 0
if 0 == positive:
precision = 0
flag = 1
#print('postivie is 0')
else:
precision = correct/positive
if 0 == truth:
recall = 0
flag = 1
#print('recall is 0')
else:
recall = correct/truth
if flag == 0 and precision + recall > 0:
G = 2*precision*recall/(precision+recall)
else:
G = 0
precision, recall, Fscore, correct = precision, recall, G, correct
prec.append(precision)
rec.append(recall)
F.append(Fscore)
if idx == 0:
prec1.append(precision)
rec1.append(recall)
F1.append(Fscore)
elif idx == 1:
prec2.append(precision)
rec2.append(recall)
F2.append(Fscore)
elif idx == 2:
prec3.append(precision)
rec3.append(recall)
F3.append(Fscore)
groundtruth, pred_rank_list,k=vectorized_target, target_topi, num_ele
#def get_HT(groundtruth, pred_rank_list,k):
count = 0
for pred in pred_rank_list:
if count >= k:
break
if groundtruth[pred] == 1:
hit += 1
count += 1
hit += 0
#input_size = 100
vectorized_target, target_topi, num_ele=groundtruth, pred_rank_list,k
#def get_NDCG1(groundtruth, pred_rank_list,k):
count = 0
dcg = 0
for pred in pred_rank_list:
if count >= k:
break
if groundtruth[pred] == 1:
dcg += (1)/math.log2(count+1+1)
count += 1
idcg = 0
num_real_item = np.sum(groundtruth)
num_item = int(num_real_item)
for i in range(num_item):
idcg += (1) / math.log2(i + 1 + 1)
ndcg = dcg / idcg
NDCG.append(ndcg)
if hit == next_k_step:
n_hit += 1
print('average precision: ' + str(np.mean(prec)) + ' with std: ' + str(np.std(prec)))
recall = np.mean(rec)
ndcg = np.mean(NDCG)
hr = n_hit / len(test_key_set)
recall, ndcg, hr = recall, ndcg, hr
print('recall: ', str(recall))
print('NDCG: ', str(ndcg))
# -
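The inline metric code in the evaluation loop above can be summarized as two small helpers; a hedged sketch (function names are ours, not from the notebook):

```python
import math
import numpy as np

def precision_recall_f1(groundtruth, pred):
    """Binary indicator vectors in, (precision, recall, F1) out."""
    gt, pr = np.asarray(groundtruth), np.asarray(pred)
    positive = pr.sum()            # predicted items
    truth = gt.sum()               # ground-truth items
    correct = int(np.logical_and(gt == 1, pr == 1).sum())
    precision = correct / positive if positive else 0
    recall = correct / truth if truth else 0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0
    return precision, recall, f1

def ndcg_at_k(groundtruth, pred_rank_list, k):
    """NDCG over the top-k ranked predictions, ideal DCG from all true items."""
    gt = np.asarray(groundtruth)
    dcg = sum(1 / math.log2(pos + 2)
              for pos, item in enumerate(pred_rank_list[:k]) if gt[item] == 1)
    idcg = sum(1 / math.log2(i + 2) for i in range(int(gt.sum())))
    return dcg / idcg if idcg else 0

print(precision_recall_f1([1, 0, 1, 0], [1, 1, 0, 0]))  # (0.5, 0.5, 0.5)
```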
# +
print('finish dictionary generation*****')
usr_attr = 'CUSTOMER_ID'
ord_attr = 'ORDER_NUMBER'
freq_max = 200
## all the follow three ways array. First index is patient, second index is the time step, third is the feature vector
data_chunk = []
day_gap_counter = []
claims_counter = 0
num_claim = 0
code_freq_at_first_claim = np.zeros(num_dim+2)
# +
#with open('TaFang_history_NB.csv', 'r') as csvfile:
# #gap_within_one_year = np.zeros(365)
# reader = csv.DictReader(csvfile)
# for row in reader:
# print(row)
# -
'''for file_id in range(len(files)):
count = 0
data_chunk.append({})
filename = files[file_id]
with open(path + filename, 'r') as csvfile:
#gap_within_one_year = np.zeros(365)
reader = csv.DictReader(csvfile)
last_pid_date = '*'
last_pid = '-1'
last_days = -1
# 2 more elements in the end for start and end states
feature_vector = []
for row in reader:
cur_pid_date = row['CUSTOMER_ID'] + '_' + row['ORDER_NUMBER']
cur_pid = row['CUSTOMER_ID']
#cur_days = int(row[ord_attr])
if cur_pid != last_pid:
# start state
tmp = [-1]
data_chunk[file_id][cur_pid] = []
data_chunk[file_id][cur_pid].append(tmp)
num_claim = 0
if cur_pid_date not in last_pid_date:
if last_pid_date not in '*' and last_pid not in '-1':
sorted_feature_vector = np.sort(feature_vector)
data_chunk[file_id][last_pid].append(sorted_feature_vector)
if len(sorted_feature_vector) > 0:
count = count + 1
#data_chunk[file_id][last_pid].append(feature_vector)
feature_vector = []
claims_counter = 0
if cur_pid != last_pid:
# end state
if last_pid not in '-1':
tmp = [-1]
data_chunk[file_id][last_pid].append(tmp)
key = attributes_list[0]
within_idx = dictionary_table[key][row[key]]
previous_idx = 0
for j in range(attributes_list.index(key)):
previous_idx = previous_idx + counter_table[attributes_list[j]]
idx = within_idx + previous_idx
# set corresponding dimention to 1
if idx not in feature_vector:
feature_vector.append(idx)
last_pid_date = cur_pid_date
last_pid = cur_pid
#last_days = cur_days
if file_id == 1:
claims_counter = claims_counter + 1
if last_pid_date not in '*' and last_pid not in '-1':
data_chunk[file_id][last_pid].append(np.sort(feature_vector))'''
len(code_freq_at_first_claim)
import pandas as pd
df=pd.read_csv('TaFang_history_NB.csv',sep=',')
df[df.CUSTOMER_ID==2].head(20)
data_chunk[0]['2']
tmp=df.groupby(['CUSTOMER_ID','ORDER_NUMBER'])['MATERIAL_NUMBER'].apply(np.array)
tmp=tmp.reset_index()
tmp.MATERIAL_NUMBER.apply(lambda x: len(x)).mean()
tmp=df.groupby(['CUSTOMER_ID','ORDER_NUMBER'])['MATERIAL_NUMBER'].apply(np.array).reset_index()
tmp=tmp.groupby(['CUSTOMER_ID'])['MATERIAL_NUMBER'].apply(list).reset_index()
data_chunk_mimic=dict(zip(tmp.CUSTOMER_ID.apply(str),tmp.MATERIAL_NUMBER))
data_chunk_mimic['2']
# Looks like it's MATERIAL_NUMBER lists grouped by CUSTOMER_ID
# +
#tmp_chunk=list(pd.Series(data_chunk[0]['2'][1:-1]).explode())
#testing_chunk=df[(df.CUSTOMER_ID==2)]
#testing_chunk[~testing_chunk.MATERIAL_NUMBER.isin(tmp_chunk)]
# +
#list(pd.Series(data_chunk[0]['2'][1:-1]).explode())
# -
df_dict=df.groupby('ORDER_NUMBER')['MATERIAL_NUMBER'].apply(list).apply(sorted).reset_index()
df_dict=dict(zip(df_dict.ORDER_NUMBER,df_dict.MATERIAL_NUMBER))
tmp=df.groupby('CUSTOMER_ID')['ORDER_NUMBER'].apply(set).apply(list).reset_index().merge(df.groupby('CUSTOMER_ID')['MATERIAL_NUMBER'].apply(list).reset_index(),how='left',on='CUSTOMER_ID')
tmp
# +
#tmp['ORDERS']=tmp.ORDER_NUMBER.apply(lambda x: list(map(df_dict.get,x)))#.apply(set)
#tmp['ORDERS']=tmp['ORDERS'].apply(lambda x: [list(set(i)) for i in x])
# -
tmp
tmp.ORDER_NUMBER.apply(lambda x: len(x))
#tmp.ORDERS[0]  # requires the commented-out ORDERS construction above
| TIFUKNN-Work.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
from pandas.plotting import autocorrelation_plot
import matplotlib.pyplot as plt
######################################################################################################################
import sys
import collections
import itertools
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import mode
from scipy.spatial.distance import squareform  # needed by KnnDtw._dist_matrix
plt.style.use('bmh')
# %matplotlib inline
try:
from IPython.display import clear_output
have_ipython = True
except ImportError:
have_ipython = False
class KnnDtw(object):
"""K-nearest neighbor classifier using dynamic time warping
as the distance measure between pairs of time series arrays
Arguments
---------
n_neighbors : int, optional (default = 5)
Number of neighbors to use by default for KNN
max_warping_window : int, optional (default = infinity)
Maximum warping window allowed by the DTW dynamic
programming function
subsample_step : int, optional (default = 1)
Step size for the timeseries array. By setting subsample_step = 2,
the timeseries length will be reduced by 50% because every second
item is skipped. Implemented by x[:, ::subsample_step]
"""
def __init__(self, n_neighbors=5, max_warping_window=10000, subsample_step=1):
self.n_neighbors = n_neighbors
self.max_warping_window = max_warping_window
self.subsample_step = subsample_step
def fit(self, x, l):
"""Fit the model using x as training data and l as class labels
Arguments
---------
x : array of shape [n_samples, n_timepoints]
Training data set for input into KNN classifier
l : array of shape [n_samples]
Training labels for input into KNN classifier
"""
self.x = x
self.l = l
def _dtw_distance(self, ts_a, ts_b, d = lambda x,y: abs(x-y)):
"""Returns the DTW similarity distance between two 2-D
timeseries numpy arrays.
Arguments
---------
ts_a, ts_b : array of shape [n_samples, n_timepoints]
Two arrays containing n_samples of timeseries data
whose DTW distance between each sample of A and B
will be compared
d : DistanceMetric object (default = abs(x-y))
the distance measure used for A_i - B_j in the
DTW dynamic programming function
Returns
-------
DTW distance between A and B
"""
# Create cost matrix via broadcasting with large int
ts_a, ts_b = np.array(ts_a), np.array(ts_b)
M, N = len(ts_a), len(ts_b)
cost = sys.maxsize * np.ones((M, N))
# Initialize the first row and column
cost[0, 0] = d(ts_a[0], ts_b[0])
for i in range(1, M):
cost[i, 0] = cost[i-1, 0] + d(ts_a[i], ts_b[0])
for j in range(1, N):
cost[0, j] = cost[0, j-1] + d(ts_a[0], ts_b[j])
# Populate rest of cost matrix within window
for i in range(1, M):
for j in range(max(1, i - self.max_warping_window),
min(N, i + self.max_warping_window)):
choices = cost[i - 1, j - 1], cost[i, j-1], cost[i-1, j]
cost[i, j] = min(choices) + d(ts_a[i], ts_b[j])
# Return DTW distance given window
return cost[-1, -1]
def _dist_matrix(self, x, y):
"""Computes the M x N distance matrix between the training
dataset and testing dataset (y) using the DTW distance measure
Arguments
---------
x : array of shape [n_samples, n_timepoints]
y : array of shape [n_samples, n_timepoints]
Returns
-------
Distance matrix between each item of x and y with
shape [training_n_samples, testing_n_samples]
"""
# Compute the distance matrix
dm_count = 0
# Compute condensed distance matrix (upper triangle) of pairwise dtw distances
# when x and y are the same array
if(np.array_equal(x, y)):
x_s = np.shape(x)
dm = np.zeros((x_s[0] * (x_s[0] - 1)) // 2, dtype=np.double)
#p = ProgressBar(shape(dm)[0])
for i in range(0, x_s[0] - 1):
for j in range(i + 1, x_s[0]):
dm[dm_count] = self._dtw_distance(x[i, ::self.subsample_step],
y[j, ::self.subsample_step])
dm_count += 1
#p.animate(dm_count)
# Convert to squareform
dm = squareform(dm)
return dm
# Compute full distance matrix of dtw distances between x and y
else:
x_s = np.shape(x)
y_s = np.shape(y)
dm = np.zeros((x_s[0], y_s[0]))
dm_size = x_s[0]*y_s[0]
#p = ProgressBar(dm_size)
for i in range(0, x_s[0]):
for j in range(0, y_s[0]):
dm[i, j] = self._dtw_distance(x[i, ::self.subsample_step],
y[j, ::self.subsample_step])
# Update progress bar
dm_count += 1
#p.animate(dm_count)
return dm
def predict(self, x):
"""Predict the class labels or probability estimates for
the provided data
Arguments
---------
x : array of shape [n_samples, n_timepoints]
Array containing the testing data set to be classified
Returns
-------
2 arrays representing:
(1) the predicted class labels
(2) the knn label count probability
"""
dm = self._dist_matrix(x, self.x)
# Identify the k nearest neighbors
knn_idx = dm.argsort()[:, :self.n_neighbors]
# Identify k nearest labels
knn_labels = self.l[knn_idx]
# Model Label
mode_data = mode(knn_labels, axis=1)
mode_label = mode_data[0]
mode_proba = mode_data[1]/self.n_neighbors
return mode_label.ravel(), mode_proba.ravel()
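The `_dtw_distance` recurrence above can be sanity-checked against a standalone, unwindowed DTW (our own sketch, not the class method):

```python
import numpy as np

def dtw_distance(ts_a, ts_b):
    """Classic O(M*N) DTW with |x - y| local cost and no warping window."""
    M, N = len(ts_a), len(ts_b)
    cost = np.full((M, N), np.inf)
    cost[0, 0] = abs(ts_a[0] - ts_b[0])
    for i in range(1, M):
        cost[i, 0] = cost[i - 1, 0] + abs(ts_a[i] - ts_b[0])
    for j in range(1, N):
        cost[0, j] = cost[0, j - 1] + abs(ts_a[0] - ts_b[j])
    for i in range(1, M):
        for j in range(1, N):
            cost[i, j] = min(cost[i - 1, j - 1], cost[i, j - 1],
                             cost[i - 1, j]) + abs(ts_a[i] - ts_b[j])
    return cost[-1, -1]

print(dtw_distance([1, 2, 3], [1, 2, 3]))     # 0.0 — identical series
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0 — repeated point absorbed by warping
```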
class ProgressBar:
"""This progress bar was taken from PYMC
"""
def __init__(self, iterations):
self.iterations = iterations
self.prog_bar = '[]'
self.fill_char = '*'
self.width = 40
self.__update_amount(0)
if have_ipython:
self.animate = self.animate_ipython
else:
self.animate = self.animate_noipython
def animate_ipython(self, iter):
sys.stdout.write('\r%s'%self)
sys.stdout.flush()
self.update_iteration(iter + 1)
def update_iteration(self, elapsed_iter):
self.__update_amount((elapsed_iter / float(self.iterations)) * 100.0)
self.prog_bar += ' %d of %s complete' % (elapsed_iter, self.iterations)
def __update_amount(self, new_amount):
percent_done = int(round((new_amount / 100.0) * 100.0))
all_full = self.width - 2
num_hashes = int(round((percent_done / 100.0) * all_full))
self.prog_bar = '[' + self.fill_char * num_hashes + ' ' * (all_full - num_hashes) + ']'
pct_place = (len(self.prog_bar) // 2) - len(str(percent_done))
pct_string = '%d%%' % percent_done
self.prog_bar = self.prog_bar[0:pct_place] + \
(pct_string + self.prog_bar[pct_place + len(pct_string):])
def __str__(self):
return str(self.prog_bar)
######################################################################################################################
import tensorflow as tf
import keras
import keras.backend as K
from sklearn.utils import shuffle
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score, f1_score
from collections import Counter
from keras import regularizers
from keras.models import Sequential, Model, load_model, model_from_json
from keras.utils import to_categorical
from keras.layers import Input, Dense, Flatten, Reshape, Concatenate, Dropout
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D, Conv2DTranspose
from keras.layers.normalization import BatchNormalization
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
from keras.layers.advanced_activations import LeakyReLU
from scipy.signal import resample
def get_class_weights(y):
counter = Counter(y)
majority = max(counter.values())
return {cls: float(majority/count) for cls, count in counter.items()}
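`get_class_weights` divides the majority-class count by each class's count, so the majority class gets weight 1.0 and rarer classes get proportionally larger weights. A quick check (standalone copy for the sketch):

```python
from collections import Counter

def get_class_weights(y):
    counter = Counter(y)
    majority = max(counter.values())
    return {cls: float(majority / count) for cls, count in counter.items()}

print(get_class_weights([0, 0, 0, 1]))  # {0: 1.0, 1: 3.0}
```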
class Estimator:
l2p = 0.001
@staticmethod
def early_layers(inp, fm = (1,3), hid_act_func="relu"):
# Start
x = Conv2D(64, fm, padding="same", kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(inp)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(1, 2))(x)
x = Dropout(0.25)(x)
# 1
x = Conv2D(64, fm, padding="same", kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(1, 2))(x)
x = Dropout(0.25)(x)
return x
@staticmethod
def late_layers(inp, num_classes, fm = (1,3), act_func="softmax", hid_act_func="relu", b_name="Identifier"):
# 2
x = Conv2D(32, fm, padding="same", kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(inp)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(1, 2))(x)
x = Dropout(0.25)(x)
# 3
x = Conv2D(32, fm, padding="same", kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(1, 2))(x)
x = Dropout(0.25)(x)
# End
x = Flatten()(x)
x = Dense(256, kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(64, kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(num_classes, activation=act_func, name = b_name)(x)
return x
@staticmethod
def build(height, width, num_classes, name, fm = (1,3), act_func="softmax",hid_act_func="relu"):
inp = Input(shape=(height, width, 1))
early = Estimator.early_layers(inp, fm, hid_act_func=hid_act_func)
late = Estimator.late_layers(early, num_classes, fm, act_func=act_func, hid_act_func=hid_act_func)
model = Model(inputs=inp, outputs=late ,name=name)
return model
################################################################################
def get_ds_infos():
"""
Read the file includes data subject information.
Data Columns:
0: code [1-24]
1: weight [kg]
2: height [cm]
3: age [years]
4: gender [0:Female, 1:Male]
Returns:
A pandas DataFrame that contains information about data subjects' attributes
"""
dss = pd.read_csv("data_subjects_info.csv")
print("[INFO] -- Data subjects' information is imported.")
return dss
def set_data_types(data_types=["userAcceleration"]):
"""
Select the sensors and the mode to shape the final dataset.
Args:
data_types: A list of sensor data type from this list: [attitude, gravity, rotationRate, userAcceleration]
Returns:
It returns a list of columns to use for creating time-series from files.
"""
dt_list = []
for t in data_types:
if t != "attitude":
dt_list.append([t+".x",t+".y",t+".z"])
else:
dt_list.append([t+".roll", t+".pitch", t+".yaw"])
return dt_list
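To make the mapping concrete, here is a hedged standalone copy of the same logic (renamed `set_data_types_sketch` so it does not shadow the function above) with the column lists it should produce:

```python
def set_data_types_sketch(data_types=["userAcceleration"]):
    # attitude is recorded as roll/pitch/yaw; every other sensor as x/y/z
    dt_list = []
    for t in data_types:
        if t != "attitude":
            dt_list.append([t + ".x", t + ".y", t + ".z"])
        else:
            dt_list.append([t + ".roll", t + ".pitch", t + ".yaw"])
    return dt_list

cols = set_data_types_sketch(["attitude", "userAcceleration"])
# -> [['attitude.roll', 'attitude.pitch', 'attitude.yaw'],
#     ['userAcceleration.x', 'userAcceleration.y', 'userAcceleration.z']]
```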
def creat_time_series(dt_list, act_labels, trial_codes, mode="mag", labeled=True, combine_grav_acc=False):
"""
Args:
dt_list: A list of columns that shows the type of data we want.
act_labels: list of activities
trial_codes: list of trials
mode: It can be "raw", which means you want the raw data
for every dimension of each data type,
[attitude(roll, pitch, yaw); gravity(x, y, z); rotationRate(x, y, z); userAcceleration(x,y,z)].
Or it can be "mag", which means you only want the magnitude of each data type: (x^2+y^2+z^2)^(1/2)
labeled: True, if we want a labeled dataset. False, if we only want sensor values.
combine_grav_acc: True means adding each axis of gravity to the corresponding axis of userAcceleration.
Returns:
It returns a time-series of sensor data.
"""
num_data_cols = len(dt_list) if mode == "mag" else len(dt_list)*3
if labeled:
dataset = np.zeros((0,num_data_cols+7)) # "7" --> [act, code, weight, height, age, gender, trial]
else:
dataset = np.zeros((0,num_data_cols))
ds_list = get_ds_infos()
print("[INFO] -- Creating Time-Series")
for sub_id in ds_list["code"]:
for act_id, act in enumerate(act_labels):
for trial in trial_codes[act_id]:
fname = 'A_DeviceMotion_data/'+act+'_'+str(trial)+'/sub_'+str(int(sub_id))+'.csv'
raw_data = pd.read_csv(fname)
raw_data = raw_data.drop(['Unnamed: 0'], axis=1)
vals = np.zeros((len(raw_data), num_data_cols))
if combine_grav_acc:
raw_data["userAcceleration.x"] = raw_data["userAcceleration.x"].add(raw_data["gravity.x"])
raw_data["userAcceleration.y"] = raw_data["userAcceleration.y"].add(raw_data["gravity.y"])
raw_data["userAcceleration.z"] = raw_data["userAcceleration.z"].add(raw_data["gravity.z"])
for x_id, axes in enumerate(dt_list):
if mode == "mag":
vals[:,x_id] = (raw_data[axes]**2).sum(axis=1)**0.5
else:
vals[:,x_id*3:(x_id+1)*3] = raw_data[axes].values
vals = vals[:,:num_data_cols]
if labeled:
lbls = np.array([[act_id,
sub_id-1,
ds_list["weight"][sub_id-1],
ds_list["height"][sub_id-1],
ds_list["age"][sub_id-1],
ds_list["gender"][sub_id-1],
trial
]]*len(raw_data))
vals = np.concatenate((vals, lbls), axis=1)
dataset = np.append(dataset,vals, axis=0)
cols = []
for axes in dt_list:
if mode == "raw":
cols += axes
else:
cols += [str(axes[0][:-2])]
if labeled:
cols += ["act", "id", "weight", "height", "age", "gender", "trial"]
dataset = pd.DataFrame(data=dataset, columns=cols)
return dataset
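The `mode == "mag"` branch above collapses each sensor's three axes into a single Euclidean magnitude, (x^2+y^2+z^2)^(1/2). A tiny sketch of that per-row computation with made-up axis values:

```python
import numpy as np

# Two samples of one sensor's (x, y, z) axes
xyz = np.array([[3.0, 4.0, 0.0],
                [1.0, 2.0, 2.0]])
mag = (xyz ** 2).sum(axis=1) ** 0.5  # per-row Euclidean magnitude
# -> [5.0, 3.0]
```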
#________________________________
#________________________________
def ts_to_secs(dataset, w, s, standardize = False, **options):
data = dataset[dataset.columns[:-7]].values
act_labels = dataset["act"].values
id_labels = dataset["id"].values
trial_labels = dataset["trial"].values
mean = 0
std = 1
if standardize:
## Standardize each sensor's data to have zero mean and unit standard deviation.
## As usual, we normalize the test dataset with the training dataset's parameters
if options:
mean = options.get("mean")
std = options.get("std")
print("[INFO] -- Test Data has been standardized")
else:
mean = data.mean(axis=0)
std = data.std(axis=0)
print("[INFO] -- Training Data has been standardized: the mean is = "+str(mean)+" ; and the std is = "+str(std))
data -= mean
data /= std
else:
print("[INFO] -- Without Standardization.....")
## We want the Rows of matrices show each Feature and the Columns show time points.
data = data.T
m = data.shape[0] # Data Dimension
ttp = data.shape[1] # Total Time Points
number_of_secs = int(round(((ttp - w)/s)))
## Create a 3D matrix for Storing Sections
secs_data = np.zeros((number_of_secs , m , w ))
act_secs_labels = np.zeros(number_of_secs)
id_secs_labels = np.zeros(number_of_secs)
k=0
for i in range(0 , ttp-w, s):
j = i // s
if j >= number_of_secs:
break
if id_labels[i] != id_labels[i+w-1]:
continue
if act_labels[i] != act_labels[i+w-1]:
continue
if trial_labels[i] != trial_labels[i+w-1]:
continue
secs_data[k] = data[:, i:i+w]
act_secs_labels[k] = act_labels[i].astype(int)
id_secs_labels[k] = id_labels[i].astype(int)
k = k+1
secs_data = secs_data[0:k]
act_secs_labels = act_secs_labels[0:k]
id_secs_labels = id_secs_labels[0:k]
return secs_data, act_secs_labels, id_secs_labels, mean, std
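The standardization branch in `ts_to_secs` computes the mean/std on the training data only and then reuses those parameters for validation and test. A minimal sketch of that convention with toy arrays:

```python
import numpy as np

train = np.array([[1.0, 10.0], [3.0, 30.0]])
test = np.array([[2.0, 20.0]])

mean = train.mean(axis=0)      # parameters come from training data only
std = train.std(axis=0)
train_z = (train - mean) / std
test_z = (test - mean) / std   # the test set reuses the training parameters
# here test happens to sit exactly at the training mean -> [[0., 0.]]
```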
##________________________________________________________________
ACT_LABELS = ["dws","ups", "wlk", "jog", "std", "sit"]
TRIAL_CODES = {
ACT_LABELS[0]:[1,2,11],
ACT_LABELS[1]:[3,4,12],
ACT_LABELS[2]:[7,8,15],
ACT_LABELS[3]:[9,16],
ACT_LABELS[4]:[6,14],
ACT_LABELS[5]:[5,13],
}
# +
## Here we set parameters to build a labeled time-series from the dataset of "(A)DeviceMotion_data"
## attitude(roll, pitch, yaw); gravity(x, y, z); rotationRate(x, y, z); userAcceleration(x,y,z)
sdt = ["rotationRate","userAcceleration"]
mode = "mag"
cga = True # Add gravity to acceleration or not
print("[INFO] -- Selected sensor data types: "+str(sdt)+" -- Mode: "+str(mode)+" -- Grav+Acc: "+str(cga))
act_labels = ACT_LABELS [0:4]
print("[INFO] -- Selected activities: "+str(act_labels))
trial_codes = [TRIAL_CODES[act] for act in act_labels]
dt_list = set_data_types(sdt)
dataset = creat_time_series(dt_list, act_labels, trial_codes, mode=mode, labeled=True, combine_grav_acc = cga)
print("[INFO] -- Shape of time-series dataset: "+str(dataset.shape))
#*****************
TRAIN_TEST_TYPE = "subject" # "subject" or "trial"
#*****************
if TRAIN_TEST_TYPE == "subject":
test_ids = [4,9,11,21]
print("[INFO] -- Test IDs: "+str(test_ids))
test_ts = dataset.loc[(dataset['id'].isin(test_ids))]
train_ts = dataset.loc[~(dataset['id'].isin(test_ids))]
else:
test_trials = [11,12,13,14,15,16]
print("[INFO] -- Test Trials: "+str(test_trials))
test_ts = dataset.loc[(dataset['trial'].isin(test_trials))]
train_ts = dataset.loc[~(dataset['trial'].isin(test_trials))]
print("[INFO] -- Shape of Train Time-Series :"+str(train_ts.shape))
print("[INFO] -- Shape of Test Time-Series :"+str(test_ts.shape))
# -
val_trials = [11,12,13,14,15,16]
val_ts = train_ts.loc[(train_ts['trial'].isin(val_trials))]
train_ts = train_ts.loc[~(train_ts['trial'].isin(val_trials))]
print("[INFO] -- Shape of Train Time-Series :"+str(train_ts.shape))
print("[INFO] -- Shape of Validation Time-Series :"+str(val_ts.shape))
# +
#************
## HERE ##
## This Variable Defines the Size of the Sliding Window
## ( e.g. 100 means in each snapshot we just consider 100 consecutive observations of each sensor)
w = 128 # the MotionSense dataset is sampled at 50Hz, so 50 points equal 1 second and w=128 is about 2.56 seconds
## Here We Choose the Step Size for Building Different Snapshots from Time-Series Data
## ( a smaller step size will increase the number of instances, and a higher computational cost may be incurred )
s = 10
train_data, act_train, id_train, train_mean, train_std = ts_to_secs(train_ts.copy(),
w,
s,
standardize = True)
s = 10
val_data, act_val, id_val, val_mean, val_std = ts_to_secs(val_ts.copy(),
w,
s,
standardize = True,
mean = train_mean,
std = train_std)
s = 10
test_data, act_test, id_test, test_mean, test_std = ts_to_secs(test_ts.copy(),
w,
s,
standardize = True,
mean = train_mean,
std = train_std)
print("[INFO] -- Shape of Training Sections: "+str(train_data.shape))
print("[INFO] -- Shape of Validation Sections: "+str(val_data.shape))
print("[INFO] -- Shape of Test Sections: "+str(test_data.shape))
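The window/step comments above imply roughly `(total_time_points - w) / s` sections per contiguous series, which is how `ts_to_secs` sizes its output before filtering. A quick check of that arithmetic with an assumed series length:

```python
# Assumed: a contiguous series of 1000 time points, window 128, step 10
ttp = 1000
w_demo, s_demo = 128, 10   # demo names, so the notebook's w and s are untouched
number_of_secs = int(round((ttp - w_demo) / s_demo))
# -> 87 sections
```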
# +
id_train_labels = to_categorical(id_train)
id_val_labels = to_categorical(id_val)
id_test_labels = to_categorical(id_test)
id_test_labels = np.append(id_test_labels, np.zeros((len(id_test_labels),2)), axis =1)
act_train_labels = to_categorical(act_train)
act_val_labels = to_categorical(act_val)
act_test_labels = to_categorical(act_test)
## Here we add an extra dimension to the datasets so they are ready to use with Conv2D
train_data = np.expand_dims(train_data,axis=3)
print("[INFO] -- Shape of Training Sections:", train_data.shape)
val_data = np.expand_dims(val_data,axis=3)
print("[INFO] -- Validation Sections:"+str(val_data.shape))
test_data = np.expand_dims(test_data,axis=3)
print("[INFO] -- Shape of Test Sections:", test_data.shape)
# -
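The `np.expand_dims(..., axis=3)` calls above append a trailing channel dimension, turning each `(sections, sensors, window)` array into the `(height, width, channels)` layout Conv2D expects. A tiny shape check:

```python
import numpy as np

sections = np.zeros((5, 2, 128))          # (num_sections, sensors, window)
with_channel = np.expand_dims(sections, axis=3)
# -> shape (5, 2, 128, 1): each section is a 2 x 128 "image" with 1 channel
```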
#https://stackoverflow.com/a/45305384/5210098
def f1_metric(y_true, y_pred):
def recall(y_true, y_pred):
"""Recall metric.
Only computes a batch-wise average of recall.
Computes the recall, a metric for multi-label classification of
how many relevant items are selected.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision(y_true, y_pred):
"""Precision metric.
Only computes a batch-wise average of precision.
Computes the precision, a metric for multi-label classification of
how many selected items are relevant.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
precision = precision(y_true, y_pred)
recall = recall(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
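The Keras-backend metric above is easier to sanity-check in plain NumPy; this is a hedged equivalent of the same batch-wise rounded precision/recall computation (not the notebook's function, just a mirror of its formula):

```python
import numpy as np

def f1_sketch(y_true, y_pred):
    # Same formula as f1_metric above: round/clip predictions, then
    # compute batch-wise precision and recall with an epsilon guard.
    eps = 1e-7
    tp = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)))
    possible_positives = np.sum(np.round(np.clip(y_true, 0, 1)))
    predicted_positives = np.sum(np.round(np.clip(y_pred, 0, 1)))
    recall = tp / (possible_positives + eps)
    precision = tp / (predicted_positives + eps)
    return 2 * precision * recall / (precision + recall + eps)

y_true = np.array([1.0, 1.0, 0.0, 0.0])
y_pred = np.array([0.9, 0.2, 0.1, 0.8])  # one TP, one FN, one FP
score = f1_sketch(y_true, y_pred)        # precision = recall = 0.5 -> F1 ~ 0.5
```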
def eval_act(X,Yact, vX, vYact, tX, tYact, ep=50):
height = X.shape[1]
width = X.shape[2]
act_class_numbers = 4
fm = (2,5)
## Callbacks
eval_metric= "val_f1_metric"
early_stop = keras.callbacks.EarlyStopping(monitor=eval_metric, mode='max', patience = 10)
filepath="RAWACT.best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor=eval_metric, verbose=0, save_best_only=True, mode='max')
callbacks_list = [early_stop,checkpoint]
## Callbacks
eval_act = Estimator.build(height, width, act_class_numbers, name ="EVAL_ACT", fm=fm, act_func="softmax",hid_act_func="relu")
eval_act.compile( loss="categorical_crossentropy", optimizer='adam', metrics=['acc', f1_metric])
eval_act.fit(X, Yact,
validation_data = (vX, vYact),
epochs = ep,
batch_size = 128,
verbose = 0,
class_weight = get_class_weights(np.argmax(Yact,axis=1)),
callbacks = callbacks_list
)
eval_act.load_weights("RAWACT.best.hdf5")
eval_act.compile( loss="categorical_crossentropy", optimizer='adam', metrics=['acc',f1_metric])
result1 = eval_act.evaluate(tX, tYact, verbose = 2)
act_acc = result1[1].round(4)*100
print("***[RESULT]*** ACT Accuracy: "+str(act_acc))
preds = eval_act.predict(tX)
preds = np.argmax(preds, axis=1)
conf_mat = confusion_matrix(np.argmax(tYact, axis=1), preds)
conf_mat = conf_mat.astype('float') / conf_mat.sum(axis=1)[:, np.newaxis]
print("***[RESULT]*** ACT Confusion Matrix")
print(np.array(conf_mat).round(3)*100)
f1act = f1_score(np.argmax(tYact, axis=1), preds, average=None).mean()
print("***[RESULT]*** ACT Averaged F-1 Score : "+str(f1act*100))
return f1act
X = train_data.copy()
Yact = act_train_labels
vX = val_data.copy()
vYact = act_val_labels
tX = test_data.copy()
tYact = act_test_labels
ep=50
# +
raw_f1 = np.zeros(5)
for i in range(5):
print("##################")
print("Iteration "+str(i))
raw_f1[i] = eval_act(X, Yact, vX, vYact, tX, tYact, ep)
print("##################")
print("Raw Data, F1 Scores: "+str(raw_f1))
print("Mean: "+str(raw_f1.mean()))
print("STD: "+str(raw_f1.std()))
# +
lm_file = "msda_anon_model"
json_file = open(lm_file+".json", 'r')
loaded_model_json = json_file.read()
json_file.close()
anon_model = model_from_json(loaded_model_json)
anon_model.load_weights(lm_file+"_weights.h5")
print("Loaded model from disk")
X = anon_model.predict(train_data, verbose=1)[0]
vX = anon_model.predict(val_data, verbose=1)[0]
tX = anon_model.predict(test_data, verbose=1)[0]
cae_f1 = np.zeros(5)
for i in range(5):
print("##################")
print("Iteration "+str(i))
cae_f1[i] = eval_act(X, Yact, vX, vYact, tX, tYact, ep)
print("##################")
print("CAE Data, F1 Scores: "+str(cae_f1))
print("Mean: "+str(cae_f1.mean()))
print("STD: "+str(cae_f1.std()))
# +
lm_file = "rep_anon_model"
json_file = open(lm_file+".json", 'r')
loaded_model_json = json_file.read()
json_file.close()
anon_model = model_from_json(loaded_model_json)
anon_model.load_weights(lm_file+"_weights.h5")
print("Loaded model from disk")
X = anon_model.predict(train_data, verbose=1)[0]
vX = anon_model.predict(val_data, verbose=1)[0]
tX = anon_model.predict(test_data, verbose=1)[0]
rep_f1 = np.zeros(5)
for i in range(5):
print("##################")
print("Iteration "+str(i))
rep_f1[i] = eval_act(X, Yact, vX, vYact, tX, tYact, ep)
print("##################")
print("Rep Data, F1 Scores: "+str(rep_f1))
print("Mean: "+str(rep_f1.mean()))
print("STD: "+str(rep_f1.std()))
# +
#*******************
sample_rate = 10 #Hz
#*******************
num_sampels = (128*sample_rate)//50
print("Number of Samples = "+str(num_sampels))
X = train_data.copy()
vX = val_data.copy()
tX = test_data.copy()
from scipy.signal import resample
ds_train_data = X.copy()
ds_val_data = vX.copy()
ds_test_data = tX.copy()
for sens in range(2):
tmp = np.array([resample(x,num_sampels) for x in ds_train_data[:,sens,:,0]])
ds_train_data[:,sens,:num_sampels,0] = tmp
tmp = np.array([resample(x,num_sampels) for x in ds_val_data[:,sens,:,0]])
ds_val_data[:,sens,:num_sampels,0] = tmp
tmp = np.array([resample(x,num_sampels) for x in ds_test_data[:,sens,:,0]])
ds_test_data[:,sens,:num_sampels,0] = tmp
ds_train_data = ds_train_data[:,:,:num_sampels,:]
ds_val_data = ds_val_data[:,:,:num_sampels,:]
ds_test_data = ds_test_data[:,:,:num_sampels,:]
print("[INFO] -- Training Sections:", ds_train_data.shape)
print("[INFO] -- Validation Sections:", ds_val_data.shape)
print("[INFO] -- Test Sections:", ds_test_data.shape)
X = ds_train_data
vX = ds_val_data
tX = ds_test_data
dwnsmpl_f1 = np.zeros(5)
for i in range(5):
print("##################")
print("Iteration "+str(i))
dwnsmpl_f1[i] = eval_act(X, Yact, vX, vYact, tX, tYact, ep)
print("##################")
print("Downsampled Data (10Hz), F1 Scores: "+str(dwnsmpl_f1))
print("Mean: "+str(dwnsmpl_f1.mean()))
print("STD: "+str(dwnsmpl_f1.std()))
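The downsampling above keeps `(128 * sample_rate) // 50` points per 128-point window captured at the dataset's 50Hz rate. The arithmetic for a few target rates:

```python
# Points kept from a 128-point, 50Hz window at various target rates
w_points, orig_rate = 128, 50
kept = {rate: (w_points * rate) // orig_rate for rate in (25, 10, 5)}
# -> {25: 64, 10: 25, 5: 12}
```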
# +
class Estimator:
l2p = 0.001
@staticmethod
def early_layers(inp, fm = (1,3), hid_act_func="relu"):
# Start
x = Conv2D(64, fm, padding="same", kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(inp)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(1, 2))(x)
x = Dropout(0.25)(x)
# 1
x = Conv2D(64, fm, padding="same", kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(x)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(1, 2))(x)
x = Dropout(0.25)(x)
return x
@staticmethod
def late_layers(inp, num_classes, fm = (1,3), act_func="softmax", hid_act_func="relu", b_name="Identifier"):
# 2
x = Conv2D(32, fm, padding="same", kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(inp)
x = BatchNormalization()(x)
x = MaxPooling2D(pool_size=(1, 2))(x)
x = Dropout(0.25)(x)
# End
x = Flatten()(x)
x = Dense(256, kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(64, kernel_regularizer=regularizers.l2(Estimator.l2p), activation=hid_act_func)(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(num_classes, activation=act_func, name = b_name)(x)
return x
@staticmethod
def build(height, width, num_classes, name, fm = (1,3), act_func="softmax",hid_act_func="relu"):
inp = Input(shape=(height, width, 1))
early = Estimator.early_layers(inp, fm, hid_act_func=hid_act_func)
late = Estimator.late_layers(early, num_classes, fm, act_func=act_func, hid_act_func=hid_act_func)
model = Model(inputs=inp, outputs=late ,name=name)
return model
# +
#*******************
sample_rate = 5 #Hz
#*******************
num_sampels = (128*sample_rate)//50
print("Number of Samples = "+str(num_sampels))
X = train_data.copy()
vX = val_data.copy()
tX = test_data.copy()
from scipy.signal import resample
ds_train_data = X.copy()
ds_val_data = vX.copy()
ds_test_data = tX.copy()
for sens in range(2):
tmp = np.array([resample(x,num_sampels) for x in ds_train_data[:,sens,:,0]])
ds_train_data[:,sens,:num_sampels,0] = tmp
tmp = np.array([resample(x,num_sampels) for x in ds_val_data[:,sens,:,0]])
ds_val_data[:,sens,:num_sampels,0] = tmp
tmp = np.array([resample(x,num_sampels) for x in ds_test_data[:,sens,:,0]])
ds_test_data[:,sens,:num_sampels,0] = tmp
ds_train_data = ds_train_data[:,:,:num_sampels,:]
ds_val_data = ds_val_data[:,:,:num_sampels,:]
ds_test_data = ds_test_data[:,:,:num_sampels,:]
print("[INFO] -- Training Sections:", ds_train_data.shape)
print("[INFO] -- Validation Sections:", ds_val_data.shape)
print("[INFO] -- Test Sections:", ds_test_data.shape)
X = ds_train_data
vX = ds_val_data
tX = ds_test_data
dwnsmpl_f1 = np.zeros(5)
for i in range(5):
print("##################")
print("Iteration "+str(i))
dwnsmpl_f1[i] = eval_act(X, Yact, vX, vYact, tX, tYact, ep)
print("##################")
print("Downsampled Data (5Hz), F1 Scores: "+str(dwnsmpl_f1))
print("Mean: "+str(dwnsmpl_f1.mean()))
print("STD: "+str(dwnsmpl_f1.std()))
# -
class SSA(object):
__supported_types = (pd.Series, np.ndarray, list)
def __init__(self, tseries, L, save_mem=True):
"""
Decomposes the given time series with a singular-spectrum analysis. Assumes the values of the time series are
recorded at equal intervals.
Parameters
----------
tseries : The original time series, in the form of a Pandas Series, NumPy array or list.
L : The window length. Must be an integer 2 <= L <= N/2, where N is the length of the time series.
save_mem : Conserve memory by not retaining the elementary matrices. Recommended for long time series with
thousands of values. Defaults to True.
Note: Even if a NumPy array or list is used for the initial time series, all time series returned will be
in the form of a Pandas Series or DataFrame object.
"""
# Tedious type-checking for the initial time series
if not isinstance(tseries, self.__supported_types):
raise TypeError("Unsupported time series object. Try Pandas Series, NumPy array or list.")
# Checks to save us from ourselves
self.N = len(tseries)
if not 2 <= L <= self.N/2:
raise ValueError("The window length must be in the interval [2, N/2].")
self.L = L
self.orig_TS = pd.Series(tseries)
self.K = self.N - self.L + 1
# Embed the time series in a trajectory matrix
self.X = np.array([self.orig_TS.values[i:L+i] for i in range(0, self.K)]).T
# Decompose the trajectory matrix
self.U, self.Sigma, VT = np.linalg.svd(self.X)
self.d = np.linalg.matrix_rank(self.X)
self.TS_comps = np.zeros((self.N, self.d))
if not save_mem:
# Construct and save all the elementary matrices
self.X_elem = np.array([ self.Sigma[i]*np.outer(self.U[:,i], VT[i,:]) for i in range(self.d) ])
# Diagonally average the elementary matrices, store them as columns in array.
for i in range(self.d):
X_rev = self.X_elem[i, ::-1]
self.TS_comps[:,i] = [X_rev.diagonal(j).mean() for j in range(-X_rev.shape[0]+1, X_rev.shape[1])]
self.V = VT.T
else:
# Reconstruct the elementary matrices without storing them
for i in range(self.d):
X_elem = self.Sigma[i]*np.outer(self.U[:,i], VT[i,:])
X_rev = X_elem[::-1]
self.TS_comps[:,i] = [X_rev.diagonal(j).mean() for j in range(-X_rev.shape[0]+1, X_rev.shape[1])]
self.X_elem = "Re-run with save_mem=False to retain the elementary matrices."
# The V array may also be very large under these circumstances, so we won't keep it.
self.V = "Re-run with save_mem=False to retain the V matrix."
# Calculate the w-correlation matrix.
self.calc_wcorr()
def components_to_df(self, n=0):
"""
Returns all the time series components in a single Pandas DataFrame object.
"""
if n > 0:
n = min(n, self.d)
else:
n = self.d
# Create list of columns - call them F0, F1, F2, ...
cols = ["F{}".format(i) for i in range(n)]
return pd.DataFrame(self.TS_comps[:, :n], columns=cols, index=self.orig_TS.index)
def reconstruct(self, indices):
"""
Reconstructs the time series from its elementary components, using the given indices. Returns a Pandas Series
object with the reconstructed time series.
Parameters
----------
indices: An integer, list of integers or slice(n,m) object, representing the elementary components to sum.
"""
if isinstance(indices, int): indices = [indices]
ts_vals = self.TS_comps[:,indices].sum(axis=1)
return pd.Series(ts_vals, index=self.orig_TS.index)
def calc_wcorr(self):
"""
Calculates the w-correlation matrix for the time series.
"""
# Calculate the weights
w = np.array(list(np.arange(self.L)+1) + [self.L]*(self.K-self.L-1) + list(np.arange(self.L)+1)[::-1])
def w_inner(F_i, F_j):
return w.dot(F_i*F_j)
# Calculated weighted norms, ||F_i||_w, then invert.
F_wnorms = np.array([w_inner(self.TS_comps[:,i], self.TS_comps[:,i]) for i in range(self.d)])
F_wnorms = F_wnorms**-0.5
# Calculate Wcorr.
self.Wcorr = np.identity(self.d)
for i in range(self.d):
for j in range(i+1,self.d):
self.Wcorr[i,j] = abs(w_inner(self.TS_comps[:,i], self.TS_comps[:,j]) * F_wnorms[i] * F_wnorms[j])
self.Wcorr[j,i] = self.Wcorr[i,j]
def plot_wcorr(self, min=None, max=None):
"""
Plots the w-correlation matrix for the decomposed time series.
"""
if min is None:
min = 0
if max is None:
max = self.d
if self.Wcorr is None:
self.calc_wcorr()
ax = plt.imshow(self.Wcorr,interpolation = 'none')
plt.xlabel(r"$\tilde{F}_i$")
plt.ylabel(r"$\tilde{F}_j$")
plt.colorbar(ax.colorbar, fraction=0.045)
ax.colorbar.set_label("$W_{i,j}$")
plt.clim(0,1)
# For plotting purposes:
if max == self.d:
max_rnge = self.d-1
else:
max_rnge = max
plt.xlim(min-0.5, max_rnge+0.5)
plt.ylim(max_rnge+0.5, min-0.5)
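The constructor above embeds the series into an L x K trajectory (Hankel) matrix of lagged windows before taking the SVD. A minimal sketch of just that embedding step on a toy series:

```python
import numpy as np

series = np.arange(6.0)                 # N = 6
L = 3                                   # window length, 2 <= L <= N/2
K = len(series) - L + 1                 # K = 4 lagged windows
X = np.array([series[i:L + i] for i in range(K)]).T
# X is the L x K trajectory matrix with constant anti-diagonals:
# [[0. 1. 2. 3.]
#  [1. 2. 3. 4.]
#  [2. 3. 4. 5.]]
```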
# +
import sys
window = 10 # SSA window == number of components
ssa_train_data = train_data.copy()
ssa_val_data = val_data.copy()
ssa_test_data = test_data.copy()
ssa_train_0 = []
ssa_val_0 = []
ssa_test_0 = []
ssa_train_1 = []
ssa_val_1 = []
ssa_test_1 = []
print("\nTrain \n")
for i in range(len(ssa_train_data)):
ssa_train_0.append(SSA(ssa_train_data[i,0,:,0], window))
ssa_train_1.append(SSA(ssa_train_data[i,1,:,0], window))
if(i%100==1):
sys.stdout.write("\rNow: "+str(np.round(i*100/len(ssa_train_data), 2))+"%")
sys.stdout.flush()
print("\nVal \n")
for i in range(len(ssa_val_data)):
ssa_val_0.append(SSA(ssa_val_data[i,0,:,0], window))
ssa_val_1.append(SSA(ssa_val_data[i,1,:,0], window))
if(i%100==1):
sys.stdout.write("\rNow: "+str(np.round(i*100/len(ssa_val_data), 2))+"%")
sys.stdout.flush()
print("\nTest \n")
for i in range(len(ssa_test_data)):
ssa_test_0.append(SSA(ssa_test_data[i,0,:,0], window))
ssa_test_1.append(SSA(ssa_test_data[i,1,:,0], window))
if(i%100==1):
sys.stdout.write("\rNow: "+str(np.round(i*100/len(ssa_test_data), 2))+"%")
sys.stdout.flush()
# +
X = train_data.copy()
vX = val_data.copy()
tX = test_data.copy()
num_comps = 1
print("With "+str(num_comps)+" components:")
for i in range(len(X)):
X[i,0,:,0] = ssa_train_0[i].reconstruct(list(range(0,num_comps)))
X[i,1,:,0] = ssa_train_1[i].reconstruct(list(range(0,num_comps)))
for i in range(len(vX)):
vX[i,0,:,0] = ssa_val_0[i].reconstruct(list(range(0,num_comps)))
vX[i,1,:,0] = ssa_val_1[i].reconstruct(list(range(0,num_comps)))
for i in range(len(tX)):
tX[i,0,:,0] = ssa_test_0[i].reconstruct(list(range(0,num_comps)))
tX[i,1,:,0] = ssa_test_1[i].reconstruct(list(range(0,num_comps)))
SSA_f1 = np.zeros(5)
for i in range(5):
print("##################")
print("Iteration "+str(i))
SSA_f1[i] = eval_act(X, Yact, vX, vYact, tX, tYact, ep)
print("##################")
print("SSA Data (1 component), F1 Scores: "+str(SSA_f1))
print("Mean: "+str(SSA_f1.mean()))
print("STD: "+str(SSA_f1.std()))
# +
X = train_data.copy()
vX = val_data.copy()
tX = test_data.copy()
num_comps = 2
print("With "+str(num_comps)+" components:")
for i in range(len(X)):
X[i,0,:,0] = ssa_train_0[i].reconstruct(list(range(0,num_comps)))
X[i,1,:,0] = ssa_train_1[i].reconstruct(list(range(0,num_comps)))
for i in range(len(vX)):
vX[i,0,:,0] = ssa_val_0[i].reconstruct(list(range(0,num_comps)))
vX[i,1,:,0] = ssa_val_1[i].reconstruct(list(range(0,num_comps)))
for i in range(len(tX)):
tX[i,0,:,0] = ssa_test_0[i].reconstruct(list(range(0,num_comps)))
tX[i,1,:,0] = ssa_test_1[i].reconstruct(list(range(0,num_comps)))
SSA_f1 = np.zeros(5)
for i in range(5):
print("##################")
print("Iteration "+str(i))
SSA_f1[i] = eval_act(X, Yact, vX, vYact, tX, tYact, ep)
print("##################")
print("SSA Data (2 components), F1 Scores: "+str(SSA_f1))
print("Mean: "+str(SSA_f1.mean()))
print("STD: "+str(SSA_f1.std()))
| msda/msda_table2_(I)act(subject).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pickle
import pyLDAvis
import pyLDAvis.gensim
with open('pkl_files/ldavis.pkl','rb') as pf:
lda,corpus,dictionary = pickle.load(pf)
vis = pyLDAvis.gensim.prepare(lda,corpus,dictionary,mds='mmds')
pyLDAvis.display(vis)
| 4-Fletcher/lda_vis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: venvAutoVc
# language: python
# name: venvautovc
# ---
# +
# PLOT CREPE PREDICTIONS
import crepe
from scipy.signal import medfilt
import soundfile as sf
import librosa
import os
from IPython.display import Audio
import numpy as np
import matplotlib.pyplot as plt
audio_path = '/import/c4dm-datasets/VCTK-Corpus-0.92/wav48_silence_trimmed/p225/p225_001_mic1.flac'
audio, sr = sf.read(audio_path)
timestamp, frequency_prediction, confidence, activation = crepe.predict(audio, sr, viterbi=False)
# Audio(data=audio, rate=sr, autoplay=True)
# -
timestamp, frequency_prediction, confidence, activation = crepe.predict(audio, sr, viterbi=False, step_size=20)
len(frequency_prediction)
# USE viterbi=False predictions
confidence_vuv_threshold = 0.5
voiced_bool = (confidence>confidence_vuv_threshold)
unvoiced_bool = ~voiced_bool
def show_plot(title, array):
print(title)
plt.plot(array)
plt.show()
plt.close()
medfilt_frequency = medfilt(frequency_prediction,3)
voiced_flagged_frequency = medfilt_frequency.copy()
voiced_flagged_frequency[unvoiced_bool] = np.nan
voiced_log_freq = voiced_flagged_frequency.copy()
voiced_log_freq[voiced_bool] = np.log(voiced_log_freq[voiced_bool])
unit_var_voiced_log_freq = voiced_log_freq.copy()
unit_var_voiced_log_freq[voiced_bool] = (unit_var_voiced_log_freq[voiced_bool] - np.mean(unit_var_voiced_log_freq[voiced_bool]))/np.std(unit_var_voiced_log_freq[voiced_bool])/4
normalized_unit_var_voiced_log_freq = unit_var_voiced_log_freq.copy()
normalized_unit_var_voiced_log_freq[voiced_bool] = (normalized_unit_var_voiced_log_freq[voiced_bool] - np.min(normalized_unit_var_voiced_log_freq[voiced_bool]))/(np.max(normalized_unit_var_voiced_log_freq[voiced_bool])-np.min(normalized_unit_var_voiced_log_freq[voiced_bool]))
vector_257_normalized_unit_var_voiced_log_freq = normalized_unit_var_voiced_log_freq.copy()
vector_257_normalized_unit_var_voiced_log_freq[voiced_bool] = np.rint(vector_257_normalized_unit_var_voiced_log_freq[voiced_bool]*256)+1
vector_257_vuv_normalized_unit_var_log_freq = vector_257_normalized_unit_var_voiced_log_freq.copy()
vector_257_vuv_normalized_unit_var_log_freq[unvoiced_bool] = 0
vector_257_vuv_normalized_unit_var_log_freq = vector_257_vuv_normalized_unit_var_log_freq.astype(int)
one_hot_preprocessed_pitch_conotours = np.zeros((vector_257_vuv_normalized_unit_var_log_freq.size, vector_257_vuv_normalized_unit_var_log_freq.max()+1))
one_hot_preprocessed_pitch_conotours[np.arange(vector_257_vuv_normalized_unit_var_log_freq.size),vector_257_vuv_normalized_unit_var_log_freq] = 1
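The last two lines above one-hot encode the integer pitch bins (0 is reserved for unvoiced frames). A stripped-down sketch of that encoding step with toy bin indices:

```python
import numpy as np

bins = np.array([0, 3, 3, 1])   # 0 = unvoiced, 1..N = quantized pitch bins
one_hot = np.zeros((bins.size, bins.max() + 1))
one_hot[np.arange(bins.size), bins] = 1
# each row has a single 1 at its frame's bin index
```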
# +
print('original frequency_prediction prediction','\n', frequency_prediction,'\n')
show_plot('original frequency_prediction prediction',frequency_prediction)
medfilt_frequency = medfilt(frequency_prediction,3)
print('medfilt_frequency','\n', medfilt_frequency,'\n')
show_plot('medfilt_frequency',medfilt_frequency)
voiced_flagged_frequency = medfilt_frequency.copy()
voiced_flagged_frequency[unvoiced_bool] = np.nan
print('voiced_flagged_frequency','\n', voiced_flagged_frequency,'\n')
show_plot('voiced_flagged_frequency',voiced_flagged_frequency)
# unvoiced_zerod_medfilt_freq_cont = medfilt(voiced_flagged_frequency,3)
# print('unvoiced_zerod_medfilt_freq_cont','\n', unvoiced_zerod_medfilt_freq_cont,'\n')
voiced_log_freq = voiced_flagged_frequency.copy()
# unvoiced_medfilt_freq_cont = unvoiced_medfilt_freq_cont+1e-7 # not necessary if only performing operations on voiced
voiced_log_freq[voiced_bool] = np.log(voiced_log_freq[voiced_bool])
print('voiced_log_freq','\n', voiced_log_freq,'\n')
show_plot('voiced_log_freq',voiced_log_freq)
normalized_unit_var_voiced_log_freq = voiced_log_freq.copy()
normalized_unit_var_voiced_log_freq[voiced_bool] = (normalized_unit_var_voiced_log_freq[voiced_bool] - np.min(normalized_unit_var_voiced_log_freq[voiced_bool]))/(np.max(normalized_unit_var_voiced_log_freq[voiced_bool])-np.min(normalized_unit_var_voiced_log_freq[voiced_bool]))
print('normalized_unit_var_voiced_log_freq','\n',normalized_unit_var_voiced_log_freq,'\n')
show_plot('normalized_unit_var_voiced_log_freq',normalized_unit_var_voiced_log_freq)
vector_257_normalized_unit_var_voiced_log_freq = normalized_unit_var_voiced_log_freq.copy()
vector_257_normalized_unit_var_voiced_log_freq[voiced_bool] = np.rint(vector_257_normalized_unit_var_voiced_log_freq[voiced_bool]*255)+1
vector_257_vuv_normalized_unit_var_log_freq = vector_257_normalized_unit_var_voiced_log_freq.copy()
vector_257_vuv_normalized_unit_var_log_freq[unvoiced_bool] = 0
print('vector_257_vuv_normalized_unit_var_log_freq','\n',vector_257_vuv_normalized_unit_var_log_freq,'\n')
show_plot('vector_257_vuv_normalized_unit_var_log_freq',vector_257_vuv_normalized_unit_var_log_freq)
vector_257_vuv_normalized_unit_var_log_freq = vector_257_vuv_normalized_unit_var_log_freq.astype(int)
one_hot_preprocessed_pitch_conotours = np.zeros((vector_257_vuv_normalized_unit_var_log_freq.size, vector_257_vuv_normalized_unit_var_log_freq.max()+1))
one_hot_preprocessed_pitch_conotours[np.arange(vector_257_vuv_normalized_unit_var_log_freq.size),vector_257_vuv_normalized_unit_var_log_freq] = 1
one_hot_preprocessed_pitch_conotours
print('one_hot_preprocessed_pitch_conotours','\n',one_hot_preprocessed_pitch_conotours)
# -
one_hot_preprocessed_pitch_conotours[57]
show_plot('original frequency_prediction prediction',frequency_prediction)
show_plot('medfilt_frequency',medfilt_frequency)
show_plot('voiced_flagged_frequency',voiced_flagged_frequency)
show_plot('voiced_log_freq',voiced_log_freq)
show_plot('unit_var_voiced_log_freq',unit_var_voiced_log_freq)
show_plot('normalized_unit_var_voiced_log_freq',normalized_unit_var_voiced_log_freq)
show_plot('vector_257_vuv_normalized_unit_var_log_freq',vector_257_vuv_normalized_unit_var_log_freq)
Audio(data=audio, rate=sr, autoplay=True)
| pitch_contours.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Learning to graph stock data with Pandas and Bokeh
from math import pi
import pandas as pd
from pandas_datareader import data, wb
import datetime
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
symbol = 'AAPL'
web = data
start = datetime.datetime(2016, 8, 5)
end = datetime.datetime(2017, 8, 4)
df = web.DataReader(symbol, 'google', start, end)
df
df.columns
df.index
output_notebook()
# +
inc = df.Close > df.Open
dec = df.Open > df.Close
w = 12*60*60*1000 #half day in milliseconds
TOOLS = "pan,wheel_zoom,box_zoom,reset,save"
p = figure(x_axis_type="datetime", tools=TOOLS, plot_width=1000, title="AAPL Candlestick")
p.xaxis.major_label_orientation = pi / 4
p.grid.grid_line_alpha=0.3
p.segment(df.index, df.High, df.index, df.Low, color="black")
p.vbar(df.index[inc], w, df.Open[inc], df.Close[inc], color="#32CD32", line_color="green")
p.vbar(df.index[dec], w, df.Open[dec], df.Close[dec], color="#FF4500", line_color="red")
output_file("candlestick_AAPL.html", title="candlestick_AAPL.py example")
show(p) # open a browser
| candlestick_AAPL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PythonData
# language: python
# name: pythondata
# ---
# # **Observations and Insights**
#
#
# ### *Summary statistics*
# - Just by looking at the summary statistics for each regimen, we can determine that Capomulin and Ramicane are the two most promising drugs, since on both regimens the average tumor volume drops by around 25% compared to the rest of the drugs tested
# - Infubinol and Ceftamin have the next-best average tumor volumes, although the difference from the remaining drugs is not as large as with Capomulin and Ramicane
#
# ### *Bar chart for number of measurements*
# - This graph was made to determine whether the number of measurements is consistent across all the drugs tested
# - Looking at the graph, the measurement counts are broadly consistent with each other
# - Ramicane and Capomulin are the drugs with the most measurements, leading the others by around 50, which is a significant difference; I will do further analysis to determine whether this difference is significant to the results
#
# ### *Mouse Sex Distribution*
# - The pie chart represents the percentage of female versus male mice; with only a 0.7% difference, I would say the mouse population is evenly distributed by sex
#
# ### *Quartiles, Outliers and Boxplots*
# - This analysis was done to determine if the data between the top four most promising drugs is valid
# - The analysis shows that the only drug with potential outliers is Infubinol; the outlier is very far away from the rest of the data
# - It is interesting that the outlier would fit into the Capomulin or Ramicane data, which suggests the possibility that this mouse's drug was mislabeled
# - Given the number of measurements available and the data reliability test results, I recommend that any further analysis of Infubinol simply delete the outlier mouse instead of spending resources trying to save the data of that test subject
# - This analysis also allows me to determine that the difference in the number of measurements between Capomulin/Ramicane and all the other drugs does not compromise the quality of the data available
#
# ### *Line and Scatter Plots*
# - This analysis allows us to study individual mice on Capomulin and see the reduction of the tumor volume over time
# - Besides the obvious negative trend in tumor volume over time, we can see that for some mice the steepest negative slope is between timepoints 0 and 10, but we can also see a slight volume increase between timepoints 15-20
#
# ### *Correlation and Linear Regression Model*
# - With this analysis we determine that there is a strong correlation between a mouse's weight and the tumor volume
# - It is important to note that a strong correlation does not necessarily reflect a cause-effect relationship
# - On the other hand, correlation is useful to build a linear regression model and determine expected values. With the formula provided we can determine an expected tumor volume by simply measuring a mouse's weight and using it as a control variable
#
#
# +
# importing libraries
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# saving paths, opening csv files and saving them into dataframes
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# merging the dataframes
merged_study = pd.merge(mouse_metadata, study_results, on = 'Mouse ID', how = 'outer')
merged_study
# +
# cleaning data
#the lines below check for duplicated IDs in the mouse metadata; can stay commented once the test is successful
# mice_count = merged_study['Mouse ID'].unique()
# len(mice_count)
#looking for duplicates in the dataframe; unique mice should number 249
mouse_count = merged_study[['Mouse ID', 'Timepoint']].value_counts()
mouse_count = pd.DataFrame(mouse_count).reset_index()
mouse_count = mouse_count.rename(columns={0: 'Count'})
#at least 1 duplicated row, need to check for more
#slicing the dataframe to contain only duplicated values
duplicated_rows = mouse_count.loc[(mouse_count['Count'] > 1),:]
duplicated_rows.head(10)
# +
# obtaining a series with the Mouse ID of all the ones that are duplicated
dup_ID = duplicated_rows['Mouse ID'].unique()
print("The duplicated mouse(s) ID are: " + str(dup_ID))
# +
# display the duplicated data to double-check rows to delete
dup_data = merged_study.loc[(merged_study['Mouse ID'].isin(dup_ID)), :]
dup_data
# +
# deleting duplicated data keeping last values
clean_study = merged_study.drop_duplicates(subset = ['Mouse ID', 'Timepoint'], keep='last')
clean_study
# -
# double checking that I didn't delete any valid data
print ("The study was done in: " + str(len(clean_study['Mouse ID'].unique())) + " unique mice")
# ## Summary Statistics
# +
#statistical summary calculations
#grouping by drug regimen
drug_grouped_summ = clean_study.groupby('Drug Regimen').describe()
#filtering unnecessary data and converting to dataframe
drug_grouped_summ = pd.DataFrame(drug_grouped_summ['Tumor Volume (mm3)']).reset_index()
#computing variance and adding to df
variance = drug_grouped_summ['std'] ** 2
drug_grouped_summ['Variance'] = variance
#computing SEM and adding to df
std_dev = drug_grouped_summ['std']
value_count = drug_grouped_summ['count']
sem_ = std_dev / (value_count ** 0.5)
drug_grouped_summ['SEM'] = sem_
#dropping unnecessary columns, rearranging, and renaming
drug_grouped_summ = drug_grouped_summ[['Drug Regimen', 'mean', '50%', 'Variance', 'std', 'SEM']]
drug_grouped_summ = drug_grouped_summ.rename(columns= {'mean': 'Mean', '50%': 'Median', 'std': 'Std Dev'})
drug_grouped_summ.head(15)
# +
# repeat the calculations from above but using a single line approach (aggregate)
drug_grouped_summ_agg = clean_study[
['Drug Regimen', 'Tumor Volume (mm3)']
].groupby('Drug Regimen').aggregate([np.mean, np.median, np.var, np.std, st.sem])
drug_grouped_summ_agg.head(10)
# -
# ## Bar and Pie Charts
# +
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
#creating the dataframe with data to plot
bar_plot_data = clean_study[['Drug Regimen', 'Mouse ID']].groupby('Drug Regimen').count()
bar_plot_data = bar_plot_data.rename(columns = {'Mouse ID': 'Number of Measurements'})
#plotting and formatting
bar_plot_data.plot(kind ='bar', title = 'Number of Measurements by Drug Regimen', ylim = [0, 250], legend = False,
ylabel='Number of Measurements')
# +
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
#plot parameters
bar_plot_data = bar_plot_data.reset_index()
x = bar_plot_data['Drug Regimen']
y = bar_plot_data['Number of Measurements']
plt.ylim(0, 250)
plt.title("Number of Measurements By Drug Regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Measurements")
plt.xticks(rotation=90)
#plotting with matplotlib
plt.bar(x, y, color='b', alpha=0.5, align="center", width=0.5)
# +
# Generate a pie plot showing the distribution of female versus male mice using pandas
#pulling data from the study
sex_dist = clean_study[['Mouse ID', 'Sex']].groupby('Sex').count().reset_index()
#plotting using pandas
sex_dist.plot(y='Mouse ID', kind ='pie', title = 'Mouse Sex Distribution', legend=False, autopct="%1.1f%%", explode=(0, 0.1),
shadow = True, startangle = 120, labels = sex_dist['Sex'])
# +
# Generate a pie plot showing the distribution of female versus male mice using pyplot
fig, ax = plt.subplots()
#plot parameters
labels = sex_dist['Sex']
sizes = sex_dist['Mouse ID']
colors = ["cyan", "orange"]
title = "Mouse Sex Distribution"
# Tells matplotlib to separate the second slice from the others
explode = (0, 0.1)
ax.set(aspect="equal", title='Mouse Sex Distribution')
plt.pie(sizes, explode=explode, labels=labels, colors=colors, autopct="%1.1f%%", shadow=True, startangle=120)
# -
# ## Quartiles, Outliers and Boxplots
# +
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
#getting the final tumor volume
#dataframe is sorted by Mouse ID and timepoint; dropping duplicates whilst keeping the last value
final_tumor = clean_study.drop_duplicates(subset='Mouse ID', keep='last')
#filtering the dataframe for the studies to analyze
final_tumor = final_tumor.loc[(final_tumor['Drug Regimen']=='Capomulin')|
(final_tumor['Drug Regimen']=='Ramicane')|
(final_tumor['Drug Regimen']=='Infubinol')|
(final_tumor['Drug Regimen']=='Ceftamin'), :]
#selecting only relevant columns and resetting the index
final_tumor = final_tumor.reset_index()
final_tumor = final_tumor[['Mouse ID', 'Drug Regimen', 'Tumor Volume (mm3)']]
final_tumor
# +
# Put treatments into a list for for loop (and later for plot labels)
treatments = final_tumor['Drug Regimen'].unique().tolist()
# Create empty list to fill with tumor vol data (for plotting)
vol_data_list = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
for x in treatments:
#slicing the dataframe for the current iteration
data = final_tumor.loc[final_tumor['Drug Regimen'] == x, :]
#obtaining quartiles, IQR and boundaries
quartiles = data['Tumor Volume (mm3)'].quantile([0.25, 0.5, 0.75])
lower_bound = quartiles[0.25] - (1.5 * (quartiles[0.75] - quartiles[0.25]))
upper_bound = quartiles[0.75] + (1.5 * (quartiles[0.75] - quartiles[0.25]))
#finding the outliers if there is any
upper_outliers = data.loc[data['Tumor Volume (mm3)'] > upper_bound]
lower_outliers = data.loc[data['Tumor Volume (mm3)'] < lower_bound]
total_outliers = len(upper_outliers) + len(lower_outliers)
#conditional to print out the results
#if there are outliers prints the information about the Mouse
if total_outliers > 0:
print (f'For the drug {x} there are {total_outliers} potential outlier(s)')
if len(upper_outliers) > 0:
print(upper_outliers)
if len(lower_outliers) > 0:
print(lower_outliers)
else:
print (f'For the drug {x} there are {total_outliers} potential outlier(s)')
# +
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
#building the dictionary values
#for loop creates a dictionary with the name of the regimen as key and the tumor volumes as series
tumor_dic = {}
for x in treatments:
data = final_tumor.loc[final_tumor['Drug Regimen'] == x, :]
data = data['Tumor Volume (mm3)'].tolist()
tumor_dic[x] = data
#graph parameters
fig1, ax1 = plt.subplots()
ax1.set_title('Tumor Volume (mm3) for relevant studies')
ax1.set_ylabel('Tumor Volume (mm3)')
ax1.set_xlabel('Drug Regimen')
flierprops = dict(marker='s', markerfacecolor='r', markersize=10, linestyle='none', markeredgecolor='r') #outliers format
ax1.boxplot(tumor_dic.values(), flierprops= flierprops)
ax1.set_xticklabels(tumor_dic.keys())
plt.show()
# -
# ## Line and Scatter Plots
# +
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
#new dataframe with only the mice treated with Capomulin
capomulin_data = clean_study.loc[clean_study['Drug Regimen'] == 'Capomulin', :]
#retrieving mouse id's
mouse_id = capomulin_data['Mouse ID'].unique().tolist()
#asking user for the mouse to plot
print ('From the list of mouse ID\'s please type the one you want to see the plot for:')
print (mouse_id)
#for testing purposes, switch comments below when testing is done
#mouse_plot = 's185'
mouse_plot = input ('ID: ')
#obtaining mouse data
plot_data = capomulin_data.loc[capomulin_data['Mouse ID'] == mouse_plot, :]
x_axis = plot_data['Timepoint'].tolist()
y_axis = plot_data['Tumor Volume (mm3)'].tolist()
#graph parameters
plt.title(f'Capomulin regimen: Tumor Volume vs Timepoint for Mouse Id: {mouse_plot}')
plt.xlabel("Timepoint")
plt.ylabel("Tumor Volume (mm3)")
plt.grid()
mouse_line = plt.plot(x_axis, y_axis, marker ='o', color='green')
# +
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
#obtaining tumor volume average
tumor_average = capomulin_data[['Mouse ID', 'Weight (g)', 'Tumor Volume (mm3)']].groupby('Mouse ID').mean()
tumor_average.plot(kind="scatter", x="Weight (g)", y="Tumor Volume (mm3)", grid=True, figsize=(10,10),
title="Weight vs Average Tumor Volume")
# -
# ## Correlation and Regression
# +
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
correlation = st.pearsonr(tumor_average['Weight (g)'],tumor_average['Tumor Volume (mm3)'])
print(f"The Pearson correlation factor between Weight and Tumor Volume is {round(correlation[0],2)}")
# +
#Creating the linear regression model
# Add the linear regression equation and line to plot
#setting x and y for line
x_values = tumor_average['Weight (g)']
y_values = tumor_average['Tumor Volume (mm3)']
(m, b, r, p, stderr) = st.linregress(x_values, y_values)
tumor_average.plot(kind="scatter", x="Weight (g)", y="Tumor Volume (mm3)", grid=True, figsize=(10,10),
title="Weight vs Average Tumor Volume")
line_eq = 'y = ' + str(round(m, 2)) + 'x+' + str(round (b, 2))
plt.annotate(line_eq,(22.1,40.2),fontsize=15,color="red")
plt.plot(x_values, m*x_values + b, color ='r')
print ('The R squared is:' + str(round(r**2, 2)))
| pymaceuticals_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Adversarial Search Algorithm in Game Theory #
# ## Minimax Algorithm ##
# #### _Constraints_: ####
# 1. 2-player zero-sum (one player wins and the other loses) adversarial game
# 2. Players take turns to make move
# 3. Assume the opponent plays optimally
# 4. Need a score (value) associated with each state (usually from an evaluation function)
# 5. Need a winning and losing state
# #### _Description_: ####
# - Each possible move generates a successor state, and the player chooses the move that leads to the state with the highest score.
# - There will be two kinds of layers, MAX layer and MIN layer
# - MAX layer gets the maximum value of its children
# - MIN layer gets the minimum value of its children
# - MAX and MIN layers alternate repeatedly, +1 level for each layer and +1 depth for each pair, until the game tree reaches a terminal state or the maximum depth defined
# #### _Example_: ####
# 
# #### _Complexity_: ####
# _Assume on average each node has b successors and depth is d_
#
# O(bˣbˣbˣ...ˣb) = O(b^d)
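# The layering above can be sketched as a short recursion. The tree below is a hypothetical example (its leaf values are assumptions for illustration, not taken from the figure):

```python
def minimax(node, maximizing):
    # Leaves carry their evaluation score directly.
    if not isinstance(node, list):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    # MAX layer takes the maximum of its children, MIN layer the minimum.
    return max(scores) if maximizing else min(scores)

# MAX over three MIN groups: MAX{MIN{3,5,10}, MIN{2,9,8}, MIN{7,2,3}}
tree = [[3, 5, 10], [2, 9, 8], [7, 2, 3]]
print(minimax(tree, True))  # MAX{3, 2, 2} = 3
```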
class MinimaxPolicy():
def __init__(self, index=0, depth=2):
"""Abstract class of Algorithms using Minimax Policy
By default 3 kinds of optimizer defined
1. Minimizer, returns the tuple with minimum first value
2. Maximizer, returns the tuple with maximum first value
3. Expectation_Adder, returns a tuple containing sum of first values and None
Parameters
----------
index : int
Current agent index in agent array.
depth : int
The depth of game tree going to expand.
"""
self.index = index
self.depth = depth
self.minimizer = lambda *iterable: min(iterable,
key=lambda val: val[0])
self.maximizer = lambda *iterable: max(iterable,
key=lambda val: val[0])
self.expectation_adder = lambda *iterable: (sum([val[0]
for val
in iterable]),
None)
def evaluationFunction(self, game_state):
"""
@todo To be implemented according to the rule of game
"""
raise NotImplementedError("To be implemented")
def get_optimize_specifics(self, agent_index):
"""
Get optimizer and initial best score
Abstract function to be defined in inheritor separately.
Parameters
----------
agent_index : int
The agent index in agent array.
Returns
-------
(function, (float, Action))
tuple of optimizer and initial best score from agent index
"""
raise NotImplementedError("To be implemented")
def minimax(self, agent_index, game_state, depth, alpha=None, beta=None):
"""
Recursively compute the best (score, action) from the given agent and depth.
Parameters
----------
agent_index : int
The agent index in agent array.
game_state: State
State of the game
depth: int
Current depth
alpha: int
Alpha value if using alpha-beta pruning
beta: int
Beta value if using alpha-beta pruning
Returns
-------
(float, Action)
tuple of best score and the action leading to it
"""
# Check Terminal State or not
if game_state.isWin() or game_state.isLose():
return (self.evaluationFunction(game_state), None)
optimizer, best_score = self.get_optimize_specifics(agent_index)
# Take one step ahead
if agent_index + 1 < game_state.getNumAgents():
next_agent_index = agent_index + 1
next_depth = depth
else:
next_agent_index = 0
next_depth = depth + 1
# Traverse through possible successors
legal_actions = game_state.getLegalActions(agent_index)
for action in legal_actions:
successor_state = game_state.generateSuccessor(agent_index,
action)
# Get score of current node if reaches the max depth
# otherwise keep expanding
if next_depth > self.depth:
successor_score = (self.evaluationFunction(successor_state),
None)
else:
successor_score = (self.minimax(next_agent_index,
successor_state,
next_depth,
alpha,
beta)[0],
action)
# Update Best score and alpha beta values if applies
if optimizer == self.maximizer:
best_score = optimizer(best_score,
successor_score)
alpha = alpha and optimizer(alpha,
best_score)
elif optimizer == self.minimizer:
best_score = optimizer(best_score,
successor_score)
beta = beta and optimizer(beta,
best_score)
elif optimizer is self.expectation_adder:
best_score = optimizer(best_score,
(1./len(legal_actions) *
successor_score[0],
None))
else:
raise NotImplementedError("To be implemented")
# Pruning if applies
if alpha and beta and alpha[0] >= beta[0]:
return best_score
return best_score
class MinimaxAgent(MinimaxPolicy):
def __init__(self, index, depth):
"""Agent using minimax algorithm
Parameters
----------
index : int
Current agent index in agent array.
depth : int
The depth of game tree going to expand.
"""
self._player_optimizer = (self.maximizer, (-float('inf'), None))
self._opponent_optimizer = (self.minimizer, (float('inf'), None))
return super().__init__(index=index, depth=depth)
def evaluationFunction(self, game_state):
"""
Parameters
----------
game_state : State
Game State.
Returns
-------
int
Value associated with the game state
"""
return game_state.get_score()
def get_optimize_specifics(self, agent_index):
"""
Get optimizer and inital best score
"""
if agent_index == self.index:
return (self.maximizer, (-float('inf'), None))
else:
return (self.minimizer, (float('inf'), None))
def getAction(self, gameState):
"""
Returns the action associated with best score
"""
_, action = self.minimax(self.index,
gameState,
1)
return action
# ## Alpha-Beta Pruning (Optimization of Minimax) ##
# #### _Description_: ####
# - There will be two variables storing evaluated max and min values, which are ⍺ and β respectively
# - Initial value of ⍺ is -∞, and initial value of β is ∞
# - MAX layer only update ⍺
# - MIN layer only update β
# - Pruning the rest whenever ⍺ >= β
# #### _Example_: ####
# 
# ```
# Step 1:
# MAX{ ⍺ = -∞
# β = ∞
# ↓
# MIN{3, 5, 10},
# MIN{2, a, b},
# MIN{7, 2, 3},
# }
#
# Step 2:
# MAX{ ⍺ = -∞
# β = 3
# ↓
# MIN{3, 5, 10},
# MIN{2, a, b},
# MIN{7, 2, 3},
# }
#
# Step 3:
# MAX{ ⍺ = -∞
# β = 3
# ↓
# MIN{3, 5, 10},
# MIN{2, a, b},
# MIN{7, 2, 3},
# }
#
# Step 4:
# MAX{ ⍺ = 3
# MIN{3, 5, 10},
# β = ∞
# ↓
# MIN{2, a, b},
# MIN{7, 2, 3},
# }
#
# Step 5:
# MAX{ ⍺ = 3
# MIN{3, 5, 10},
# β = 2(pruning because MIN{2, a, b} <= 2 <= 3, result of outer MAX will never fall on MIN{2, a, b})
# ↓
# MIN{2, a, b},
# MIN{7, 2, 3},
# }
#
# Step 6:
# MAX{ ⍺ = 3
# MIN{3, 5, 10},
# MIN{2, a, b},
# β = ∞
# ↓
# MIN{7, 2, 3},
# }
#
# Step 7:
# MAX{ ⍺ = 3
# MIN{3, 5, 10},
# MIN{2, a, b},
# β = 7
# ↓
# MIN{7, 2, 3},
# }
#
# Step 8:
# MAX{ ⍺ = 3
# MIN{3, 5, 10},
# MIN{2, a, b},
# β = 2(pruning because MIN{7, 2, 3} <= 2 <= 3, result of outer MAX will never fall on MIN{7, 2, 3})
# ↓
# MIN{7, 2, 3},
# }
# ```
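# The walkthrough above can be reproduced with a minimal alpha-beta sketch. The leaf values are assumed (9 and 8 stand in for the unknowns a and b); `visited` records which leaves were actually evaluated, so pruned leaves never appear in it:

```python
def alpha_beta(node, maximizing, alpha=float('-inf'), beta=float('inf'), visited=None):
    if visited is None:
        visited = []
    if not isinstance(node, list):
        visited.append(node)           # record every leaf we actually evaluate
        return node, visited
    best = float('-inf') if maximizing else float('inf')
    for child in node:
        score, _ = alpha_beta(child, not maximizing, alpha, beta, visited)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)   # MAX layer only updates alpha
        else:
            best = min(best, score)
            beta = min(beta, best)     # MIN layer only updates beta
        if alpha >= beta:              # prune the remaining siblings
            break
    return best, visited

tree = [[3, 5, 10], [2, 9, 8], [7, 2, 3]]
value, leaves = alpha_beta(tree, True)
print(value, leaves)  # 3 [3, 5, 10, 2, 7, 2] -- 9, 8 and the final 3 are pruned
```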
# #### _Complexity_: ####
#
# _Assume each node has b successors and depth is d_<br>
# _Worst Case_: no pruning happens, same complexity as minimax<br>
# _Best Case_: the first evaluated node is the best node, so 1 expansion for the MAX layer and b for the last MIN layer (it is possible to have multiple MIN layers)
#
# O(1ˣbˣ1ˣbˣ...ˣb) = O(b^(d/2))
#
# Therefore, within the same amount of time Minimax with alpha-beta pruning could traverse 2 times deeper
class AlphaBetaAgent(MinimaxPolicy):
def __init__(self, index, depth):
"""Agent using Alpha-Beta Pruning algorithm
Parameters
----------
index : int
Current agent index in agent array.
depth : int
The depth of game tree going to expand.
"""
self._player_optimizer = (self.maximizer, (-float('inf'), None))
self._opponent_optimizer = (self.minimizer, (float('inf'), None))
return super().__init__(index=index, depth=depth)
def evaluationFunction(self, game_state):
"""
Parameters
----------
game_state : State
Game State.
Returns
-------
int
Value associated with the game state
"""
return game_state.get_score()
def get_optimize_specifics(self, agent_index):
"""
Get optimizer and inital best score
"""
if agent_index == self.index:
return self._player_optimizer
else:
return self._opponent_optimizer
def getAction(self, gameState):
"""
Returns the action associated with best score
"""
_, action = self.minimax(self.index,
gameState,
1,
(-float('inf'), None),
(float('inf'), None))
return action
# ## Expectimax ##
# #### _Description_: ####
# - MAX layer remains the same
# - MIN layer is replaced by chance nodes; that is to say, each possible opponent move is associated with a weight (probability) and the result is the sum of (weight ˣ score)
# - Minimax could almost be considered a special case of expectimax where the min value has a weight of 1.0 and the others 0
# ```
# This models a more realistic problem: the result is not overly biased by the minimum value, since weights can be adjusted for different kinds of opponents, so we don't need to assume the opponent makes the optimal move.
# ```
# #### _Example_: ####
# 
#
# ```
# MAX{
# w1ˣA + w2ˣB + w3ˣC,
# w4ˣD + w5ˣE + w6ˣF,
# w7ˣG + w8ˣH + w9ˣI,
# }
# ```
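# A minimal sketch of the weighted-sum chance layer, with assumed leaf values for A..I and a uniform opponent model of 1/3 per move:

```python
def expectimax(node, maximizing, weights):
    if not isinstance(node, list):
        return node
    if maximizing:
        return max(expectimax(child, False, weights) for child in node)
    # Chance node: weighted sum over the opponent's possible moves.
    return sum(w * expectimax(child, True, weights)
               for w, child in zip(weights, node))

tree = [[3, 5, 10], [2, 9, 8], [7, 2, 3]]   # hypothetical leaf values A..I
weights = [1/3, 1/3, 1/3]                   # uniform opponent model
print(expectimax(tree, True, weights))      # max(6.0, 6.33..., 4.0) = 6.33...
```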
class ExpectimaxAgent(MinimaxPolicy):
def __init__(self, index, depth):
"""Agent using Expectimax algorithm
Parameters
----------
index : int
Current agent index in agent array.
depth : int
The depth of game tree going to expand.
"""
self._player_optimizer = (self.maximizer, (-float('inf'), None))
self._opponent_optimizer = (self.expectation_adder, (0, None))
return super().__init__(index=index, depth=depth)
def evaluationFunction(self, game_state):
"""
Parameters
----------
game_state : State
Game State.
Returns
-------
int
Value associated with the game state
"""
return game_state.get_score()
def get_optimize_specifics(self, agent_index):
"""
Get optimizer for player or opponent
"""
if agent_index == self.index:
return self._player_optimizer
else:
return self._opponent_optimizer
def getAction(self, gameState):
"""
Returns the action associated with best score
"""
_, action = self.minimax(self.index,
gameState,
1)
return action
# ## Optimizations ##
# ### Zobrist Hash ###
# ##### _Description_: #####
# - Compute the hash of a state
# ##### _Steps_: #####
# 1. Initialize a 3 dimensional table that contains keys for each possible case on board
# 2. Start with 0 and do XOR operation for each non empty position on board
# ##### _Advantages_: #####
# 1. When player makes a move, no need to recalculate everything because (A XOR B XOR B = A)
# 2. Fewer hash table collisions (the mathematical proof behind this is complex; skipped)
#
# ##### _Disadvantages_: #####
# 1. Common drawback of tabulation hash that requires certain amount of memory to store keys
# ##### _Example_: #####
#
# _Tic-Tac-Toe_:
# +
from random import randint
# dictionary storing the piece keys to zobrist table
pieces = {
'x': 0,
'o': 1,
}
board_height = 3
board_width = 4
# Zobrist table value for each piece in board
zobrist_table = [[[randint(1, 2**63) for _ in pieces] for _ in range(board_width)] for _ in range(board_height)]
def get_hash(board):
height = len(board)
width = len(board[0])
h_val = 0
for y in range(height):
for x in range(width):
piece = board[y][x]
if piece in pieces:
piece_key = pieces[piece]
h_val ^= zobrist_table[y][x][piece_key]
return h_val
#@todo wrap this function in a class so that previous_board_hash == hash(previous_board)
def update_hash(board, previous_board, previous_hash, positions):
new_hash = previous_hash
for position in positions:
y, x = position
previous_piece = previous_board[y][x]
if previous_piece in pieces:
piece_key = pieces[previous_piece]
new_hash ^= zobrist_table[y][x][piece_key]
current_piece = board[y][x]
if current_piece in pieces:
piece_key = pieces[current_piece]
new_hash ^= zobrist_table[y][x][piece_key]
return new_hash
previous_board = [
['x', 'o', '_', '_'],
['_', 'x', '_', 'o'],
['_', 'o', '_', 'x'],
]
board = [
['x', 'o', 'o', '_'],
['_', 'x', 'o', 'o'],
['_', 'o', 'o', 'x'],
] # updated ((0, 2), (1, 2), (2, 2))
updated_positions = ((0, 2), (1, 2), (2, 2))
# previous hash
previous_hash = get_hash(previous_board)
print(previous_hash)
# updated hash
current_hash = update_hash(board,
previous_board,
previous_hash,
updated_positions)
print(current_hash)
# Should get the same value as previous_hash
print(update_hash(previous_board,
board,
current_hash,
updated_positions))
# -
# ### Evaluation Function ###
# ##### _Description_: #####
#
# Returns the value of a state, depending on the rules of the game; there is also ongoing work on learning evaluation functions with Reinforcement Learning.
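# As a toy illustration (my own sketch, not a reference implementation), a simple Tic-Tac-Toe evaluation could count how many winning lines are still open for each side:

```python
def evaluate(board, player, opponent):
    """Toy heuristic: number of lines (rows, columns, diagonals)
    still open for `player` minus those still open for `opponent`."""
    n = len(board)
    lines = [list(row) for row in board]                          # rows
    lines += [[board[y][x] for y in range(n)] for x in range(n)]  # columns
    lines += [[board[i][i] for i in range(n)],
              [board[i][n - 1 - i] for i in range(n)]]            # diagonals
    open_lines = lambda blocker: sum(1 for line in lines if blocker not in line)
    return open_lines(opponent) - open_lines(player)

board = [
    ['x', '_', '_'],
    ['_', 'x', '_'],
    ['o', '_', '_'],
]
print(evaluate(board, 'x', 'o'))  # 5 open lines for x minus 2 for o = 3
```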
# ## More Exploration ##
# ### Monte Carlo Simulation ###
# #### _Example_: ####
# 
#
# Sampling randomly over the square area gives the result that
#
#     # of samples in circle
#     ---------------------- ≈ π/4
#       # of total samples
#
# so multiplying the ratio of samples inside the circle by 4 approximates π.
#
# This can also be used in simulating odds for gambling games such as Texas Holdem Poker
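# A minimal sketch of the π estimate: sample points in the unit square and count how many fall inside the quarter circle of radius 1.

```python
import random

def estimate_pi(n_samples, seed=0):
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()   # point in the unit square
        if x * x + y * y <= 1.0:            # inside the quarter circle
            inside += 1
    return 4 * inside / n_samples           # 4 * (quarter-circle ratio) ≈ π

print(estimate_pi(100_000))  # ≈ 3.14 (varies with seed and sample count)
```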
# ======================================================================================================================
# # Thanks #
| minimax.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Predicting Coronary Artery Disease in Patients Undergoing Exercise Stress Tests and Angiography
#
# ## Introduction
# The following dataset (www.kaggle.com/ronitf/heart-disease-uci) contains various measured parameters from patients undergoing angiography and exercise stress tests. Angiography is a clinical test used to visualize blood vessels in the body, including the heart. In these patients, angiography was used to determine the narrowing of the coronary arteries that supply blood to the cardiac muscle. Those with artery narrowing are deemed to have coronary artery disease (Detrano et al.).
# The purpose of this notebook is to take a closer look at this relatively small dataset and then establish a machine learning model that can correctly predict coronary artery disease (CAD). Predictions from such a model can then be corroborated with angiograms, where it is possible to do so in a timely manner, so that an appropriate line of treatment can be initiated.
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score, classification_report
from sklearn.metrics import f1_score
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
# -
# ### Feature Description
#
# 1) age
# 2) sex (1 = male, 0 = female)
# 3) cp -chest pain type (4 values: 0: asymptomatic, 1: atypical angina, 2: non-anginal pain, 3: typical angina)
# 4) trestbps - resting blood pressure (mm Hg on admission to the hospital)
# 5) chol - serum cholesterol in mg/dl
# 6) fbs - fasting blood sugar (> 120 mg/dl, 1 = true; 0 = false)
# 7) restecg - resting electrocardiographic results (value 0: showing probable or definite left ventricular hypertrophy by Estes’ criteria, Value 1: normal, value 2: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV))
# 8) thalach - maximum heart rate achieved
# 9) exang - exercise induced angina (1 = yes; 0 = no)
# 10) oldpeak - ST depression induced by exercise relative to rest
# 11) slope - the slope of the peak exercise ST segment (0: downsloping; 1: flat; 2: upsloping)
# 12) ca - number of major vessels (0-3) colored by flourosopy
# 13) thal: A blood disorder called thalassemia Value 0: NULL (dropped from the dataset previously), Value 1: fixed defect (no blood flow in some part of the heart), Value 2: normal blood flow, Value 3: reversible defect (a blood flow is observed but it is not normal)
# 14) Target - Coronary Artery Disease - CAD (0 = no, 1= yes)
df1 = pd.read_csv('heart.csv')
df1.head()
df1.shape #it is a pretty small dataset with only about 300 rows of patient data.
df1.isnull().sum() #no null values
# # EDA
# ### 1) Age distribution
sns.histplot(x='age', hue='sex', data=df1, kde=True,bins=20)
#age distribution is similar for both genders. Greater proportion of patients are male
print('Mean age females :',df1[df1['sex']==0]['age'].mean())
print('Mean age males :',df1[df1['sex']==1]['age'].mean())
print('Number of females :', df1[df1['sex']==0]['age'].count())
print('Number of males :', df1[df1['sex']==1]['age'].count())
# The age distribution is similar for males (mean age approx. 54 years) and females (mean age approx. 56 years), but a greater proportion, almost twice as many, of the patients in the dataset are males.
# ### 2) Blood pressure
df1[df1['target']==0]['trestbps'].mean()
df1[df1['target']==1]['trestbps'].mean()
# Strangely, though it is considered a strongly associated risk factor, the mean blood pressure is slightly lower in those diagnosed with CAD compared to those who were not. The mean blood pressure in both groups exceeds 120 mm Hg and can be considered elevated. The histogram below illustrates the blood pressure distribution in the patient groups.
sns.histplot(x=df1['trestbps'], hue=df1['target'], kde=True)
plt.axvline(x=df1[df1['target']==0]['trestbps'].mean(), lw=2, c='b')
plt.axvline(x=df1[df1['target']==1]['trestbps'].mean(), lw=2, c='orange')
plt.legend(['+ CAD','- CAD'])
# ### 3) Correlation Heat Map
plt.figure(figsize=(12,10))
sns.heatmap(df1.corr(), annot=True)
# There are no strong positive or negative correlations among the various features. The best positive correlations are between target and max heart rate achieved (thalach) and chest pain type (cp). The strongest negative correlations are between target and exercise induced angina (exang) and ST segment depression due to exercise (oldpeak). Let's take a closer look at these features and how they may differ between the patient groups.
# ### 4) Maximum Heart Rate Achieved in Patients with CAD
# Maximum heart rate during exercise is higher in patients with CAD compared to those without
sns.barplot(y='thalach', x='target', data=df1)
plt.xlabel('+/- CAD')
plt.ylabel('Mx Heart Rate (bpm)')
plt.show()
# ### 5) Chest Pain Type Associated with CAD
# Chest pain (angina) of all types is associated with the presence of CAD but less common in its absence.
sns.countplot(x='cp', hue='target', data=df1)
plt.legend(['- CAD', '+ CAD'])
plt.xlabel('Chest Pain Type')
plt.show()
# ### 6) ECG Abnormalities Associated with CAD
# During exercise testing, an upward ST slope is present in a large proportion of those with CAD. A downward slope is less commonly observed in those with CAD but more common in those without.
sns.countplot(x='slope', hue='target', data=df1)
plt.legend(['- CAD', '+ CAD'])
plt.xlabel('Slope of ST segment')
plt.show()
# ### 7) Serum Cholesterol Levels
# Serum cholesterol is not very different between those with and without CAD. This is surprising, since a high cholesterol level is a strong risk factor for developing coronary artery disease. Since the split between bad cholesterol (LDL, low-density lipoprotein) and good cholesterol (HDL, high-density lipoprotein) is not given, we cannot draw firm conclusions from this data alone. It is, however, interesting to note that cholesterol levels are elevated in both groups; ideal total cholesterol is 200 mg/dL or lower (https://www.healthline.com/health/high-cholesterol/levels-by-age). It is possible that, in the presence of elevated total serum cholesterol, some patients are more prone to developing CAD than others.
sns.barplot(y='chol', x='target', data=df1)
plt.xlabel('+/- CAD')
plt.ylabel('Serum Cholesterol (mg/dL)')
plt.show()
# ### 8) Number of Patients in Each Group
# The patient groups are more or less balanced, so we will proceed with preparing the data for modeling.
sns.countplot(x='target', data=df1)
plt.xlabel('+/- CAD')
plt.show()
# # Data preprocessing
# ## 1) Create Dummy Columns
df1.dtypes
dummy_cols = ['sex', 'cp', 'fbs', 'restecg', 'exang', 'slope', 'ca', 'thal']
df2 = pd.get_dummies(data = df1, columns = dummy_cols, drop_first = True)
df2.head()
# ## 2) Train/test Split the Data
#define X and y
X = df2.drop('target', axis = 1)
y = df2['target']
#split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42, stratify = y)
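# A hedged aside: stratify=y keeps the CAD class ratio identical in both
# splits. A minimal sketch on synthetic labels (not the heart.csv data):

```python
# Demo of stratified splitting on made-up labels with a 70/30 class balance;
# both resulting splits preserve the 0.30 positive-class proportion.
import numpy as np
from sklearn.model_selection import train_test_split

X_demo = np.arange(100).reshape(-1, 1)
y_demo = np.array([0] * 70 + [1] * 30)  # 70/30 class split
Xtr, Xte, ytr, yte = train_test_split(
    X_demo, y_demo, test_size=0.2, random_state=42, stratify=y_demo)
print(ytr.mean(), yte.mean())  # both are exactly 0.30
```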
# ## 3) Scale the data
#scale the data
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# # Data Modeling
# Models tested include Logistic Regression, Random Forest Classifier, Support Vector Machine, Gradient Boosting Classifier and XGBoost Classifier
# +
model_logreg = LogisticRegression(max_iter=3500, C=1)
model_SVC=SVC(kernel='rbf', C=5, gamma='auto')
model_rfc=RandomForestClassifier(n_estimators=100, max_depth = 5)
model_gbc=GradientBoostingClassifier(learning_rate=0.1, max_depth=3)
model_xgb=XGBClassifier(use_label_encoder=False, eval_metric='mlogloss', n_estimators=100, learning_rate=0.5, max_depth=3)
#create a list of the model objects to be used in the for loop below
models=[model_logreg, model_SVC, model_rfc, model_gbc, model_xgb]
#an empty dictionary for storing average model metric data
score_dict = {'Logistic Regression':{},
'Support Vector':{},
'Random Forest':{},
'Gradient Boost':{},
'XG Boost':{}
}
#following function inserts model metrics into the score_dict dictionary
def score_dict_edit (model):
score_dict[model]['Model accuracy']=np.mean(accuracy_scores)
score_dict[model]['CAD_precision']=np.mean(precision_scores)
score_dict[model]['CAD_recall']=np.mean(recall_scores)
score_dict[model]['CAD_F1']=np.mean(f1_scores)
for model in models:
#create empty lists to store the different testing metrics
accuracy_scores =[]
precision_scores = []
recall_scores = []
f1_scores = []
model.fit(X_train, y_train)
predict = model.predict(X_test)
accuracy_scores.append(accuracy_score(y_test, predict))
precision_scores.append(precision_score(y_test, predict, average=None)[1])
recall_scores.append(recall_score(y_test, predict, average=None)[1])
f1_scores.append(f1_score(y_test, predict, average=None)[1])
if model==model_logreg:
score_dict_edit ('Logistic Regression')
elif model==model_SVC:
score_dict_edit ('Support Vector')
elif model==model_rfc:
score_dict_edit ('Random Forest')
elif model==model_gbc:
score_dict_edit ('Gradient Boost')
else:
score_dict_edit ('XG Boost')
print('The following table summarizes the model accuracy and the metrics for correctly classifying patients with CAD')
df_data=pd.DataFrame(score_dict).transpose()
df_data
# -
# All the models tested performed fairly well without hyperparameter tuning. Logistic regression and SVM performed best, with impressive (~90%) recall scores for the CAD class.
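# The scores above come from a single 80/20 split. As a hedged sketch (using
# a synthetic stand-in, since heart.csv may not be on hand), k-fold
# cross-validation would give more stable recall estimates:

```python
# 5-fold cross-validated recall on synthetic classification data; the mean
# and spread across folds are a steadier estimate than one split.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X_syn, y_syn = make_classification(n_samples=300, n_features=13,
                                   random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=3500), X_syn, y_syn,
                         cv=5, scoring='recall')
print(scores.mean(), scores.std())  # mean recall and its spread across folds
```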
# # Concluding Remarks
# 1) Despite the small size of the dataset, we were able to establish a machine learning model for predicting coronary artery disease (CAD) in patients using parameters (features) that can be easily determined in a hospital setting.
# 2) Models using Logistic Regression or Support Vector Machine algorithms performed exceptionally well in predicting patients with CAD. Given the satisfactory outcome, no effort was made to eliminate any of the features. In fact, other Kagglers have noted that model performance does not improve upon removing features, perhaps indicating that all the features contribute to the models' predictive capabilities.
# 3) Other risk factors associated with CAD include smoking, diabetes, obesity, and a family history of CAD. Including this information in the dataset might have further improved the predictive capabilities of the model.
# # Reference
#
# <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (1989). International application of a new probability algorithm for the diagnosis of coronary artery disease. American Journal of Cardiology, 64, 304-310.
| Coronary_Artery_Disease_Prediction/Heart_Disease_data_read.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from bayes_opt import BayesianOptimization
from smart_grasping_sandbox.smart_grasper import SmartGrasper
from tf.transformations import quaternion_from_euler
from math import pi
import time
import rospy
from math import sqrt, pow
import random
from sys import argv
from numpy import var, mean
import numpy as np
from keras.models import Sequential, Model
from keras.layers import Dense, Activation, Flatten, Input, Concatenate
from keras.optimizers import Adam
from keras.optimizers import sgd
import keras
import pickle
# +
sgs = SmartGrasper()
MIN_LIFT_STEPS = 1
# Cut off the action
MAX_BALL_DISTANCE = 0.4
REPEAT_GRASP = 1
# +
# SGS client
# Grasps using the parameters fed in and returns the state when asked
class GraspQuality(object):
def __init__(self, sgs):
self.sgs = sgs
self.last_distance = None
self.current_grasp = {}
def check_stable(self, joint_names):
current_min = 500
positions = []
velocities = []
efforts = []
for k in range(30):
sgs.move_tip(y=0.03)
ball_distance = self.__compute_euclidean_distance()
if k > MIN_LIFT_STEPS and ball_distance < current_min:
current_min = ball_distance
break
if ball_distance > MAX_BALL_DISTANCE:
break
time.sleep(0.5)
robustness = (1/(current_min - 0.18))**2
return robustness
def __compute_euclidean_distance(self):
ball_pose = self.sgs.get_object_pose()
hand_pose = self.sgs.get_tip_pose()
dist = sqrt((hand_pose.position.x - ball_pose.position.x)**2 + \
(hand_pose.position.y - ball_pose.position.y)**2 + \
(hand_pose.position.z - ball_pose.position.z)**2)
return dist
def run_experiments(self, grasp_distance,
H1_F1J1, H1_F1J2, H1_F1J3,
H1_F2J1, H1_F2J2, H1_F2J3,
H1_F3J1, H1_F3J2, H1_F3J3):
robustness = []
for _ in range(REPEAT_GRASP):
robustness.append(self.experiment(grasp_distance,
H1_F1J1, H1_F1J2, H1_F1J3,
H1_F2J1, H1_F2J2, H1_F2J3,
H1_F3J1, H1_F3J2, H1_F3J3))
# trying to maximize the robustness average - while minimizing its variance
utility = mean(robustness) / max(0.001,sqrt(var(robustness))) # don't divide by 0
return utility
def experiment(self, grasp_distance,
H1_F1J1, H1_F1J2, H1_F1J3,
H1_F2J1, H1_F2J2, H1_F2J3,
H1_F3J1, H1_F3J2, H1_F3J3):
self.sgs.reset_world()
time.sleep(0.1)
self.sgs.reset_world()
time.sleep(0.1)
self.sgs.open_hand()
time.sleep(0.1)
self.sgs.open_hand()
time.sleep(0.01)
ball_pose = self.sgs.get_object_pose()
ball_pose.position.z += 0.5
#setting an absolute orientation (from the top)
quaternion = quaternion_from_euler(-pi/2., 0.0, 0.0)
ball_pose.orientation.x = quaternion[0]
ball_pose.orientation.y = quaternion[1]
ball_pose.orientation.z = quaternion[2]
ball_pose.orientation.w = quaternion[3]
self.sgs.move_tip_absolute(ball_pose)
self.sgs.move_tip(y=grasp_distance)
# close the grasp
self.sgs.check_fingers_collisions(False)
self.current_grasp["H1_F1J1"] = H1_F1J1
self.current_grasp["H1_F1J2"] = H1_F1J2
self.current_grasp["H1_F1J3"] = H1_F1J3
self.current_grasp["H1_F2J1"] = H1_F2J1
self.current_grasp["H1_F2J2"] = H1_F2J2
self.current_grasp["H1_F2J3"] = H1_F2J3
self.current_grasp["H1_F3J1"] = H1_F3J1
self.current_grasp["H1_F3J2"] = H1_F3J2
self.current_grasp["H1_F3J3"] = H1_F3J3
self.sgs.send_command(self.current_grasp, duration=0.5)
# lift slowly and check the quality
joint_names = self.current_grasp.keys()
robustness = self.check_stable(joint_names)
rospy.loginfo("Grasp quality = " + str(robustness))
sgs.check_fingers_collisions(True)
# reward
return robustness
# +
# States
# Assumptions:
# There is no noise in kinematic control
# Perfect localization of objects
# States variables : finger positions, grasp distance,
# Position of the ball, position of the robot arm, distance to ball, finger positions
# Start with random finger joint values with regard to shadow robotic hand
# Start with slightly random arm position and ball position
grasp_distance = -0.16338
def initializeAnExperiment():
# Reset the world
sgs.reset_world()
# Give gazebo some time
time.sleep(0.1)
# Double reset vs Gazebo
sgs.reset_world()
# Give gazebo some time
time.sleep(0.1)
sgs.open_hand()
time.sleep(0.1)
sgs.open_hand()
time.sleep(0.1)
state = []
init_joint_state = sgs.get_current_joint_state()
init_grasp_distance_state_x = 0.0
init_grasp_distance_state_y = 0.0
init_grasp_distance_state_z = 0.0
H1_F1J1 = (np.random.rand() / 4 ) + 0.6
H1_F1J2 = (np.random.rand() / 4 ) + 0.4
H1_F1J3 = (np.random.rand() / 4 ) + 0.4
H1_F2J1 = (np.random.rand() / 4 ) - 0.1
H1_F2J2 = (np.random.rand() / 4 ) + 0.1
H1_F2J3 = (np.random.rand() / 4 ) + 0.0
H1_F3J1 = (np.random.rand() / 4 ) - 0.1
H1_F3J2 = (np.random.rand() / 4 ) + 0.1
H1_F3J3 = (np.random.rand() / 4 ) + 0.4
state.append(H1_F1J1)
state.append(H1_F1J2)
state.append(H1_F1J3)
state.append(H1_F2J1)
state.append(H1_F2J2)
state.append(H1_F2J3)
state.append(H1_F3J1)
state.append(H1_F3J2)
state.append(H1_F3J3)
state.append(0)
random_distort_of_hand_x = np.random.rand() / 5.0 - 0.2
random_distort_of_hand_z = np.random.rand() / 5.0 - 0.2
sgs.move_tip(x = random_distort_of_hand_x, z = random_distort_of_hand_z)
time.sleep(0.1)
ball_pose = sgs.get_object_pose()
ball_pose.position.z += 0.5
#setting an absolute orientation (from the top)
quaternion = quaternion_from_euler(-pi/2., 0.0, 0.0)
ball_pose.orientation.x = quaternion[0]
ball_pose.orientation.y = quaternion[1]
ball_pose.orientation.z = quaternion[2]
ball_pose.orientation.w = quaternion[3]
return sgs, state
def take_step(current_state, action):
next_state = current_state[0].tolist()
reward = 0
experiment_cont = True
if action < 18:
# Joint movement: even actions decrease, odd actions increase joint action // 2
if action % 2 == 0:
next_state[action // 2] -= JOINT_STEP_SIZE
if action % 2 == 1:
next_state[action // 2] += JOINT_STEP_SIZE
else:
# Arm movement
reward = grasp_quality.run_experiments(grasp_distance, next_state[0], next_state[1], next_state[2], next_state[3]
, next_state[4], next_state[5], next_state[6], next_state[7], next_state[8])
if reward > 10:
experiment_cont = False
next_state = np.asarray(next_state, dtype=np.float32).reshape((1, 10))
return next_state, reward, experiment_cont
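# The action encoding used by take_step can be made explicit with a small
# Python 3 sketch (decode_action is a hypothetical helper, not part of the
# notebook's API): even actions decrement joint action // 2, odd actions
# increment it, and action 18 triggers a grasp attempt.

```python
# Decode a discrete action index into (kind, joint index, direction).
def decode_action(action):
    if action < 18:
        joint = action // 2
        direction = -1 if action % 2 == 0 else +1
        return ('move', joint, direction)
    return ('grasp', None, 0)

print(decode_action(0))   # ('move', 0, -1)
print(decode_action(5))   # ('move', 2, 1)
print(decode_action(18))  # ('grasp', None, 0)
```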
# +
memory = []
discount = 0.99
max_memory = 500
# Replay-memory parameters are the module-level variables defined above
# (memory, discount, max_memory); no class-style __init__ is needed here.
def remember(states, game_over):
memory.append([states, game_over])
if len(memory) > max_memory:
del memory[0]
def get_batch(model, batch_size=10):
len_memory = len(memory)
num_actions = model.output_shape[-1]
# env_dim = self.memory[0][0][0].shape[1]
env_dim = 10
inputs = np.zeros((min(len_memory, batch_size), env_dim))
targets = np.zeros((inputs.shape[0], num_actions))
for i, idx in enumerate(np.random.randint(0, len_memory,
size=inputs.shape[0])):
state_t, action_t, reward_t, state_tp1 = memory[idx][0]
game_over = memory[idx][1]
inputs[i:i+1] = state_t
# There should be no target values for actions not taken.
# Thou shalt not correct actions not taken #deep
targets[i] = model.predict(state_t)[0]
#from IPython.core.debugger import Tracer; Tracer()()
action_index = np.argmax(model.predict(state_tp1)[0])
value = model.predict(state_tp1)[0][action_index]
Q_sa = value
if game_over: # if game_over is True
targets[i, action_t] = reward_t
else:
# reward_t + gamma * max_a' Q(s', a')
targets[i, action_t] = reward_t + discount * Q_sa
return inputs, targets
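# The Bellman target built inside get_batch is
# target = reward + discount * max_a' Q(s', a'), except at terminal states,
# where it is just the reward. A tiny numeric sketch with made-up Q-values
# (Python 3, values purely illustrative):

```python
# Worked example of the Q-learning target used in get_batch.
import numpy as np

discount = 0.99
q_next = np.array([0.2, 1.5, -0.3])   # made-up Q(s', a') values
reward = 2.0

target_nonterminal = reward + discount * q_next.max()  # 2 + 0.99 * 1.5
target_terminal = reward                                # game over: just r
print(target_nonterminal, target_terminal)  # 3.485 2.0
```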
# +
# Number of episodes
NUM_OF_EPISODES = 10000
# Max steps
NUM_MAX_STEPS = 1000
STATE_SIZE = 10
# Each of the 9 joints can be increased or decreased; the one extra action grasps at the current state
# 2 directions * 9 joints + 1 grasp action
NUM_OF_JOINTS = 1
NUM_OF_ACTIONS = 2 * 9 + 1 # 19 actions in total
# Move a joint by a constant value
JOINT_STEP_SIZE = 0.01
i_episode = 0
hidden_size = 50
learning_rate = 0.99
discount_rate = 0.9
batch_size = 50
epsilon = 0.3
sgs, state = initializeAnExperiment()
# Use an old state, if need to continue with training
#f = open('state.pckl', 'rb')
#state = pickle.load(f)
#f.close()
experiment_cont = True
grasp_quality = GraspQuality(sgs)
epsilonGrasp = 0.05
# Use a saved model if you want
model = keras.models.load_model('model-Q.h5')
#model = Sequential()
#model.add(Dense(hidden_size, input_shape=(STATE_SIZE, ), activation='relu'))
#model.add(Dense(hidden_size, activation='relu'))
#model.add(Dense(hidden_size, activation='relu'))
#model.add(Dense(NUM_OF_ACTIONS))
#model.compile(Adam(lr=0.01), "mse")
# unused module-level variant of GraspQuality.check_stable
def check_stable(self, joint_names):
current_min = 1000
positions = []
velocities = []
efforts = []
for k in range(30):
sgs.move_tip(y=0.02)
ball_distance = self.__compute_euclidean_distance()
if k > MIN_LIFT_STEPS and ball_distance < current_min:
current_min = ball_distance
break
if ball_distance > MAX_BALL_DISTANCE:
break
time.sleep(0.5)
reward = (1/(current_min - 0.18))**2
return reward
for i_episode in range(NUM_OF_EPISODES):
step = 0
loss = 0
current_state = np.asarray(state, dtype=np.float32).reshape((1, 10))
print 'Episode:' + str(i_episode)
experiment_cont = True
while experiment_cont:
step += 1
# Convert list to keras friendly numpy shape
if np.random.rand() <= epsilon:
action = np.random.randint(0, NUM_OF_ACTIONS - 1, size=1)[0]
else:
q = model.predict(current_state)
action = np.argmax(q[0]) - 1
if np.random.rand() <= epsilonGrasp:
action = 18
#print action
next_state, reward, experiment_cont = take_step(current_state, action)
remember([next_state, action, reward, current_state], (not experiment_cont))
inputs, targets = get_batch(model, batch_size=batch_size)
loss += model.train_on_batch(inputs, targets) #[0]
current_state = next_state
if step > NUM_MAX_STEPS:
next_state, reward, experiment_cont = take_step(current_state, 18)
inputs, targets = get_batch(model, batch_size=batch_size)
loss += model.train_on_batch(inputs, targets) #[0]
break
print 'Total Loss:' + str(loss)
print 'Total number of steps:' + str(step)
#actions = policy.getActions(current_state)
#next_state, experiment_cont = step(sgs, action)
# +
initializeAnExperiment()
# -
take_step(current_state, action)
current_state[0].tolist()
reward
i_episode
take_step(current_state=current_state, action=18)
# +
model.save('model-Q.h5') # creates an HDF5 file 'model-Q.h5'
f = open('state.pckl', 'wb')
pickle.dump(state, f)
f.close()
| Chapter08/chapter8_tutorials/smart_grasping_sandbox/smart_grasping_sandbox/notebooks/RL-Hand.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ArturoSirvent/TFG_notebooks/blob/main/Preprocesado_datos_multitelescop/Nuevos_archivos_muchos_telescopios.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="OsgrQ58Oz2Yv"
"""
function ClickConnect(){
console.log("Clicked on connect button");
document.querySelector("colab-connect-button").click()
}
setInterval(ClickConnect,60000)
"""
# + id="9lLnGqjMlTBd"
import os
import numpy as np
import tarfile
import re
import gzip
import glob
from tqdm.notebook import tqdm
import pandas as pd
import matplotlib.pyplot as plt
# + id="ZCVo-Gt_l5es"
#the important directories are:
npy_save_dir="/content/drive/MyDrive/prediccion_datos_muchos_telescopios/datos_muchos_tels_seleccion_6_03_21"
datos_compr="/content/drive/MyDrive/Datos_CTA_02_03_2021"
npy_data="/content/drive/MyDrive/prediccion_datos_muchos_telescopios/datos_muchos_tels_seleccion_6_03_21/npy_data"
# + [markdown] id="nHrXb9Lu8BG0"
# # Extract
# + id="LGw2RWgdmv1n"
def extract_tar(dir_in,dir_out,final_folder=True):
#this function takes a .tar path and extracts it into a folder
#in dir_out with the same name (if final_folder=True)
if tarfile.is_tarfile(dir_in)!=True:
print("ERROR: NOT A TAR FILE")
return
else:
with tarfile.open(dir_in) as aux_tar:
if final_folder:
nombre_aux=os.path.basename(dir_in).replace(".tar","")
dir_out=f"{dir_out}/{nombre_aux}"
os.mkdir(dir_out)
aux_tar.extractall(dir_out)
# + colab={"base_uri": "https://localhost:8080/", "height": 128} id="faHOBhxe37QC" outputId="d7a48df3-1911-4a37-9dfc-9c7d2b94f631"
# + colab={"base_uri": "https://localhost:8080/"} id="ZKT7M2FSoquB" outputId="e7dfedf2-570a-4660-bcf8-3b8964d015a9"
lista_tar=os.listdir(datos_compr)
for i in lista_tar:
print(i)
extract_tar(f"{datos_compr}/{i}",npy_save_dir,final_folder=True)
# + [markdown] id="rBr6nbKp8DWB"
# # Decompress
# + id="e0rK9Q6uxCgV"
#now decompress the files
#the first thing we do is decompress the gzip files we have
def descomprimir_gunzip(ground_dir,elements,folders=True):
#requires a folder of per-element folders; new ones will be created as extract_<element>
#function to decompress the gzipped files
#ground_dir is the parent folder of the element folders, or the one holding the data directly
#the elements are e.g. [gamma,electron,silicium...]
#folders=True means there is a container folder per element; if False, all the data
#sit in the ground directory.
if folders:
for i in elements:
new_folder=f"{ground_dir}/extract_{i}"
os.mkdir(new_folder)
element_dir=f"{ground_dir}/{i}"
os.chdir(element_dir)
files_names_dt=glob.glob("*.dt.gz")
files_names_txt=[h.replace(".dt.gz",".txt.gz") for h in files_names_dt]
for j in tqdm(range(len(files_names_dt))):
try:
if (os.path.isfile(f"{element_dir}/{files_names_dt[j]}")) and (os.path.isfile(f"{element_dir}/{files_names_txt[j]}")):
#print(files_names_dt[j].replace(".dt.gz",""))
with gzip.open(f"{element_dir}/{files_names_dt[j]}","rb") as f:
new_name_dt=files_names_dt[j].replace(".dt.gz",".dt")
fp=open(f"{new_folder}/{new_name_dt}","wb")
aux=f.read()
fp.write(aux)
fp.close()
f.close()
with gzip.open(f"{element_dir}/{files_names_txt[j]}","rb") as f:
new_name_txt=files_names_txt[j].replace(".txt.gz",".txt")
fp=open(f"{new_folder}/{new_name_txt}","wb")
aux=f.read()
fp.write(aux)
fp.close()
f.close()
else:
print(f"For file {files_names_dt[j]} there is no corresponding txt")
except IndexError:
print("There are more dt files than txt files")
# + colab={"background_save": true, "base_uri": "https://localhost:8080/", "height": 49} id="o9vK1kpe5hKC" outputId="9b68607c-713b-4153-b914-c84fe67500e2"
elementos=["iron","nitrogen","silicon","electron","helium","proton","gamma"]
descomprimir_gunzip(npy_save_dir,elementos,folders=True)
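# A design note: the read()/write() pairs in descomprimir_gunzip load each
# decompressed file fully into memory; shutil.copyfileobj streams in chunks
# instead. A hedged sketch with throwaway demo file names (not real data):

```python
# Build a tiny gzip file, decompress it by streaming, then clean up.
import gzip
import os
import shutil

with gzip.open('demo.txt.gz', 'wb') as f:      # small gzip to decompress
    f.write(b'hello telescope')

with gzip.open('demo.txt.gz', 'rb') as src, open('demo.txt', 'wb') as dst:
    shutil.copyfileobj(src, dst)               # streams without read()-all

with open('demo.txt', 'rb') as f:
    content = f.read()
print(content)  # b'hello telescope'
os.remove('demo.txt.gz')
os.remove('demo.txt')
```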
# + id="i97y_puQ_3B7"
a=".txt"
# + [markdown] id="F6TWcRCwWV8f"
# # What data do we have and which are missing?
# + id="HdExnd8jz7E1"
def dif_dt_txt(dir,faltantes=False,max_val=None,ending=(".dt",".txt")):
#this function checks whether the same files exist as txt and dt, which are missing, and up to which run they go
#it returns a dictionary with the telescopes and the runs of each one
#if we request the missing ones and give a max_val we get the runs absent for each telescope
os.chdir(dir)
file_dt=glob.glob(f"*{ending[0]}")
file_txt=glob.glob(f"*{ending[1]}")
#first extract the important information: the telescope and the run
tel_run_dt=np.array([ np.array([ re.findall("tel_([0-9]*)_",i)[0] ,re.findall("run_([0-9]*).",i)[0]],dtype="int") for i in file_dt])
tel_run_txt=np.array([ np.array([ re.findall("tel_([0-9]*)_",i)[0] ,re.findall("run_([0-9]*).",i)[0] ],dtype="int") for i in file_txt])
#once we have the info, we check whether the two sets match
#first the dimensions
if tel_run_dt.shape[0]!=tel_run_txt.shape[0]:
print("Dimension mismatch: dt and txt file counts differ")
if tel_run_dt.shape[0] > tel_run_txt.shape[0]:
for i in tel_run_dt:
if not np.all(tel_run_txt==i,axis=-1).any():
print(f"tel_{i[0]}_run_{i[1]}{ending[0]} has no corresponding {ending[1]} file.")
else:
for i in tel_run_txt:
if not np.all(tel_run_dt==i,axis=-1).any():
print(f"tel_{i[0]}_run_{i[1]}{ending[1]} has no corresponding {ending[0]} file.")
return None
#are the two lists equal?
salir=False
for i in tel_run_dt:
if i not in tel_run_txt:
print(f"{i} has no corresponding {ending[0]}")
salir=True
for i in tel_run_txt:
if i not in tel_run_dt:
print(f"{i} has no corresponding {ending[1]}")
salir=True
if salir:
return None
else:
print(f"For \"{os.path.basename(dir)}\" every {ending[1]} has a matching {ending[0]} and vice versa, all good.")
#if the dimensions are fine, list the telescopes available for each element and the runs for each telescope
telescopios=sorted(np.unique(tel_run_dt[:,0]))
runs=[sorted(tel_run_dt[tel_run_dt[:,0]==i][:,1]) for i in telescopios]
#finally, build a dictionary mapping each telescope to the runs it groups
if not faltantes:
return dict(zip(telescopios,runs))
else:
if max_val is None:
print("pass a maximum value for the runs")
return None
else:
runs_reales=np.arange(1,max_val+1)
run_faltantes=[]
for i in range(len(runs)):
faltan=[]
for j in runs_reales:
if j not in runs[i]:
faltan.append(j)
run_faltantes.append(faltan)
diccionario=dict(zip(telescopios,run_faltantes))
for i in telescopios:
if (diccionario[i]==[]):
diccionario.pop(i)
return diccionario
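# dif_dt_txt pulls the telescope and run numbers out of the file names with
# regexes. A small hedged demo of that parsing on a sample file name:

```python
# Extract telescope and run numbers from names like 'iron_tel_59_run_01.dt'.
import re

name = 'iron_tel_59_run_01.dt'
tel = int(re.findall(r"tel_([0-9]*)_", name)[0])
run = int(re.findall(r"run_([0-9]*).", name)[0])
print(tel, run)  # 59 1
```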
# + colab={"base_uri": "https://localhost:8080/"} id="bFxYLlDmWU-w" outputId="f72f4163-d532-4ac8-beae-e75e6f3eedc6"
elementos=["iron","nitrogen","silicon","electron","helium","proton","gamma"]
max_runs=[20,40,40,40,40,40,41]
lista=[]
for j,elem in enumerate(elementos):
carpeta=f"{npy_save_dir}/extract_{elem}"
lista.append(dif_dt_txt(carpeta,faltantes=True,max_val=max_runs[j]))#,ending=(".dt.gz","txt.gz")))
lista=dict(zip(elementos,lista))
# + colab={"base_uri": "https://localhost:8080/"} id="mxBwBHcTq9II" outputId="bcfcecf6-5f74-4240-f124-c925d88ef469"
#well, we can see that this one is not right
lista
# + colab={"base_uri": "https://localhost:8080/"} id="dwZ0gJXqH55p" outputId="d8d75729-92e9-4052-a650-8c4d930057c7"
elementos=["iron","nitrogen","silicon","electron","helium","proton","gamma"]
max_runs=[20,40,40,40,40,40,41]
lista=[]
for j,elem in enumerate(elementos):
carpeta=f"{npy_save_dir}/{elem}"
lista.append(dif_dt_txt(carpeta,faltantes=True,max_val=max_runs[j],ending=(".dt.gz","txt.gz")))
lista=dict(zip(elementos,lista))
lista
# + colab={"base_uri": "https://localhost:8080/"} id="4Sb4sfQjJ8la" outputId="7e4b9a23-ff0c-40ce-90d7-e766790cd218"
np.all(a[1][:2]==[4,17],axis=1)
# + [markdown] id="-XEOqGpD_HLT"
# # Now process them to obtain the npy images and the energy labels
# + id="4mWDb45d-IVV"
#SAVE THE FILES AS .NPY, but without normalizing or anything else
#we need some way of not saving .npy files larger than one gigabyte (as a limit)
#from tqdm.notebook import tqdm
def multiple_dt_2_npy(lista_archivos,npy_dir,limit_size=0.35,save_events_id=False,verbose=False):
#takes a list of paths and saves them, decompressed and unnormalized, into npy_dir
#ground_dir is the base directory for the folders or the files
#npy_dir is the directory where all the .npy are stored together, no frills, no subfolders
#folders=True means the .dt files are inside folders
#limit_size is the size limit in GB for the .npy files; the default is 350 MB, i.e. 0.35 GB
limit_size=limit_size*1e9 # convert GB to bytes
npy_dir_aux=npy_dir
num_pix_x=0
num_pix_y=0
files_names=lista_archivos
verbose_list=[]
for j in range(len(files_names)):
contador_nombre=0
dt_list=[]
nombre_archivo=re.findall("([a-zA-Z]*_tel_[0-9]*_run_\d\d).dt",files_names[j])[0]
aux_df=pd.read_csv(files_names[j],sep=' ',names=["1","2","3","4","5","6"],engine="python")
#now process it and save it into an npy
value_auf=aux_df[['1','3','4','5']].copy()
del aux_df
#we have to group the values
value_auf.loc[value_auf["5"]<0,"5"]=0
#max_aux=np.amax(value_auf["5"])
#value_auf["5"]=value_auf["5"]/max_aux
x_minimo=min(value_auf['3'])
y_minimo=min(value_auf['4'])
events=value_auf["1"].unique()
num_pix_x_aux=value_auf["3"].unique().size
num_pix_y_aux=value_auf["4"].unique().size
if (num_pix_x != num_pix_x_aux) or (num_pix_y != num_pix_y_aux): #we must be able to adapt depending on which telescope we are looking at
num_pix_x=num_pix_x_aux
num_pix_y=num_pix_y_aux
if verbose:
#print(num_pix_x,num_pix_y)
verbose_list.append((num_pix_x,num_pix_y))
x_minimo=min(value_auf['3'])
y_minimo=min(value_auf['4'])
##!!! this can cause problems if the first event happens to be missing data or similar...
auxiliar=value_auf.loc[value_auf["1"]==events[0]][["3","4","5"]].to_numpy()
#now that we have the pixel data we can compute how much each pixel spans
size_pix_x=np.ceil((max(auxiliar[:,0])-min(auxiliar[:,0]))/(np.unique(auxiliar[:,0]).size-1))
size_pix_y=np.ceil((max(auxiliar[:,1])-min(auxiliar[:,1]))/(np.unique(auxiliar[:,1]).size-1))
del auxiliar
if verbose:
#print(nombre_archivo,end="\n")
verbose_list.append(nombre_archivo)
value_auf.loc[:,'3']=value_auf['3'].apply(lambda x: round((x-x_minimo)/size_pix_x))
value_auf.loc[:,'4']=value_auf['4'].apply(lambda x: round((x-y_minimo)/size_pix_y))
#event_aux=value_auf["1"].unique()
for k in range(np.shape(events)[0]):
#each event has to be placed into an image with its values
array_aux=value_auf.loc[value_auf["1"]==events[k]][["3","4","5"]]
#we place the values into a pre-allocated matrix and save that matrix
#these numbers come from the maximum and minimum pixel values; we simply shift everything
matrix_aux=np.zeros((num_pix_x,num_pix_y)) #used to be 60-5=55 and 131-38
matrix_aux[array_aux["3"].to_numpy(),array_aux["4"].to_numpy()]=array_aux["5"].to_numpy()
dt_list.append(matrix_aux)
if limit_size!=0:
if (np.array(dt_list).nbytes>limit_size):
if contador_nombre==0:
name_npy=f"{npy_dir_aux}/npy_sin_normal_{nombre_archivo}_{contador_nombre}.npy"
np.save(name_npy,np.array(dt_list))
del dt_list
dt_list=[]
contador_nombre+=1
name_npy=f"{npy_dir_aux}/npy_sin_normal_{nombre_archivo}_{contador_nombre}.npy"
np.save(name_npy,np.array(dt_list))
del dt_list
if save_events_id:
name_npy_events=f"{npy_dir_aux}/id_eventos_npy_sin_normal_{nombre_archivo}.npy"
np.save(name_npy_events,np.array(events))
if verbose:
return verbose_list
# + id="hhSywjfh23jH"
def lista_dt(dt_dir):
return sorted(glob.glob(f"{dt_dir}/*.dt"))
def lista_txt(txt_dir):
return sorted(glob.glob(f"{txt_dir}/*.txt"))
# + colab={"background_save": true} id="cNU61AxNos-H"
#we pass it a list of the files to convert and tell it where to put them
elementos=["gamma"]
#we go into each extract folder,
#convert every .dt into an npy inside an npy_<element> folder in npy_data
#and that's it
for i,j in enumerate(elementos):
#load the file names we need
fold_name=f"{npy_save_dir}/extract_{j}"
names_files=lista_dt(fold_name)
#long=len(names_files)//2
long2=len(names_files)-len(names_files)//3
names_files=names_files[long2:]
#create the destination folder
carpeta_destino_name=f"{npy_data}/npy_{j}"
#os.mkdir(carpeta_destino_name)
#now we just pass the function the list and the destination
out=multiple_dt_2_npy(names_files,carpeta_destino_name,save_events_id=True,verbose=True)
with open(f"{npy_data}/verbose_npy_{j}_parte3.txt","w") as file_aux:
file_aux.write(str(out)) #we could add .replace("[","").replace("]","") to strip the brackets
file_aux.close()
del out
# + id="0J250Kjx6SyQ"
#VARIANT!!! IF <NAME>, WE KEEP AN AUXILIARY LIST OF THE FILES ALREADY
#SAVED, AND ON RESTART THE LIST IS READ AND THOSE ALREADY PROCESSED ARE SKIPPED
#we pass it a list of the files to convert and tell it where to put them
elementos=["gamma"]
#we go into each extract folder,
#convert every .dt into an npy inside an npy_<element> folder in npy_data
#and that's it
for i,j in enumerate(elementos):
#load the file names we need
fold_name=f"{npy_save_dir}/extract_{j}"
names_files=lista_dt(fold_name)
#create the destination folder
carpeta_destino_name=f"{npy_data}/npy_{j}"
if not os.path.isdir(carpeta_destino_name):
os.mkdir(carpeta_destino_name)
files_done=[]
else:
files_done=list(np.load(f"{carpeta_destino_name}/files_done_{j}.npy"))
#now we just pass the function the list and the destination
for k in names_files:
if k not in files_done:
multiple_dt_2_npy([k],carpeta_destino_name,save_events_id=True,verbose=False)
files_done.append(k)
np.save(f"{carpeta_destino_name}/files_done_{j}.npy",files_done)
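# The checkpoint list above is persisted with np.save/np.load. One subtlety:
# np.save appends '.npy' when the target name lacks it, so the name passed
# to the later np.load must match what np.save actually wrote. A quick demo
# with throwaway file names:

```python
# Show that np.save('done_demo.py', ...) really writes done_demo.py.npy.
import os
import numpy as np

np.save('done_demo.py', ['a.dt', 'b.dt'])          # writes done_demo.py.npy
exists_plain = os.path.exists('done_demo.py')      # False
exists_npy = os.path.exists('done_demo.py.npy')    # True
loaded = list(np.load('done_demo.py.npy'))
print(exists_plain, exists_npy, loaded)
os.remove('done_demo.py.npy')
```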
# + [markdown] id="UC3xdDt85kxA"
# ## Problem with telescopes 58, 59...
# + id="AjTnjCsbq5o3"
a1=np.load("/content/drive/MyDrive/prediccion_datos_muchos_telescopios/datos_muchos_tels_seleccion_6_03_21/npy_data/npy_iron/npy_sin_normal_iron_tel_59_run_01_0.npy")
# + colab={"base_uri": "https://localhost:8080/"} id="43HiibN48EIV" outputId="87af90ca-49b0-407e-848a-d7a478997bd1"
a1.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 493} id="z43ol5zj8Ftb" outputId="72f5ff0f-8381-48d3-f624-8a6997ffe1d6"
plt.figure(figsize=(13,13))
plt.imshow(a1[76])
# + id="h7QZzrBD5rfZ"
#let's directly decompress a dt from telescope 61
base_dir="/content/drive/MyDrive"
run_tel_61="/content/drive/MyDrive/prediccion_datos_muchos_telescopios/datos_muchos_tels_seleccion_6_03_21/extract_iron/iron_tel_61_run_07.dt"
# + colab={"base_uri": "https://localhost:8080/"} id="j35GjN5-5rb8" outputId="5caadfcd-0818-45a5-c813-51bc7bea6e86"
multiple_dt_2_npy([run_tel_61],base_dir)
# + colab={"base_uri": "https://localhost:8080/", "height": 493} id="xmv8tp7f5rZH" outputId="cc874257-35b2-41ee-b11e-f767adb9090d"
dat_61="/content/drive/MyDrive/npy_sin_normal_iron_tel_61_run_07_0.npy"
a1=np.load(dat_61)
plt.figure(figsize=(13,13))
plt.imshow(a1[1])
# + [markdown] id="93g4uJ0j64IW"
# **SO IT TURNS OUT THAT 58, 59, 60 AND 61 ARE LARGE-SIZE TELESCOPES, WHICH IS WRONG, BUT OH WELL...**
# + [markdown] id="aWu6Cc1R5fgS"
# ## Check that every .dt exists as a .npy
# + id="H5vFquP99PGl"
# Now check that exactly the same .dt files exist as .npy files.
def dif_dt_txt(dirs,faltantes=False,max_val=None,ending=(".dt",".txt")):
    # Checks whether the same files exist for both extensions, which ones are
    # missing, and how far the runs go.
    # Returns a dict mapping each telescope to its runs.
    # If faltantes=True and a max_val is given, returns the missing runs per telescope instead.
    if isinstance(dirs,list) and (len(dirs)==2):
        os.chdir(dirs[0])
        file_dt=glob.glob(f"*{ending[0]}")
        os.chdir(dirs[1])
        file_txt=glob.glob(f"npy_*{ending[1]}")
    elif isinstance(dirs,list) and (len(dirs)==1):
        os.chdir(dirs[0])
        file_dt=glob.glob(f"*{ending[0]}")
        file_txt=glob.glob(f"*{ending[1]}")
    elif not isinstance(dirs,list):
        os.chdir(dirs)
        file_dt=glob.glob(f"*{ending[0]}")
        file_txt=glob.glob(f"*{ending[1]}")
    else:
        print("ERROR WITH DIRS")
        return None
    # first extract the key information: the telescope and run numbers
    tel_run_dt=np.array([np.array([re.findall(r"tel_([0-9]*)_",i)[0],re.findall(r"run_([0-9]*)\.",i)[0]],dtype="int") for i in file_dt])
    tel_run_txt=np.array([np.array([re.findall(r"tel_([0-9]*)_",i)[0],re.findall(r"run_([0-9]*)\.",i)[0]],dtype="int") for i in file_txt])
    # once we have the info, check that the two sets match
    # dimensions first
    if tel_run_dt.shape[0]!=tel_run_txt.shape[0]:
        print("Dimension mismatch: the folders do not hold the same number of files")
        if tel_run_dt.shape[0] > tel_run_txt.shape[0]:
            for i in tel_run_dt:
                if not np.all(tel_run_txt==i,axis=-1).any():
                    print(f"tel_{i[0]}_run_{i[1]}{ending[0]} has no matching {ending[1]}.")
        else:
            for i in tel_run_txt:
                if not np.all(tel_run_dt==i,axis=-1).any():
                    print(f"tel_{i[0]}_run_{i[1]}{ending[1]} has no matching {ending[0]}.")
        return None
    # same length; are the two lists actually equal?
    salir=False
    for i in tel_run_dt:
        if not np.all(tel_run_txt==i,axis=-1).any():
            print(f"{i} has no matching {ending[1]}")
            salir=True
    for i in tel_run_txt:
        if not np.all(tel_run_dt==i,axis=-1).any():
            print(f"{i} has no matching {ending[0]}")
            salir=True
    if salir:
        return None
    else:
        print(f"For {os.path.basename(dirs[1])} every {ending[1]} has a {ending[0]} and vice versa, all good.")
    # dimensions are fine, so list the telescopes present and the runs for each telescope
    telescopios=sorted(np.unique(tel_run_dt[:,0]))
    runs=[sorted(tel_run_dt[tel_run_dt[:,0]==i][:,1]) for i in telescopios]
    # finally build a dict mapping each telescope to the runs it groups
    if not faltantes:
        return dict(zip(telescopios,runs))
    else:
        if max_val is None:
            print("pass a maximum value for the runs")
            return None
        else:
            runs_reales=np.arange(1,max_val+1)
            run_faltantes=[]
            for i in range(len(runs)):
                faltan=[]
                for j in runs_reales:
                    if j not in runs[i]:
                        faltan.append(j)
                run_faltantes.append(faltan)
            diccionario=dict(zip(telescopios,run_faltantes))
            for i in telescopios:
                if diccionario[i]==[]:
                    diccionario.pop(i)
            return diccionario
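# The function relies on pulling telescope and run numbers out of the file names
# with regular expressions. A small stand-alone sketch of that parsing step (the
# file-name pattern is taken from the examples above):

```python
import re

def parse_tel_run(filename):
    # pull telescope and run numbers out of names like "iron_tel_59_run_01.dt"
    tel = int(re.findall(r"tel_([0-9]+)_", filename)[0])
    run = int(re.findall(r"run_([0-9]+)\.", filename)[0])
    return tel, run
```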
# + id="24trD6AixfrJ" colab={"base_uri": "https://localhost:8080/"} outputId="0991d622-62e7-445f-b904-d0f1991187d1"
# run the check (here only for gamma; the other elements are commented out)
elementos=["gamma"]#,"nitrogen","silicon","electron","helium","proton","gamma"]
max_runs=41#,40,40,40,40,40,41]
lista=[]
for j,elem in enumerate(elementos):
    carpeta1=f"{npy_save_dir}/extract_{elem}"
    carpeta2=f"{npy_data}/npy_{elem}"
    lista.append(dif_dt_txt([carpeta1,carpeta2],faltantes=True,max_val=max_runs,ending=(".dt",".npy")))
lista=dict(zip(elementos,lista))
lista
# + colab={"base_uri": "https://localhost:8080/"} id="ZqRgahUc4W2u" outputId="d2708c1f-b172-4c5f-9296-2922369b9730"
lista
# + [markdown] id="_EZdPDinTii7"
# # Element extraction for telescope 1
# + id="YljUU2o_FDNL"
base_dir_dt="/content/drive/MyDrive/analisis_datos_tfg_inicial_hasta_4_02_21"
dest_dir_npy="/content/drive/MyDrive/prediccion_datos_muchos_telescopios/datos_muchos_tels_seleccion_6_03_21/npy_data/npy_elementos_tel_1"
# + id="zspxWSzzUm5o"
# we have to pass the function a list of the files to unpack and tell it where
elementos=["nitrogen","gamma","silicon","electron","helium","proton","iron"]
# go into each extract folder,
# convert every .dt into a .npy inside an npy_<element> folder under npy_data,
# and that's it.
for i,j in enumerate(elementos):
    # load the file names we need
    if j=="gamma":
        fold_name=f"{base_dir_dt}/gamma/gamma_dt"
    else:
        fold_name=f"{base_dir_dt}/{j}"
    names_files=lista_dt(fold_name)
    #long=len(names_files)//2
    #long2=len(names_files)-len(names_files)//3
    #names_files=names_files[long2:]
    # create the destination folder
    carpeta_destino_name=f"{dest_dir_npy}/npy_tel_1_{j}"
    os.mkdir(carpeta_destino_name)
    # now just hand the function the list and the destination
    out=multiple_dt_2_npy(names_files,carpeta_destino_name,save_events_id=True,verbose=True)
    with open(f"{dest_dir_npy}/verbose_npy_{j}_tel_1.txt","w") as file_aux:
        file_aux.write(str(out)) # could add .replace("[","").replace("]","") to strip the brackets
    del out
# + id="LGGWf2kqWbVb"
# + [markdown] id="oFlFyCuDUjNw"
# # Noise-free data according to the simulation.
#
# The data's sixth column tells us which values are real and which are due to noise. Let's see whether this can be used to build an autoencoder that simplifies everything.
# + id="i3TcfBj-U5yJ"
def simple_load_dt_2_npy(file,verbose=False,save=False,save_events_id=False,npy_dir_aux=None,truth_sim=True):
    num_pix_x=0
    num_pix_y=0
    verbose_list=[]
    dt_list=[]
    nombre_archivo=re.findall(r"([a-zA-Z]*_tel_[0-9]*_run_\d\d)\.dt",file)[0]
    aux_df=pd.read_csv(file,sep=' ',names=["1","2","3","4","5","6"],engine="python")
    # process it and save it as a .npy
    if truth_sim:
        value_auf=aux_df.loc[aux_df.loc[:,"6"]==1][['1','3','4','5']].copy()
    else:
        value_auf=aux_df[['1','3','4','5']].copy()
    del aux_df
    # we have to bin the values
    value_auf.loc[value_auf["5"]<0,"5"]=0
    #max_aux=np.amax(value_auf["5"])
    #value_auf["5"]=value_auf["5"]/max_aux
    x_minimo=min(value_auf['3'])
    y_minimo=min(value_auf['4'])
    events=value_auf["1"].unique()
    num_pix_x_aux=value_auf["3"].unique().size
    num_pix_y_aux=value_auf["4"].unique().size
    if (num_pix_x != num_pix_x_aux) or (num_pix_y != num_pix_y_aux): # we must be able to adapt when switching between telescopes
        num_pix_x=num_pix_x_aux
        num_pix_y=num_pix_y_aux
        if verbose:
            #print(num_pix_x,num_pix_y)
            verbose_list.append((num_pix_x,num_pix_y))
        x_minimo=min(value_auf['3'])
        y_minimo=min(value_auf['4'])
    # !!! this can break if the first event happens to be missing data
    auxiliar=value_auf.loc[value_auf["1"]==events[0]][["3","4","5"]].to_numpy()
    # with the pixel data we can derive the size each pixel occupies
    size_pix_x=np.ceil((max(auxiliar[:,0])-min(auxiliar[:,0]))/(np.unique(auxiliar[:,0]).size-1))
    size_pix_y=np.ceil((max(auxiliar[:,1])-min(auxiliar[:,1]))/(np.unique(auxiliar[:,1]).size-1))
    del auxiliar
    if verbose:
        #print(nombre_archivo,end="\n")
        verbose_list.append(nombre_archivo)
    value_auf.loc[:,'3']=value_auf['3'].apply(lambda x: round((x-x_minimo)/size_pix_x))
    value_auf.loc[:,'4']=value_auf['4'].apply(lambda x: round((x-y_minimo)/size_pix_y))
    #event_aux=value_auf["1"].unique()
    for k in range(np.shape(events)[0]):
        # each event has to be placed in an image with its values
        array_aux=value_auf.loc[value_auf["1"]==events[k]][["3","4","5"]]
        # place the values in a pre-allocated matrix and store that matrix;
        # the offsets come from the min/max pixel values, we simply shift everything
        matrix_aux=np.zeros((num_pix_x,num_pix_y)) # used to be 60-5=55 and 131-38
        matrix_aux[array_aux["3"].to_numpy(),array_aux["4"].to_numpy()]=array_aux["5"].to_numpy()
        dt_list.append(matrix_aux)
    if save:
        # single file here, so the chunk counter is always 0
        name_npy=f"{npy_dir_aux}/npy_sin_normal_{nombre_archivo}_0.npy"
        np.save(name_npy,np.array(dt_list))
    if save_events_id:
        name_npy_events=f"{npy_dir_aux}/id_eventos_npy_sin_normal_{nombre_archivo}.npy"
        np.save(name_npy_events,np.array(events))
    if verbose:
        return verbose_list, np.array(dt_list)
    else:
        return np.array(dt_list)
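# The core of the loader is mapping scattered (x, y, value) points onto a dense
# pixel grid by shifting to the minimum coordinate and dividing by the pixel pitch.
# A pure-Python sketch of that mapping (toy coordinates, assumed square pixels):

```python
def points_to_grid(points, pix_x, pix_y):
    # points: iterable of (x, y, value) in detector coordinates;
    # shift by the minimum and divide by the pixel pitch to get indices
    xmin = min(p[0] for p in points)
    ymin = min(p[1] for p in points)
    nx = max(round((x - xmin) / pix_x) for x, _, _ in points) + 1
    ny = max(round((y - ymin) / pix_y) for _, y, _ in points) + 1
    grid = [[0.0] * ny for _ in range(nx)]
    for x, y, v in points:
        grid[round((x - xmin) / pix_x)][round((y - ymin) / pix_y)] = v
    return grid

pts = [(0.0, 0.0, 1.0), (10.0, 0.0, 2.0), (0.0, 10.0, 3.0)]
grid = points_to_grid(pts, 10.0, 10.0)
```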
# + id="NqVJkNqLaGeW"
# Function that smooths everything so the network can digest the data better.
def fill_holes(npy):
    if type(npy)!=np.ndarray:
        print("Error input")
        return
    npy_aux=npy.copy()
    # fill every interior zero with the mean of its four neighbours
    indices=np.where(npy[1:-1,1:-1]==0)
    indices_1=indices[1]+1
    indices_0=indices[0]+1
    for i in range(indices_1.shape[0]):
        media=(npy[indices_0[i]-1,indices_1[i]]+npy[indices_0[i],indices_1[i]-1]+npy[indices_0[i]+1,indices_1[i]]+npy[indices_0[i],indices_1[i]+1])/4
        npy_aux[indices_0[i],indices_1[i]]=media
    return npy_aux
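# A tiny stand-alone demo of the neighbour-mean idea behind fill_holes,
# re-implemented on plain nested lists so it runs without NumPy:

```python
def fill_holes_simple(grid):
    # fill every interior zero with the mean of its four neighbours,
    # reading neighbours from the original grid (not the filled copy)
    out = [row[:] for row in grid]
    for i in range(1, len(grid) - 1):
        for j in range(1, len(grid[0]) - 1):
            if grid[i][j] == 0:
                out[i][j] = (grid[i-1][j] + grid[i+1][j]
                             + grid[i][j-1] + grid[i][j+1]) / 4
    return out

g = [[0, 1, 0],
     [2, 0, 4],
     [0, 3, 0]]
filled = fill_holes_simple(g)
```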
# + id="Q5vaXZ01Xy7b"
ruta_aux="/content/drive/MyDrive/prediccion_datos_muchos_telescopios/datos_muchos_tels_seleccion_6_03_21/extract_helium/helium_tel_11_run_18.dt"
a=simple_load_dt_2_npy(ruta_aux)
b=simple_load_dt_2_npy(ruta_aux,truth_sim=False)
# + id="mI-Qp6irYNGD"
z=np.array(list(map(fill_holes,a)))
# + colab={"base_uri": "https://localhost:8080/", "height": 162} id="bTWppCUCqAkB" outputId="eef78ebe-0142-4669-c335-13b1d23da3fd"
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="vVxbWozbYtCO" outputId="2c2df507-b18f-4aef-874d-71d3d5f29ce0"
for i in range(5):
    plt.figure(figsize=(13,13))
    plt.subplot(1,2,1)
    plt.imshow(fill_holes(a[i]))
    plt.subplot(1,2,2)
    plt.imshow(a[i])
| Preprocesado_datos_multitelescop/Nuevos_archivos_muchos_telescopios.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: deepTFM
# language: python
# name: deeptfm
# ---
# +
import torch
import torchvision
import shutil
batch_size= 32
mean = -1
std=1
train_loader = torch.utils.data.DataLoader(
torchvision.datasets.MNIST('../datasets/mnist', train=True, download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.Resize([32, 32]),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(mean,), (std,))
])),
batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
torchvision.datasets.MNIST('../datasets/mnist', train=False, download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.Resize([32, 32]),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(mean,), (std,))
])),
batch_size=16, shuffle=False, drop_last= True)
for x, y in train_loader:
    break  # grab a single batch to inspect the value range
vmin= x.min().item()
vmax= x.max().item()
print('range : ',vmin, vmax)
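# With mean = -1 and std = 1, Normalize computes (x - mean) / std = x + 1, so the
# [0, 1] output of ToTensor is shifted to [1, 2]. A quick arithmetic check:

```python
mean, std = -1, 1  # the values used for the loaders above

def normalize(x):
    # torchvision's Normalize applies (x - mean) / std per channel
    return (x - mean) / std

lo, hi = normalize(0.0), normalize(1.0)  # ToTensor outputs lie in [0, 1]
```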
# +
# %load_ext autoreload
# %autoreload 2
import torch
import shutil
import numpy as np
from torch import nn
from modules.kernels import get_gaussian
from modules.models.decoder import simple_generator
from modules.models.forward_model import forward_modelA
from modules.models.forward_H import modelH
from modules.train_utils_v2 import train
from modules.custom_activations import inc_m
import matplotlib.pyplot as plt
from modules.models.preprocess_H_weights import ifft, fft
from modules.noise import poisson_noise
from modules.models.classifiers import simple_mnist_classifier
device='cuda:0' if torch.cuda.is_available() else 'cpu'
classifier = simple_mnist_classifier(32).to(device)
classifier.load_state_dict(torch.load('saved_models/mnist_classifier.pth', map_location=device)['model_state_dict'])
# +
img_size= 32
m_inc_epoc= 1
def inc_1_after_100_interval_10(m, epoch):
    # note: despite the name, the threshold used here is epoch 150
    if epoch > 150 and epoch % 10 == 0:
        m = inc_m(m, epoch, 1)
    return m
sPSF= torch.tensor(get_gaussian(side_len=5, s=1)).float().to(device)
exPSF= torch.tensor(get_gaussian(side_len=5, s=1)).float().to(device)
criterion= nn.L1Loss().to(device)
train_model_iter, train_H_iter= 1, 1
epochs=200
show_results_epoch=10
initialization_bias= 10
# +
from modules.models.forward_H import modelH
from modules.train_utils_v2 import train
from modules.custom_activations import sigmoid_custom2 as H_activation
T=5
H_complex_init =True
H_weight_preprocess= ifft #torch.abs #torch.abs #None
m_inc_proc =None #inc_m
decoder= simple_generator(T, img_size)
H_generator = modelH(T, 32, H_weight_preprocess, H_complex_init, device, initialization_bias=10, activation = H_activation)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    decoder = nn.DataParallel(decoder)
    #H_generator= nn.DataParallel(H_generator)
decoder.to(device)
H_generator.to(device)
opt_model= torch.optim.Adam(decoder.parameters(), lr= 0.001)
opt_H= torch.optim.Adam(H_generator.parameters(), lr= 0.01)
noise=True # no exploding gradients: the image range was shifted to [1, 2], which contains no zeros (see forward_model.py)
train_model_iter, train_H_iter= 1, 1
train(decoder, forward_modelA, H_generator, sPSF, exPSF, criterion, [opt_model, opt_H], train_loader, test_loader, device, T, 10, 1, train_model_iter, train_H_iter, m_inc_epoc, m_inc_proc, './1', noise, classifier, [mean, std])
| aim2/support_notebooks/old/experiments_multiGPUS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy as sp
import seaborn as sns
import statsmodels.api as sm
import statsmodels.tsa.api as smt
import scipy.stats as stats
import warnings
import pylab
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from statsmodels.stats.outliers_influence import variance_inflation_factor
import statsmodels.api as sm
from statsmodels.stats.diagnostic import linear_harvey_collier
warnings.filterwarnings("ignore")
# %matplotlib inline
# +
pdInputData = pd.read_excel("ProjectInputData.xlsx")
X = pdInputData[['gold', "oil", "JPM"]]
y = pdInputData['Close_ETF']
# Fit a multilinear model y = b1*x1 + b2*x2 + b3*x3 manually (no intercept column)
def get_multilinear_best_fit_line(X, Y):
    # Solve the normal equations (X^T X) a = X^T Y with linear algebra
    a = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, Y))
    predictedY = np.dot(X, a)
    # calculate the r-squared
    SSres = Y - predictedY
    SStot = Y - Y.mean()
    rSquared = 1 - (SSres.dot(SSres) / SStot.dot(SStot))
    print("the r-squared is: ", rSquared)
    print("the coefficients (values of a) for the predictors ('gold', 'oil', 'JPM') are: ", a)
    return predictedY, SSres
predictedY, SSres = get_multilinear_best_fit_line(X, y)
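# The same least-squares idea in one variable, as a pure-Python sketch (no
# intercept, matching the manual fit above):

```python
def fit_through_origin(xs, ys):
    # least-squares slope for y ≈ a*x (no intercept) and the resulting R²
    a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    pred = [a * x for x in xs]
    ybar = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, pred))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    return a, 1 - ss_res / ss_tot

slope, r2 = fit_through_origin([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```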
# Plot Prediction vs. Residual to Check Linearity
plt.figure(figsize=(15,7))
sns.regplot(x=predictedY,y=SSres)
plt.xlabel("Prediction Value (y_hat)", fontsize = 20)
plt.ylabel("Residual Value (y - y_hat)", fontsize = 20)
plt.title("Scatter plot: Residual Value Vs Prediction Value (y hat)", fontsize = 20)
plt.show()
# +
# Split the data into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.20,
random_state=42)
X_with_constant = sm.add_constant(X_train)
model = sm.OLS(y_train, X_with_constant)
results = model.fit()
results.params
# -
vif = [variance_inflation_factor(X_train.values, i) for i in range(X_train.shape[1])]
pd.DataFrame({'vif': vif[0:]}, index=X_train.columns).T
X = pdInputData[['oil' , 'gold']]
y = pdInputData['Close_ETF']
X_with_constant = sm.add_constant(X)
model = sm.OLS(y, X_with_constant)
results = model.fit()
results.params
results.summary()
sns.pairplot(pdInputData[['gold', "oil", "Close_ETF"]], kind ='reg')
sns.pairplot(pdInputData , x_vars=['gold', "oil"], y_vars=["Close_ETF"],
height=5, aspect=.8, kind="reg")
fig = plt.figure(1)
ax = fig.add_subplot(111, projection='3d')
#ax.scatter(X[:, 0], X[:, 1], Y)
ax.scatter(pdInputData['gold'], pdInputData["oil"], pdInputData['Close_ETF'])
ax.set_xlabel('Gold')
ax.set_ylabel('Oil')
ax.set_zlabel('Close_ETF')
# +
y_pred = results.predict()
# multicolinearity/independence
vif = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
pd.DataFrame({'vif': vif[0:]}, index=X.columns).T
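# The variance inflation factor for predictor i is 1 / (1 - R²_i), where R²_i
# comes from regressing predictor i on the remaining predictors. A one-line
# sketch of that formula:

```python
def vif_from_r2(r_squared):
    # VIF_i = 1 / (1 - R²_i); R²_i regresses predictor i on the others
    return 1.0 / (1.0 - r_squared)
```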
# +
# normality of residuals
plt.figure(figsize=(15,7))
serResidual = results.resid
ax = sns.distplot(serResidual)
plt.axvline(np.mean(serResidual), color="b", linestyle="dashed", linewidth=5)
_, max_ = plt.ylim()
plt.text( serResidual.mean() + serResidual.mean() / 10, max_ - max_ / 10, "Mean: {:.2f}".format(serResidual.mean()),
)
acf = smt.graphics.plot_acf(serResidual, lags=100, alpha=0.01)
fig, ax = plt.subplots(figsize=(20,10))
_, (__, ___, r) = sp.stats.probplot(serResidual, plot=ax, fit=True)
# -
np.mean(serResidual)
# +
# Residuals vs Fitted
model_fitted_y = results.predict()
model_residuals = results.resid
model_norm_residuals = results.get_influence().resid_studentized_internal
model_norm_residuals_abs_sqrt = np.sqrt(np.abs(model_norm_residuals))
model_abs_resid = np.abs(model_residuals)
model_leverage = results.get_influence().hat_matrix_diag
model_cooks = results.get_influence().cooks_distance[0]
plot_lm_1 = plt.figure(figsize=(15,7))
plot_lm_1.axes[0] = sns.residplot(model_fitted_y, pdInputData.columns[-1], \
data=pdInputData,
lowess=True,
scatter_kws={'alpha': 0.5},
line_kws={'color': 'red', 'lw': 1, 'alpha': 0.8})
plot_lm_1.axes[0].set_title('Residuals vs Fitted', size = 20)
plot_lm_1.axes[0].set_xlabel('Fitted values', size = 20)
plot_lm_1.axes[0].set_ylabel('Residuals', size = 20)
plot_lm_3 = plt.figure(figsize=(15,7))
plt.scatter(model_fitted_y, model_norm_residuals_abs_sqrt, alpha=0.5);
sns.regplot(model_fitted_y, model_norm_residuals_abs_sqrt,
scatter=False,
ci=False,
lowess=True,
line_kws={'color': 'red', 'lw': 1, 'alpha': 0.8});
plot_lm_3.axes[0].set_title('Scale-Location', size = 20)
plot_lm_3.axes[0].set_xlabel('Fitted values', size = 20)
plot_lm_3.axes[0].set_ylabel(r'$\sqrt{|Standardized Residuals|}$', size = 20);
# annotations
abs_sq_norm_resid = np.flip(np.argsort(model_norm_residuals_abs_sqrt), 0)
#abs_norm_resid_top_3 = abs_norm_resid[:3]
abs_sq_norm_resid_top_3 = abs_sq_norm_resid[:3]
for i in abs_sq_norm_resid_top_3:
    plot_lm_3.axes[0].annotate(i,
                               xy=(model_fitted_y[i],
                                   model_norm_residuals_abs_sqrt[i]));
plot_lm_4 = plt.figure(figsize=(15,7))
plt.scatter(model_leverage, model_norm_residuals, alpha=0.5)
sns.regplot(model_leverage, model_norm_residuals,
scatter=False,
ci=False,
lowess=True,
line_kws={'color': 'red', 'lw': 1, 'alpha': 0.8})
plot_lm_4.axes[0].set_xlim(0, max(model_leverage)+0.01)
plot_lm_4.axes[0].set_ylim(-3, 5)
plot_lm_4.axes[0].set_title('Residuals vs Leverage', size = 20)
plot_lm_4.axes[0].set_xlabel('Leverage', size = 20)
plot_lm_4.axes[0].set_ylabel('Standardized Residuals', size = 20)
# annotations
leverage_top_3 = np.flip(np.argsort(model_cooks), 0)[:3]
for i in leverage_top_3:
    plot_lm_4.axes[0].annotate(i,
                               xy=(model_leverage[i],
                                   model_norm_residuals[i]))
# -
| Part 10 Final.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Part 2: Data Visualization with Seaborn
import seaborn as sns
# %matplotlib inline
# Loading a built-in dataset from seaborn.
flights = sns.load_dataset('flights')
flights.head()
flights.corr()
# Rearranges the data to have months as rows, and then years as columns.
flights.pivot_table(values='passengers',index='month',columns='year')
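# The pivot operation above reshapes (month, year, passengers) records into a
# month-by-year table. A pure-Python sketch of the same reshaping (toy values,
# not the real dataset):

```python
def pivot(records):
    # (month, year, value) records -> {month: {year: value}}
    table = {}
    for month, year, value in records:
        table.setdefault(month, {})[year] = value
    return table

t = pivot([("Jan", 1949, 112), ("Jan", 1950, 115), ("Feb", 1949, 118)])
```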
# Graphing with Seaborn!
pvflights = flights.pivot_table(values='passengers',index='month',columns='year')
sns.heatmap(pvflights)
# Changing the cmap, linecolor, and linewidth for visual clarity.
sns.heatmap(pvflights,cmap='magma',linecolor='white',linewidths=1)
# From the graph above, we can notice that the months of June and July had a similar number of flights over the years.
sns.clustermap(pvflights)
# Another dataset within the Seaborn statistical data visualization package library in Python
iris = sns.load_dataset('iris')
iris.head()
import matplotlib.pyplot as plt
g = sns.PairGrid(iris)
g.map(plt.scatter)
# Mapping the lower, diagonal, and upper graphs to represent different functions.
g = sns.PairGrid(iris)
g.map_diag(plt.hist)
g.map_upper(plt.scatter)
g.map_lower(sns.kdeplot)
sns.pairplot(iris)
# Using FacetGrid to compare features across a grid of plots.
# Analyzing another built-in dataset to better illustrate the features of FacetGrid!
tips = sns.load_dataset('tips')
g = sns.FacetGrid(tips, col="time", row="smoker")
g = g.map(plt.hist, "total_bill")
g = sns.JointGrid(x="total_bill", y="tip", data=tips)
g = g.plot(sns.regplot, sns.distplot)
| data-visualization/SeabornDataVisualization2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a name="top"></a>
# <div style="width:1000 px">
#
# <div style="float:right; width:98 px; height:98px;">
# <img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
# </div>
#
# <h1>Siphon Overview</h1>
# <h3>Unidata Python Workshop</h3>
#
# <div style="clear:both"></div>
# </div>
#
# <hr style="height:2px;">
#
# <div style="float:right; width:250 px"><img src="https://unidata.github.io/siphon/latest/_static/siphon_150x150.png" alt="TDS" style="height: 200px;"></div>
#
# ## Overview:
#
# * **Teaching:** 15 minutes
# * **Exercises:** 15 minutes
#
# ### Questions
# 1. What is a THREDDS Data Server (TDS)?
# 1. How can I use Siphon to access a TDS?
#
# ### Objectives
# 1. <a href="#threddsintro">Use siphon to access a THREDDS catalog</a>
# 1. <a href="#filtering">Find data within the catalog that we wish to access</a>
# 1. <a href="#dataaccess">Use siphon to perform remote data access</a>
# <a name="threddsintro"></a>
# ## 1. What is THREDDS?
#
# * Server for providing remote access to datasets
# * Variety of services for accessing data:
# - HTTP Download
# - Web Mapping/Coverage Service (WMS/WCS)
# - OPeNDAP
# - NetCDF Subset Service
# - CDMRemote
# * Provides a more uniform way to access different types/formats of data
# + [markdown] slideshow={"slide_type": "subslide"}
# ## THREDDS Demo
# http://thredds.ucar.edu
# -
# ### THREDDS Catalogs
# - XML descriptions of data and metadata
# - Access methods
# - Easily handled with `siphon.catalog.TDSCatalog`
from datetime import datetime, timedelta
from siphon.catalog import TDSCatalog
date = datetime.utcnow() - timedelta(days=1)
cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/nexrad/level3/'
'N0Q/DGX/{dt:%Y%m%d}/catalog.xml'.format(dt=date))
# <a href="#top">Top</a>
# <hr style="height:2px;">
# <a name="filtering"></a>
# ## 2. Filtering data
# We *could* manually figure out what dataset we're looking for and generate that name (or index). Siphon provides some helpers to simplify this process, provided the names of the dataset follow a pattern with the timestamp in the name:
from datetime import datetime, timedelta
request_time = date.replace(hour=18, minute=30)
ds = cat.datasets.filter_time_nearest(request_time)
ds
# We can also find the list of datasets within a time range:
datasets = cat.datasets.filter_time_range(request_time, request_time + timedelta(hours=1))
print(datasets)
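# `filter_time_range` boils down to keeping the timestamps that fall inside a
# half-open interval. A stdlib-only sketch of that filtering (synthetic timestamps):

```python
from datetime import datetime, timedelta

def filter_time_range(stamps, start, end):
    # keep timestamps with start <= t < end, like the catalog helper
    return [t for t in stamps if start <= t < end]

start = datetime(2023, 1, 1, 18, 30)
stamps = [start + timedelta(minutes=15 * k) for k in range(-2, 6)]
hits = filter_time_range(stamps, start, start + timedelta(hours=1))
```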
# <div class="alert alert-success">
# <b>EXERCISE</b>:
# <ul>
# <li>Starting from http://thredds.ucar.edu/thredds/catalog/satellite/SFC-T/SUPER-NATIONAL_1km/, find the composites for the previous day.</li>
# <li>Grab the URL and create a TDSCatalog instance.</li>
# <li>Using Siphon, find the data available in the catalog between 12Z and 18Z on the previous day.</li>
# </ul>
# </div>
# Your code goes here
# <button data-toggle="collapse" data-target="#sol1" class='btn btn-primary'>View Solution</button>
# <div id="sol1" class="collapse">
# <code><pre>
# date = datetime.utcnow() - timedelta(days=1)
# cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/satellite/SFC-T/SUPER-NATIONAL_1km/{dt:%Y%m%d}/catalog.xml'.format(dt=date))
# request_time = date.replace(hour=12, minute=0, second=0)
# datasets = cat.datasets.filter_time_range(request_time, request_time + timedelta(hours=6))
# print(datasets)
# </pre></code>
# </div>
# <a href="#top">Top</a>
# <hr style="height:2px;">
# <a name="dataaccess"></a>
# ## 3. Accessing data
# + [markdown] slideshow={"slide_type": "subslide"}
# Accessing catalogs is only part of the story; Siphon is much more useful if you're trying to access/download datasets.
#
# For instance, using our data that we just retrieved:
# -
ds = datasets[0]
# + [markdown] slideshow={"slide_type": "fragment"}
# We can ask Siphon to download the file locally:
# -
ds.download('data.nc4')
import os; os.listdir()
# + [markdown] slideshow={"slide_type": "subslide"}
# Or better yet, get a file-like object that lets us `read` from the file as if it were local:
# -
fobj = ds.remote_open()
data = fobj.read()
print(len(data))
# This is handy if you have Python code to read a particular format.
# + [markdown] slideshow={"slide_type": "subslide"}
# It's also possible to get access to the file through services that provide netCDF4-like access, but for the remote file. This access allows downloading information only for variables of interest, or for (index-based) subsets of that data:
# -
nc = ds.remote_access()
# By default this uses CDMRemote (if available), but it's also possible to ask for OPeNDAP (using netCDF4-python).
print(list(nc.variables))
# <a href="#top">Top</a>
# <hr style="height:2px;">
| notebooks/Siphon/Siphon Overview.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ProgramistaZpolski/reimagined-waffle/blob/master/Interactive_textgenrnn_Demo_w_GPU.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="H7LoMj4GA4n_" colab_type="text"
# # Interactive textgenrnn Demo w/ GPU
#
# by [<NAME>](http://minimaxir.com)
#
# *Last updated: December 2nd, 2018*
#
# Generate text using a pretrained neural network with a few lines of code, or easily train your own text-generating neural network of any size and complexity, **for free on a GPU using Collaboratory!**
#
# For more about textgenrnn, you can visit [this GitHub repository](https://github.com/minimaxir/textgenrnn).
#
#
# To get started:
#
# 1. Copy this notebook to your Google Drive to keep it and save your changes.
# 2. Make sure you're running the notebook in Google Chrome.
# 3. Run the cells below:
#
# + id="KBkpRgBCBS2_" colab_type="code" outputId="ed44586e-77d4-4a9b-f447-caab8c9403f4" colab={"base_uri": "https://localhost:8080/", "height": 35}
# !pip install -q textgenrnn
from google.colab import files
from textgenrnn import textgenrnn
from datetime import datetime
import os
# + [markdown] id="0wXB05bPDYxS" colab_type="text"
# Set the textgenrnn model configuration here: the default parameters here give good results for most workflows. (see the [demo notebook](https://github.com/minimaxir/textgenrnn/blob/master/docs/textgenrnn-demo.ipynb) for more information about these parameters)
#
# If you are using an input file where documents are line-delimited, make sure to set `line_delimited` to `True`.
# + id="P8wSlgXoDPCR" colab_type="code" colab={}
model_cfg = {
'word_level': False, # set to True if want to train a word-level model (requires more data and smaller max_length)
'rnn_size': 128, # number of LSTM cells of each layer (128/256 recommended)
'rnn_layers': 3, # number of LSTM layers (>=2 recommended)
'rnn_bidirectional': False, # consider text both forwards and backward, can give a training boost
'max_length': 30, # number of tokens to consider before predicting the next (20-40 for characters, 5-10 for words recommended)
'max_words': 10000, # maximum number of words to model; the rest will be ignored (word-level model only)
}
train_cfg = {
'line_delimited': False, # set to True if each text has its own line in the source file
'num_epochs': 20, # set higher to train the model for longer
'gen_epochs': 5, # generates sample text from model after given number of epochs
'train_size': 0.8, # proportion of input data to train on: setting < 1.0 limits model from learning perfectly
'dropout': 0.0, # ignore a random proportion of source tokens each epoch, allowing model to generalize better
'validation': False, # If train_size < 1.0, test on holdout dataset; will make overall training slower
'is_csv': False # set to True if file is a CSV exported from Excel/BigQuery/pandas
}
# + [markdown] id="BT__brhBCvJu" colab_type="text"
# In the Colaboratory Notebook sidebar on the left of the screen, select *Files*. From there you can upload files:
#
# 
#
# Upload **any text file** and update the file name in the cell below, then run the cell.
# + id="6OFnPCLADfll" colab_type="code" colab={}
file_name = "tinyshakespeare.txt"
model_name = 'colaboratory' # change to set file name of resulting trained models/texts
# + [markdown] id="LdpZQXknFNY3" colab_type="text"
# The next cell will start the actual training. And thanks to the power of Keras's CuDNN layers, training is super-fast when compared to CPU training on a local machine!
#
# Ideally, you want a training loss less than `1.0` in order for the model to create sensible text consistently.
# + id="aeXshJM-Cuaf" colab_type="code" outputId="ac86ffe4-729f-4555-df4a-fa0ced5e1670" colab={"base_uri": "https://localhost:8080/", "height": 8881}
textgen = textgenrnn(name=model_name)
train_function = textgen.train_from_file if train_cfg['line_delimited'] else textgen.train_from_largetext_file
train_function(
file_path=file_name,
new_model=True,
num_epochs=train_cfg['num_epochs'],
gen_epochs=train_cfg['gen_epochs'],
batch_size=1024,
train_size=train_cfg['train_size'],
dropout=train_cfg['dropout'],
validation=train_cfg['validation'],
is_csv=train_cfg['is_csv'],
rnn_layers=model_cfg['rnn_layers'],
rnn_size=model_cfg['rnn_size'],
rnn_bidirectional=model_cfg['rnn_bidirectional'],
max_length=model_cfg['max_length'],
dim_embeddings=100,
word_level=model_cfg['word_level'])
# + [markdown] id="RTa6zf3e_9gV" colab_type="text"
# You can download a large amount of generated text from your model with the cell below! Rerun the cell as many times as you want for even more text!
# + id="-fxL77nvAMAX" colab_type="code" colab={}
# this temperature schedule cycles between 1 very unexpected token, 1 unexpected token, 2 expected tokens, repeat.
# changing the temperature schedule can result in wildly different output!
temperature = [1.0, 0.5, 0.2, 0.2]
prefix = None # if you want each generated text to start with a given seed text
if train_cfg['line_delimited']:
    n = 1000
    max_gen_length = 60 if model_cfg['word_level'] else 300
else:
    n = 1
    max_gen_length = 2000 if model_cfg['word_level'] else 10000
timestring = datetime.now().strftime('%Y%m%d_%H%M%S')
gen_file = '{}_gentext_{}.txt'.format(model_name, timestring)
textgen.generate_to_file(gen_file,
temperature=temperature,
prefix=prefix,
n=n,
max_gen_length=max_gen_length)
files.download(gen_file)
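# The temperature schedule cycles through the list, one entry per generated
# token, then repeats. A small sketch of that cycling:

```python
def temperature_for_token(schedule, index):
    # token i is sampled with schedule[i % len(schedule)], so the list repeats
    return schedule[index % len(schedule)]

schedule = [1.0, 0.5, 0.2, 0.2]
seq = [temperature_for_token(schedule, i) for i in range(6)]
```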
# + [markdown] id="ClJwpF_ACONp" colab_type="text"
# You can download the weights and configuration files in the cell below, allowing you recreate the model on your own computer!
# + id="4RNY6RBI9LmL" colab_type="code" colab={}
files.download('{}_weights.hdf5'.format(model_name))
files.download('{}_vocab.json'.format(model_name))
files.download('{}_config.json'.format(model_name))
# + [markdown] id="oF4-PqF0Fl7R" colab_type="text"
# To recreate the model on your own computer, after installing textgenrnn and TensorFlow, you can create a Python script with:
#
# ```
# from textgenrnn import textgenrnn
# textgen = textgenrnn(weights_path='colaboratory_weights.hdf5',
# vocab_path='colaboratory_vocab.json',
# config_path='colaboratory_config.json')
#
# textgen.generate_samples(max_gen_length=1000)
# textgen.generate_to_file('textgenrnn_texts.txt', max_gen_length=1000)
# ```
#
# Have fun with your new model! :)
# + [markdown] id="92Zjtsb_Dgj-" colab_type="text"
# # Etcetera
#
# If the model fails to load on a local machine with a model-size-mismatch error (common with weights files larger than 30 MB), the cause is a file-export bug in Colaboratory. To work around this issue, save the weights to Google Drive with the two cells below and download them from there.
# + id="F-IzscxUHmAB" colab_type="code" colab={}
# !pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from google.colab import files
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# + id="WR4_XJpfKAIn" colab_type="code" outputId="f70ce499-1062-4968-8423-c7dbcd71cc5d" colab={"base_uri": "https://localhost:8080/", "height": 35}
uploaded = drive.CreateFile({'title': '{}_weights.hdf5'.format(model_name)})
uploaded.SetContentFile('{}_weights.hdf5'.format(model_name))
uploaded.Upload()
print('Uploaded file with ID {}'.format(uploaded.get('id')))
# + [markdown] id="ig-KVgkCDCKD" colab_type="text"
# If the notebook has errors (e.g. GPU Sync Fail), force-kill the Colaboratory virtual machine and restart it with the command below:
# + id="rIHiVP53FnsX" colab_type="code" colab={}
# !kill -9 -1
| Interactive_textgenrnn_Demo_w_GPU.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import torch as tr
# + pycharm={"name": "#%%\n"}
a = tr.tensor([-10.0, 10.0, 10.0])  # float dtype so tr.exp works on the tensor
tr.exp(a) / tr.exp(a).sum()
# + [markdown] pycharm={"name": "#%% md\n"}
# sentences = [
# "How are you",
# "Who are you",
# "Who are they",
# "Who are we",
# "Who am I",
# "Who am I",
# "Where are you going"
# ]
#
# # Replace the toy sentences above with the training dataset
# ## 1. All elements in the sentences are integers; in this model each integer plays the role of a word
#
# ## 2. Read elements from the training data as randomized segments that preserve the original ordering
#
#
# + pycharm={"name": "#%%\n"}
path1 = "C:/Users/gzkei/PycharmProjects/cis667-secretary-problem/data/first_training_set.csv"
path2 = "C:/Users/gzkei/PycharmProjects/cis667-secretary-problem/data/second_training_set.csv"
path3 = "C:/Users/gzkei/PycharmProjects/cis667-secretary-problem//data/third_training_set.csv"
path4 = "C:/Users/gzkei/PycharmProjects/cis667-secretary-problem//data/fourth_training_set.csv"
path5 = "C:/Users/gzkei/PycharmProjects/cis667-secretary-problem//data/fifth_training_set.csv"
# Pick 5000 segments in total (1000 from each dataset, as implied by the arguments below), with randomized segment starts
sentences = []
import number_helpers
sen1 = number_helpers.augment_sentences(path1, 10, 100)
sen2 = number_helpers.augment_sentences(path2, 10, 100)
sen3 = number_helpers.augment_sentences(path3, 10, 100)
sen4 = number_helpers.augment_sentences(path4, 10, 100)
sen5 = number_helpers.augment_sentences(path5, 10, 100)
sentences.extend(sen1)
sentences.extend(sen2)
sentences.extend(sen3)
sentences.extend(sen4)
sentences.extend(sen5)
print(sentences[:5])
print('------------------------------------------------------------------------------------------------------------------------')
print('length: '+str(len(sentences)))
# + [markdown] pycharm={"name": "#%% md\n"}
# # Build the vocabulary
# + pycharm={"name": "#%%\n"}
# Make a dictionary mapping each word to a one-hot tensor
words = set()
for sentence in sentences:
for word in sentence:
#for word in sentence.split(" "):
words.add(word)
words = tuple(words) # deterministic order
print(words[:5])
# PyTorch LSTM expects 3d tensors representing (sequence length, batch size, number of features)
I = tr.eye(len(words))
dictionary = {
word: I[w].reshape(1, 1, len(words))
for w, word in enumerate(words)}
print(len(dictionary))
# + [markdown] pycharm={"name": "#%% md\n"}
# # try to train the LSTM
# + pycharm={"name": "#%%\n"}
# Define a small LSTM recurrent neural network with linear hidden-to-output layer
class Net(tr.nn.Module):
def __init__(self, hidden_size):
super(Net, self).__init__()
self.lstm = tr.nn.LSTM(input_size=len(words), hidden_size=hidden_size)
self.readout = tr.nn.Linear(in_features=hidden_size, out_features=len(words))
def forward(self, x, v=None):
_, v = self.lstm(x) if v is None else self.lstm(x, v) # update hidden from input
h, c = v # LSTM hidden vector and internal so-called "cell state"
y = self.readout(h) # get output from hidden
y = tr.softmax(y, dim=-1) # make sure output is a probability distribution
return y, v
print(Net(3))
# + [markdown] pycharm={"name": "#%% md\n"}
# # next
# + pycharm={"is_executing": true, "name": "#%%\n"}
net = Net(3)
# opt = tr.optim.SGD(net.parameters(), lr=0.001)
opt = tr.optim.Adam(net.parameters(), lr=0.05)
for epoch in range(20000):
batch_loss = 0.
for sentence in sentences:
#tokens = sentence.split(" ")
tokens = sentence
v = None # no hidden activation at first time-step
for t in range(len(tokens) - 1):
y, v = net(dictionary[tokens[t]], v)
y_target = dictionary[tokens[t + 1]]
            loss = tr.sum((y - y_target)**2) # sum of squared errors
#loss = -tr.sum(y_target * tr.log(y)) # Cross-entropy
batch_loss += loss
batch_loss.backward()
opt.step()
opt.zero_grad()
print(epoch, batch_loss.item())
# if epoch % 100 == 0: print(epoch, batch_loss.item())
# + [markdown] pycharm={"name": "#%% md\n"}
# # complete epoch
# ### and try prediction
# + pycharm={"name": "#%%\n"}
# Try predicting
word = "3"
v = None
print(word)
for t in range(3):
x = dictionary[word]
    y, v = net(x, v)  # feed the current word back in (not leftover training tokens)
y = y.squeeze() # ignore singleton dimensions for time-step/example
w = y.argmax()
word = words[w]
prob = y[w]
print(word, prob.item())
# + pycharm={"name": "#%%\n"}
| algorithms/BeamLstmTraining-Copy2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <center>
# <img src="../../img/ods_stickers.jpg">
# ## Open Machine Learning Course. Session 2
# </center>
# Author: Yury Kashnitsky, research programmer at Mail.ru Group and senior lecturer at the Faculty of Computer Science, Higher School of Economics. The material is distributed under the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. It may be used for any purpose (editing, correcting, and building upon) except commercial ones, with mandatory attribution of the author.
# # <center>Lesson 7. Unsupervised Learning
# ## <center>Part 3. Clustering. The k-means method
# The k-means method is one of the most popular clustering algorithms. Its core idea: at each iteration, the center of mass (centroid) of every cluster obtained at the previous step is recomputed, and the objects are then re-assigned to clusters according to which of the new centroids is closest.
#
# More formally, the algorithm takes as input a sample $X_1, \dots, X_N$ and a parameter $k$ specifying the desired number of clusters. Its output is a set of $k$ centroids $\{\mu_1, \dots, \mu_k\}$, which induce the clustering: each object is assigned to its nearest centroid. All points within one cluster are closer to that cluster's centroid than to the centroid of any other cluster.
#
# The method can be stated as an optimization problem, namely minimizing, over centroids and clusters, the total squared deviation of cluster points from the centers of their clusters:
# $$\sum_{i=1}^k \sum_{X_n \in C_i} ||X_n - \mu_i||^2 \rightarrow \min, \text{where $C_i$ is the $i$-th cluster and $\mu_i$ is the center of mass of cluster $C_i$.}$$
#
# Solving this optimization problem exactly is NP-hard, but there is a simple iterative algorithm that finds a local minimum of the functional above. The algorithm alternates between two steps until convergence.
#
# Suppose initial centroid positions $\mu_1, \dots, \mu_k$ have been chosen somehow (e.g., at random).
#
# 1) *Cluster-assignment step.* The sample is partitioned as described above: each object is assigned to the cluster of its nearest centroid. Formally, $$C_i = \{X_n : ||X_n - \mu_i|| \leq ||X_n - \mu_j||, \text{ for all $j \in \{1, \dots, k\}$}\}.$$
#
# 2) *Centroid-update step.* The centroids are recomputed as the centers of mass of the clusters just built. Formally, $$\mu_i = \frac{1}{|C_i|}\sum_{X_n \in C_i} X_n.$$
#
# This process repeats while the centroids and the clustering keep changing. The algorithm is guaranteed to converge, but only to one of the local minima, not necessarily the global one. Another drawback is that the final clustering depends on the choice of the initial cluster centers; in practice the algorithm is run several times from different starting points and the results are aggregated in some way. Note also that the number of clusters must be known in advance; various heuristics exist for choosing a number of clusters that is optimal in some sense.
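# The two alternating steps above can be sketched directly in NumPy. This is a minimal illustration for intuition only; the naive "first k points" initialization and the toy two-blob array are simplifying assumptions, not part of the lesson, and the sketch assumes no cluster ever becomes empty:

```python
import numpy as np

def kmeans_sketch(X, k, n_iter=100):
    # Naive initialization: take the first k points as centroids
    # (real implementations pick them at random or via k-means++).
    centroids = X[:k].astype(float).copy()
    for _ in range(n_iter):
        # Step 1 (assignment): attach every point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 2 (update): move each centroid to the mean of its cluster
        # (assumes every cluster keeps at least one member).
        new_centroids = np.array([X[labels == i].mean(axis=0) for i in range(k)])
        if np.allclose(new_centroids, centroids):
            break  # nothing changed: a local minimum has been reached
        centroids = new_centroids
    return centroids, labels

two_blobs = np.array([[0., 0.], [10., 10.], [0., 1.], [1., 0.], [10., 11.], [11., 10.]])
centers, labels = kmeans_sketch(two_blobs, 2)
```

On this toy input the two blobs are separated after a single update, and the centroids settle at the blob means.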
# ### Example: clustering NBA players
# About the players' <a href="http://www.databasebasketball.com/about/aboutstats.htm">features</a>.
# +
import numpy as np
import pandas as pd
# %matplotlib inline
import matplotlib.pyplot as plt
nba = pd.read_csv("../../data/nba_2013.csv")
nba.head(3)
# +
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
kmeans = KMeans(n_clusters=5, random_state=1)
numeric_cols = nba._get_numeric_data().dropna(axis=1)
kmeans.fit(numeric_cols)
# Visualizing using PCA
pca = PCA(n_components=2)
res = pca.fit_transform(numeric_cols)
plt.figure(figsize=(12,8))
plt.scatter(res[:,0], res[:,1], c=kmeans.labels_, s=50, cmap='viridis')
plt.title('PCA')
# Visualizing using 2 features: Total points vs. Total assists
plt.figure(figsize=(12,8))
plt.scatter(nba['pts'], nba['ast'], c=kmeans.labels_, s=50, cmap='viridis')
plt.xlabel('Total points')
plt.ylabel('Total assists')
# Visualizing using 2 features: Age vs. Minutes played
plt.figure(figsize=(12,8))
plt.scatter(nba['age'], nba['mp'], c=kmeans.labels_, s=50, cmap='viridis')
plt.xlabel('Age')
plt.ylabel('Minutes played');
# -
# ### Centroid initialization
#
# The `sklearn.KMeans` estimator has the parameters `n_init` (the number of runs from different starting points) and `init`. There are three ways to initialize the centroids:
# - `k-means++` – "smart" centroid initialization that speeds up convergence.
# - `random` – random centroid initialization.
# - `ndarray` – user-supplied initial centroids.
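# For instance, the three initialization modes can be exercised like this (a small sketch on synthetic data; the uniform random dataset and the explicit starting centers are arbitrary choices made for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.RandomState(42).rand(100, 2)  # synthetic 2-D data

# "Smart" seeding (the default): spreads the initial centroids apart.
km_pp = KMeans(n_clusters=3, init='k-means++', n_init=10, random_state=1).fit(X)

# Purely random seeding, keeping the best of 10 restarts.
km_rand = KMeans(n_clusters=3, init='random', n_init=10, random_state=1).fit(X)

# Explicit initial centroids passed as an ndarray (use n_init=1 here).
start = np.array([[0.2, 0.2], [0.5, 0.8], [0.8, 0.3]])
km_fixed = KMeans(n_clusters=3, init=start, n_init=1).fit(X)
```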
# ## Image compression with k-means
# +
import matplotlib.image as mpimg
img = mpimg.imread('../../img/woman.jpg')[..., 1]
plt.figure(figsize = (20, 12))
plt.axis('off')
plt.imshow(img, cmap='gray');
# +
from scipy.stats import randint
from sklearn.cluster import MiniBatchKMeans
X = img.reshape((-1, 1))
k_means = MiniBatchKMeans(n_clusters=3)
k_means.fit(X)
values = k_means.cluster_centers_
labels = k_means.labels_
img_compressed = values[labels].reshape(img.shape)
plt.figure(figsize = (20, 12))
plt.axis('off')
plt.imshow(img_compressed, cmap = 'gray');
# -
# # Finding topics in texts
# **Let's apply KMeans to cluster texts from 4 news categories.**
# +
from time import time
from sklearn import metrics
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfTransformer, TfidfVectorizer
from sklearn.preprocessing import Normalizer
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space']
print("Loading 20 newsgroups dataset for categories:")
print(categories)
dataset = fetch_20newsgroups(subset='all', categories=categories,
shuffle=True, random_state=42)
print("%d documents" % len(dataset.data))
print("%d categories" % len(dataset.target_names))
labels = dataset.target
true_k = np.unique(labels).shape[0]
# -
# **Encode the texts with TF-IDF features.**
# +
print("Extracting features from the training dataset using a sparse vectorizer")
vectorizer = TfidfVectorizer(max_df=0.5, max_features=1000,
min_df=2, stop_words='english')
X = vectorizer.fit_transform(dataset.data)
print("n_samples: %d, n_features: %d" % X.shape)
# -
# **Now apply the $k$-means method to the resulting vectors.**
# +
km = KMeans(n_clusters=true_k, init='k-means++', max_iter=100, n_init=1)
print("Clustering sparse data with %s" % km)
t0 = time()
km.fit(X)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels, km.labels_))
print("Completeness: %0.3f" % metrics.completeness_score(labels, km.labels_))
print("V-measure: %0.3f" % metrics.v_measure_score(labels, km.labels_))
print("Adjusted Rand-Index: %.3f"
% metrics.adjusted_rand_score(labels, km.labels_))
print("Silhouette Coefficient: %0.3f"
% metrics.silhouette_score(X, km.labels_, sample_size=1000))
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
# -
# **Print the words corresponding to the highest-weighted components of the cluster centers.**
terms = vectorizer.get_feature_names()
for i in range(true_k):
print("Cluster %d:" % (i + 1), end='')
for ind in order_centroids[i, :10]:
print(' %s' % terms[ind], end='')
print()
# ## Clustering handwritten digits
# +
from sklearn.datasets import load_digits
digits = load_digits()
X, y = digits.data, digits.target
# -
kmeans = KMeans(n_clusters=10)
kmeans.fit(X)
# +
from sklearn.metrics import adjusted_rand_score
adjusted_rand_score(y, kmeans.predict(X))
# -
_, axes = plt.subplots(2, 5)
for ax, center in zip(axes.ravel(), kmeans.cluster_centers_):
ax.matshow(center.reshape(8, 8), cmap=plt.cm.gray)
ax.set_xticks(())
ax.set_yticks(())
# ## Useful links
# - <a href="https://en.wikipedia.org/wiki/K-means_clustering">k-means</a> on Wikipedia
# - An <a href="">article</a> on Habrahabr about fuzzy clustering of cities by socio-economic indicators
| jupyter_russian/topic07_unsupervised/lesson7_part3_kmeans.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Supervised Anomaly Detection with Random Forest
#
# ## Setup
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sqlite3
with sqlite3.connect('../../ch_11/logs/logs.db') as conn:
logs_2018 = pd.read_sql(
'SELECT * FROM logs WHERE datetime BETWEEN "2018-01-01" AND "2019-01-01";',
conn, parse_dates=['datetime'], index_col='datetime'
)
hackers_2018 = pd.read_sql(
'SELECT * FROM attacks WHERE start BETWEEN "2018-01-01" AND "2019-01-01";',
conn, parse_dates=['start', 'end']
).assign(
duration=lambda x: x.end - x.start,
start_floor=lambda x: x.start.dt.floor('min'),
end_ceil=lambda x: x.end.dt.ceil('min')
)
# -
# ## Get training and testing sets
# +
def get_X(log, day):
"""
Get data we can use for the X
Parameters:
- log: The logs dataframe
- day: A day or single value we can use as a datetime index slice
Returns:
A `pandas.DataFrame` object
"""
return pd.get_dummies(log.loc[day].assign(
failures=lambda x: 1 - x.success
).query('failures > 0').resample('1min').agg(
{'username': 'nunique', 'failures': 'sum'}
).dropna().rename(
columns={'username': 'usernames_with_failures'}
).assign(
day_of_week=lambda x: x.index.dayofweek,
hour=lambda x: x.index.hour
).drop(columns=['failures']), columns=['day_of_week', 'hour'])
def get_y(datetimes, hackers, resolution='1min'):
"""
Get data we can use for the y (whether or not a hacker attempted a log in during that time).
Parameters:
- datetimes: The datetimes to check for hackers
- hackers: The dataframe indicating when the attacks started and stopped
- resolution: The granularity of the datetime. Default is 1 minute.
Returns:
`pandas.Series` of Booleans.
"""
date_ranges = hackers.apply(
lambda x: pd.date_range(x.start_floor, x.end_ceil, freq=resolution),
axis=1
)
dates = pd.Series(dtype='object')
for date_range in date_ranges:
dates = pd.concat([dates, date_range.to_series()])
return datetimes.isin(dates)
def get_X_y(log, day, hackers):
"""
Get the X, y data to build a model with.
Parameters:
- log: The logs dataframe
- day: A day or single value we can use as a datetime index slice
- hackers: The dataframe indicating when the attacks started and stopped
Returns:
X, y tuple where X is a `pandas.DataFrame` object
and y is a `pandas.Series` object
"""
X = get_X(log, day)
y = get_y(X.reset_index().datetime, hackers)
return X, y
# -
# Train on January 2018; test on February 2018:
X_train, y_train = get_X_y(logs_2018, '2018-01', hackers_2018)
X_test, y_test = get_X_y(logs_2018, '2018-02', hackers_2018)
# ## Random Forest
# Accepting all the defaults.
# +
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(random_state=0, n_estimators=100).fit(X_train, y_train)
# -
# ## Evaluate the Model
# Performance is pretty good:
# +
from sklearn.metrics import classification_report
print(classification_report(y_test, rf.predict(X_test)))
# -
# Examine the plots:
# +
from ml_utils.classification import confusion_matrix_visual, plot_pr_curve, plot_roc
fig, axes = plt.subplots(1, 3, figsize=(20, 5))
plot_roc(y_test, rf.predict_proba(X_test)[:,1], ax=axes[0])
confusion_matrix_visual(y_test, rf.predict(X_test), ax=axes[1], class_labels=[False, True])
plot_pr_curve(y_test, rf.predict_proba(X_test)[:,1], ax=axes[2])
plt.suptitle('Random Forest Classifier')
| solutions/ch_11/exercise_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # One Table To Rule Them All: Radio
#
# This notebook generates a table of radio components in the CDFS and ELAIS-S1 fields, according to various incarnations of the ATLAS survey. To run it, you will need a MongoDB server with the RGZ database loaded. All other data is fetched from the internet.
#
# In the following cell, specify the MongoDB server details:
MONGO_HOST = 'localhost'
MONGO_PORT = 27017
# In this cell, specify if you have access to a crowdastro output file (crowdastro.h5), and if so, where it is:
USING_CROWDASTRO = True
CROWDASTRO_PATH = 'crowdastro-swire.h5'
# To get this file, run `crowdastro import_data --ir swire`.
# In this cell, specify if you have access to a CSV of the Fan et al. (2015) cross-identifications, and if so, where it is:
USING_FAN = True
FAN_PATH = 'J:/repos/crowdastro/data/fan_2015.csv'
# Next, we will fetch the resources we need.
NORRIS_COMPONENTS_URI = 'http://www.atnf.csiro.au/people/rnorris/papers/n202/tab4.txt'
NORRIS_CROSS_IDENTIFICATIONS_URI = 'http://www.atnf.csiro.au/people/rnorris/papers/n202/tab6.txt'
MIDDELBERG_COMPONENTS_URI = 'http://iopscience.iop.org/article/10.1086/508275/fulltext/datafile4.txt'
MIDDELBERG_CROSS_IDENTIFICATIONS_URI = 'http://iopscience.iop.org/article/10.1086/508275/fulltext/datafile6.txt'
# Load Norris components.
import requests, io, astropy.io.ascii as asc, astropy.table, pandas
norris_components = astropy.table.Table.from_pandas(
pandas.read_fwf(
io.StringIO(
requests.get(NORRIS_COMPONENTS_URI).text
),
skiprows=[0, 2],
header=0,
widths=map(len, [
' # ',
'Name ',
'Radio RA ',
'Radio dec ',
'err(RA) ',
'err(dec) ',
'Peak Flux ',
'Int flux ',
'Bmaj ',
'Bmin ',
' Bpa ',
' rms ',
])
)
)
norris_components
# +
# Load Norris cross-identifications.
# This table has inconsistent tabs, so we will have to convert them to "soft tabs".
def replace_tabs(s, tabstop=8):
"""Convert tabs to spaces."""
out = ''
upto = 0
last = None
for c in s:
if c == '\t':
# Fill up to next tabstop.
diff = tabstop - upto % tabstop
if diff == 0:
diff = tabstop
out += ' ' * diff
upto += diff
last = c
continue
last = c
out += c
upto += 1
return out
test_input = ('S001 ATCDFS_J032602.78-284709.0 C001 SWIRE3_J032603.15-284708.5 3:26:02.785 -28:47:09.06 1.4 33.8 21.1 -1.0 -1.0 -1.0 4 looks like a group in irac 1')
test_output = ('S001 ATCDFS_J032602.78-284709.0 C001 SWIRE3_J032603.15-284708.5 3:26:02.785 -28:47:09.06 1.4 33.8 21.1 -1.0 -1.0 -1.0 4 looks like a group in irac 1')
assert test_output == replace_tabs(test_input)
norris_cross_identifications = astropy.table.Table.from_pandas(
pandas.read_fwf(
io.StringIO(
'\n'.join(map(
lambda s: replace_tabs(s, 8),
requests.get(NORRIS_CROSS_IDENTIFICATIONS_URI).text.split('\r\n'))
)
),
skiprows=[0, 2],
header=0,
widths=[8, 32, 20, 28, 16, 16, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 16, 8, 16]
)
)
norris_cross_identifications[700:710]
# -
# Load Middelberg tables.
middelberg_components = asc.read(MIDDELBERG_COMPONENTS_URI)
print(middelberg_components[0])
middelberg_cross_identifications = asc.read(MIDDELBERG_CROSS_IDENTIFICATIONS_URI)
print(middelberg_cross_identifications[0])
# +
# Convert Middelberg data into columns. There's no catalogue matching to do here so we can
# throw everything in right away.
import astropy.coordinates
_middelberg_component_ids = middelberg_components['ID']
_middelberg_component_names = middelberg_components['Name']
_middelberg_component_positions = [
astropy.coordinates.SkyCoord(ra=(r['RAh'], r['RAm'], r['RAs']),
dec=(-r['DEd'], r['DEm'], r['DEs']),
unit=('hourangle', 'deg'))
for r in middelberg_components
]
_middelberg_component_ras = [r.ra.deg for r in _middelberg_component_positions]
_middelberg_component_decs = [r.dec.deg for r in _middelberg_component_positions]
_middelberg_source_ids = middelberg_components['ID']
_middelberg_cid_to_source_id = {}
_middelberg_cid_to_source_name = {}
_middelberg_cid_to_swire = {}
_middelberg_cid_to_source_z = {}
_middelberg_cid_to_source_ra = {}
_middelberg_cid_to_source_dec = {}
for row in middelberg_cross_identifications:
for component in row['CID'].split(','):
component = component.strip()
_middelberg_cid_to_source_id[component] = row['ID']
_middelberg_cid_to_source_name[component] = row['Name']
_middelberg_cid_to_swire[component] = row['SName']
_middelberg_cid_to_source_z[component] = row['z']
pos = astropy.coordinates.SkyCoord(ra=(row['RAh'], row['RAm'], row['RAs']),
dec=(-row['DEd'], row['DEm'], row['DEs']),
unit=('hourangle', 'deg'))
_middelberg_cid_to_source_ra[component] = pos.ra.deg
_middelberg_cid_to_source_dec[component] = pos.dec.deg
_middelberg_component_source_ids = [_middelberg_cid_to_source_id[c] for c in _middelberg_component_ids]
_middelberg_component_source_names = [_middelberg_cid_to_source_name[c] for c in _middelberg_component_ids]
_middelberg_component_swires = [_middelberg_cid_to_swire[c] for c in _middelberg_component_ids]
_middelberg_component_source_zs = [_middelberg_cid_to_source_z[c] for c in _middelberg_component_ids]
_middelberg_component_source_ras = [_middelberg_cid_to_source_ra[c] for c in _middelberg_component_ids]
_middelberg_component_source_decs = [_middelberg_cid_to_source_dec[c] for c in _middelberg_component_ids]
# +
# Load RGZ.
import pymongo, numpy
client = pymongo.MongoClient(MONGO_HOST, MONGO_PORT)
db = client['radio']
_rgz_sources = []
_rgz_coords = []
_rgz_zids = []
for subject in db.radio_subjects.find({'metadata.survey': 'atlas'}):
source = subject['metadata']['source']
ra, dec = subject['coords']
zid = subject['zooniverse_id']
_rgz_sources.append(source)
_rgz_coords.append((ra, dec))
_rgz_zids.append(zid)
_rgz_coords = numpy.array(_rgz_coords)
# -
# Load consensuses from crowdastro.
import h5py
with h5py.File(CROWDASTRO_PATH, 'r') as crowdastro_h5:
# (atlas_i, ir_i, success, percentage)
_crowdastro_consensus_objects = crowdastro_h5['/atlas/cdfs/consensus_objects']
_crowdastro_zids = [r[0].decode('ascii') for r in crowdastro_h5['/atlas/cdfs/string']]
_crowdastro_swire_names = [r.decode('ascii') for r in crowdastro_h5['/swire/cdfs/string']]
_crowdastro_zid_to_swire = {}
for atlas_i, ir_i, success, percentage in _crowdastro_consensus_objects:
_crowdastro_zid_to_swire[_crowdastro_zids[int(atlas_i)]] = _crowdastro_swire_names[int(ir_i)]
# +
# Match RGZ to Norris.
import scipy.spatial
_rgz_zid_to_norris = {} # Maps ZID -> Norris CID
_norris_cids = [r['#'] for r in norris_components]
_norris_coords = [astropy.coordinates.SkyCoord(
ra=r['Radio RA'],
dec=r['Radio dec'],
unit=('hourangle', 'deg')) for r in norris_components]
_norris_coords = numpy.array([(p.ra.deg, p.dec.deg) for p in _norris_coords])
_norris_tree = scipy.spatial.KDTree(_norris_coords)
# Assume that there are no situations where one Norris component maps to multiple RGZ components (and vice versa).
_dists, _indices = _norris_tree.query(_rgz_coords)
_matches = _dists < 3 / 60 / 60
for zid, match, index in zip(_rgz_zids, _matches, _indices):
if not match:
continue
_rgz_zid_to_norris[zid] = _norris_cids[index]
_norris_to_rgz_zid = {j:i for i, j in _rgz_zid_to_norris.items()}
# -
# Load Fan.
fan_cross_identifications = asc.read(FAN_PATH, header_start=0, delimiter=',')
_fan_source_ids = fan_cross_identifications['id']
_fan_id_to_swire = {r['id']:r['swire'] for r in fan_cross_identifications}
# Assuming that CID in Fan = CID in Norris.
_fan_component_to_source = {}
_fan_component_to_swire = {}
for row in fan_cross_identifications:
components = row['radios'].split(',')
for component in components:
component = component.strip()
_fan_component_to_source[component] = row['id']
_fan_component_to_swire[component] = row['swire']
# Now, we can construct the table. We will have the following columns:
#
# - Key
# - Component ID (Norris)
# - Source ID (Norris)
# - Source Name (Norris)
# - SWIRE Name (Norris)
# - RA (Norris)
# - Dec (Norris)
# - Source RA (Norris)
# - Source Dec (Norris)
# - Component ID (RGZ)
# - Zooniverse ID (RGZ)
# - SWIRE Name (RGZ-MV)
# - RA (RGZ)
# - Dec (RGZ)
# - Source ID (Fan)
# - SWIRE Name (Fan)
# - Component ID (Middelberg)
# - Component Name (Middelberg)
# - RA (Middelberg)
# - Dec (Middelberg)
# - Source ID (Middelberg)
# - Source Name (Middelberg)
# - SWIRE Name (Middelberg)
# - Source RA (Middelberg)
# - Source Dec (Middelberg)
# - Source Redshift (Middelberg)
# +
columns = [
'Key', 'Component ID (Norris)', 'Source ID (Norris)', 'Source Name (Norris)',
'SWIRE Name (Norris)', 'RA (Norris)', 'Dec (Norris)', 'Source RA (Norris)', 'Source Dec (Norris)',
'Component ID (RGZ)', 'Zooniverse ID (RGZ)', 'SWIRE Name (RGZ)', 'RA (RGZ)', 'Dec (RGZ)',
'Source ID (Fan)', 'SWIRE Name (Fan)',
'Component ID (Middelberg)', 'Component Name (Middelberg)', 'RA (Middelberg)',
'Dec (Middelberg)', 'Source ID (Middelberg)', 'Source Name (Middelberg)',
'SWIRE Name (Middelberg)', 'Source RA (Middelberg)', 'Source Dec (Middelberg)',
'Source Redshift (Middelberg)',
]
# +
import astropy.coordinates
# Component ID (Norris)
component_ids_norris = [r['#'] for r in norris_components]
# Source ID (Norris)
_component_to_source = {}
for r in norris_cross_identifications:
for component in r['Component'].split(','):
_component_to_source[component.strip()] = r['#']
source_ids_norris = [_component_to_source[c] for c in component_ids_norris]
# Source Name (Norris)
_source_to_name = {r['#']:r['Name'] for r in norris_cross_identifications}
source_names_norris = [_source_to_name[s] for s in source_ids_norris]
# SWIRE Name (Norris)
_source_to_swire_norris = {r['#']:r['SWIRE'] for r in norris_cross_identifications}
swire_names_norris = [_source_to_swire_norris[s] for s in source_ids_norris]
# RA (Norris), Dec (Norris)
_positions_norris = [astropy.coordinates.SkyCoord(
ra=r['Radio RA'],
dec=r['Radio dec'],
unit=('hourangle', 'deg')) for r in norris_components]
ras_norris = [p.ra.deg for p in _positions_norris]
decs_norris = [p.dec.deg for p in _positions_norris]
# Source RA (Norris), Source Dec (Norris)
_source_positions_norris = [astropy.coordinates.SkyCoord(
ra=r['Radio RA'],
dec=r['Radio dec'],
unit=('hourangle', 'deg')) for r in norris_cross_identifications]
_source_id_to_position_norris = dict(zip(norris_cross_identifications['#'], _source_positions_norris))
source_ras_norris = [_source_id_to_position_norris[s].ra.deg for s in source_ids_norris]
source_decs_norris = [_source_id_to_position_norris[s].dec.deg for s in source_ids_norris]
# Zooniverse ID (RGZ)
zooniverse_ids_rgz = [_norris_to_rgz_zid.get(cid, '') for cid in component_ids_norris]
# Component ID (RGZ)
_zid_to_cid = {z:c for z, c in zip(_rgz_zids, _rgz_sources)}
_zid_to_coord = {z:p for z, p in zip(_rgz_zids, _rgz_coords)}
component_ids_rgz = [_zid_to_cid.get(z, '') for z in zooniverse_ids_rgz]
# Extend all of these columns by RGZ objects with no corresponding Norris object.
_zid_no_norris = [z for z in _rgz_zids if z not in _rgz_zid_to_norris]
_cid_no_norris = [_zid_to_cid.get(z, '') for z in _zid_no_norris]
_blank_no_norris = [''] * len(_zid_no_norris)
for l in [component_ids_norris, source_ids_norris, source_names_norris,
swire_names_norris, ras_norris, decs_norris, source_ras_norris,
source_decs_norris]:
l.extend(_blank_no_norris)
zooniverse_ids_rgz.extend(_zid_no_norris)
component_ids_rgz.extend(_cid_no_norris)
# RA (RGZ), Dec (RGZ)
ras_rgz = [_zid_to_coord.get(z, ('', ''))[0] for z in zooniverse_ids_rgz]
decs_rgz = [_zid_to_coord.get(z, ('', ''))[1] for z in zooniverse_ids_rgz]
# SWIRE Name (RGZ)
swire_names_rgz = [_crowdastro_zid_to_swire.get(z, '') for z in zooniverse_ids_rgz]
# Source ID (Fan)
fan_source_ids = [_fan_component_to_source.get(cid, '') for cid in component_ids_norris]
# SWIRE Name (Fan)
fan_swire_names = [_fan_component_to_swire.get(cid, '') for cid in component_ids_norris]
# Pad out the Middelberg columns.
middelberg_component_ids = [''] * len(component_ids_norris) + list(_middelberg_component_ids)
middelberg_component_names = [''] * len(component_ids_norris) + list(_middelberg_component_names)
middelberg_component_ras = [''] * len(component_ids_norris) + list(_middelberg_component_ras)
middelberg_component_decs = [''] * len(component_ids_norris) + list(_middelberg_component_decs)
middelberg_component_source_ids = [''] * len(component_ids_norris) + list(_middelberg_component_source_ids)
middelberg_component_source_names = [''] * len(component_ids_norris) + list(_middelberg_component_source_names)
middelberg_component_swires = [''] * len(component_ids_norris) + list(_middelberg_component_swires)
middelberg_component_source_ras = [''] * len(component_ids_norris) + list(_middelberg_component_source_ras)
middelberg_component_source_decs = [''] * len(component_ids_norris) + list(_middelberg_component_source_decs)
middelberg_component_source_zs = [''] * len(component_ids_norris) + list(_middelberg_component_source_zs)
# Pad out the other columns.
for l in [component_ids_norris, source_ids_norris, source_names_norris,
swire_names_norris, ras_norris, decs_norris, component_ids_rgz,
zooniverse_ids_rgz, swire_names_rgz, ras_rgz, decs_rgz,
fan_source_ids, fan_swire_names, source_ras_norris, source_decs_norris]:
l.extend([''] * len(_middelberg_component_ids))
keys = list(range(len(component_ids_norris)))
table = astropy.table.Table(data=[keys, component_ids_norris, source_ids_norris, source_names_norris,
swire_names_norris, ras_norris, decs_norris, source_ras_norris,
source_decs_norris,
component_ids_rgz, zooniverse_ids_rgz, swire_names_rgz, ras_rgz, decs_rgz,
fan_source_ids, fan_swire_names,
middelberg_component_ids, middelberg_component_names,
middelberg_component_ras, middelberg_component_decs,
middelberg_component_source_ids, middelberg_component_source_names,
middelberg_component_swires, middelberg_component_source_ras,
middelberg_component_source_decs, middelberg_component_source_zs,
],
names=columns)
table
# -
table.write('one-table-to-rule-them-all.tbl', format='ascii')
| scripts/one-table-to-rule-them-all/radio_table.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Import the libraries
import pandas as pd
# Tell matplotlib to render plots inline
# %matplotlib inline
# +
# Import only some of the columns from the file:
# 3 - Hospital, 6 - Municipality, 7 - Complexity, 8 - Type of care,
# 12 - Procedure subgroup, 14 - Procedure
# Load the CSV file
df = pd.read_csv('sih-janeiro-2017-cirurgias-eletiva-e-emergencia.csv', sep=';', encoding='cp1252',
                 usecols=[3, 6, 7, 8, 12, 14])
# Rename the columns
df.columns = ['Hospital', 'Municipio', 'Complexidade', 'Carater atendimento', 'Sub grupo procedimento', 'Procedimento']
# Show the first rows of the DataFrame
df.head(3)
# -
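# The `usecols`/rename pattern above can be tried without the real SIH file. This is a
# sketch with a hypothetical in-memory CSV (the real file uses `sep=';'` and cp1252 encoding):

```python
import io
import pandas as pd

# A tiny stand-in for the SIH file: keep only columns 0 and 2 by position
csv_text = "A;B;C\n1;x;10\n2;y;20\n"
df_demo = pd.read_csv(io.StringIO(csv_text), sep=';', usecols=[0, 2])

# Rename the selected columns, exactly as done with the real data above
df_demo.columns = ['Id', 'Valor']
print(list(df_demo.columns))  # ['Id', 'Valor']
print(len(df_demo))           # 2
```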
# Describe the numeric columns
df.describe()
# List the hospitals
df['Hospital'].unique()
# How many surgeries were performed at each hospital
df['Hospital'].value_counts()
# How many surgeries per procedure subgroup
df['Sub grupo procedimento'].value_counts()
# Plot a bar chart by procedure subgroup
df['Sub grupo procedimento'].value_counts().plot.bar()
# +
# Create a subset of the original data
df_hospbase = df[df['Hospital'] == '0010456 HBDF HOSPITAL DE BASE DO DISTRITO FEDERAL']
# First rows
df_hospbase.head(2)
# -
# Last rows of the DataFrame
df_hospbase.tail(4)
# Show random rows of the DataFrame
df_hospbase.sample(5)
# Number of procedures performed at Hospital de Base
df_hospbase['Procedimento'].value_counts()
# Count the rows where the 'Procedimento' column contains the word 'AMPUTA'
df_hospbase[df_hospbase['Procedimento'].str.contains('AMPUTA')].count()
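# Note that `.count()` on the filtered DataFrame counts non-null values per column;
# summing the boolean mask gives the number of matching rows directly. A sketch with a
# hypothetical miniature of the data:

```python
import pandas as pd

# Hypothetical stand-in for the 'Procedimento' column
df_demo = pd.Series(['AMPUTACAO DE PERNA', 'PARTO CESARIANO', 'AMPUTACAO DE PE'],
                    name='Procedimento').to_frame()

# Boolean mask of rows containing the substring; its sum is the row count
mask = df_demo['Procedimento'].str.contains('AMPUTA')
print(int(mask.sum()))  # 2
```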
# +
# Restrict the original DataFrame to the
# cesarean delivery ("parto cesariano") procedure only
df_parto_cesariano = df[df['Procedimento'] == 'PARTO CESARIANO']
# Show the first rows
df_parto_cesariano.head()
# -
# Number of cesarean deliveries per hospital
df_parto_cesariano['Hospital'].value_counts()
# Number of deliveries by type of care
df_parto_cesariano['Carater atendimento'].value_counts()
# Plot a horizontal bar chart
df_parto_cesariano['Hospital'].value_counts().plot.barh()
# Improve the horizontal bar chart:
# reverse the order and add a title
df_parto_cesariano['Hospital'].value_counts(ascending=True).plot.barh(title='Number of cesarean deliveries per hospital')
| csv_cirurgias.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Series.str.extract()
# * expand = True
# #### Series
import pandas as pd
s = pd.Series(['sooyeon','jihyun','seola'])
display(s.index, s.values)
# #### index
type(s.index)
list(s.index)
# #### values
s.values
type(s.values)
# ### expand parameter
# - True: returns a DataFrame
# - False: returns a Series
s
s.str.extract(r'[sj](\D)')
# [sj]: matches one of the lowercase letters 's' or 'j'
# (\D): captures the following non-digit character
# ### Default: expand = True
# > `s.str.extract(r'[sj](\D)', expand=True)`
# #### type → DataFrame
type(s.str.extract(r'[sj](\D)'))
# ### expand = False
s.str.extract(r'[sj](\D)', expand=False)
type(s.str.extract(r'[sj](\D)', expand=False))
# #### type → Series
# ## Two capture groups: expand no longer changes the return type
s.str.extract(r'([sj])(\D)')
s.str.extract(r'([sj])(\D)', expand=True)
s.str.extract(r'([sj])(\D)', expand=False)
type(s.str.extract(r'([sj])(\D)', expand=False))
# #### The expand argument only changes the result when there is a single capture group.
# > With a single group there is only one column for the group, <br>so the result can also be returned as a Series.
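# The rule above can be checked end to end; a minimal sketch with the same Series:

```python
import pandas as pd

s = pd.Series(['sooyeon', 'jihyun', 'seola'])

# One group: expand decides between DataFrame (default) and Series
one_group_df = s.str.extract(r'[sj](\D)')                 # DataFrame
one_group_sr = s.str.extract(r'[sj](\D)', expand=False)   # Series

# Two groups: always a DataFrame, even with expand=False
two_groups = s.str.extract(r'([sj])(\D)', expand=False)

print(type(one_group_df).__name__)  # DataFrame
print(type(one_group_sr).__name__)  # Series
print(type(two_groups).__name__)    # DataFrame
```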
titanic = pd.read_csv('titanic.csv')
titanic.head(3)
# +
for data in [titanic]:
    data['Title'] = data['Name'].str.extract(r'([A-Za-z]+)\.')
data[0:3] # data.head(3)
# -
titanic['Title'].head(3)
expand_false = titanic['Name'].str.extract(r'([A-Za-z]+)\.', expand=False) # Series
print(type(expand_false))
expand_false[0:3]
expand_false.head(3)
| practice/Series_extract.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="4U_l_eGYrWVE" colab_type="text"
# # FARM Building Blocks
#
# Welcome to the FARM building blocks tutorial! There are many different ways to make use of this repository, but in this notebook, we will be going through the most important building blocks that will help you harvest the rewards of a successfully trained NLP model.
#
# Happy FARMing!
# + [markdown] id="nKnv3c03rWVH" colab_type="text"
# ## 1) Text Classification
#
# GermEval 2018 (https://projects.fzai.h-da.de/iggsa/) is an open data set containing texts that need to be classified by whether or not they are offensive. There is a set of coarse and a set of fine labels, but here we will only be looking at the coarse set, which labels each example as either OFFENSE or OTHER. To tackle this task, we are going to build a classifier composed of Google's BERT language model and a feed-forward neural network prediction head.
# + [markdown] id="hULXmgZSrWVJ" colab_type="text"
# ### Setup
# + id="ShqSNsNTrWVL" colab_type="code" outputId="3866129c-f0de-4f62-8f59-049bd74ec79f" colab={"base_uri": "https://localhost:8080/", "height": 54}
# Let's start by adjusting the working directory so that it is the root of the repository
# This should be run just once.
import os
os.chdir('../')
print("Current working directory is {}".format(os.getcwd()))
# + id="kPHWNPNBrWVT" colab_type="code" outputId="9ef6693b-a47e-4b37-c801-30419fb9359d" colab={"base_uri": "https://localhost:8080/", "height": 71}
# Here are the imports we need
import torch
from farm.modeling.tokenization import Tokenizer
from farm.data_handler.processor import TextClassificationProcessor
from farm.data_handler.data_silo import DataSilo
from farm.modeling.language_model import LanguageModel
from farm.modeling.prediction_head import TextClassificationHead
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.optimization import initialize_optimizer
from farm.train import Trainer
from farm.utils import MLFlowLogger
# + id="RYpdtDGLrWVb" colab_type="code" outputId="20a15b95-9f74-49ad-8e6c-aee3c3e8a3e7" colab={"base_uri": "https://localhost:8080/", "height": 335}
# Farm allows simple logging of many parameters & metrics. Let's use MLflow framework to track our experiment ...
ml_logger = MLFlowLogger(tracking_uri="https://public-mlflow.deepset.ai/")
ml_logger.init_experiment(experiment_name="Public_FARM", run_name="Tutorial1_Colab")
# + id="-JxI84OErWVi" colab_type="code" outputId="78254dc0-d408-4e51-b06c-e5542b03a309" colab={"base_uri": "https://localhost:8080/", "height": 54}
# We need to fetch the right device to drive the growth of our model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Devices available: {}".format(device))
# + [markdown] id="QyZMrssgrWVn" colab_type="text"
# ### Data Handling
# + id="TUU-Gq-XrWVo" colab_type="code" outputId="74a4cabe-1fcf-4869-8583-d6cffaa80785" colab={"base_uri": "https://localhost:8080/", "height": 71}
# Here we initialize a tokenizer that will be used for preprocessing text
# This is the BERT Tokenizer which uses the byte pair encoding method.
# It is currently loaded with a German model
tokenizer = Tokenizer.load(
pretrained_model_name_or_path="bert-base-german-cased",
do_lower_case=False)
# + id="pAJ2QrnPrWVt" colab_type="code" colab={}
# In order to prepare the data for the model, we need a set of
# functions to transform data files into PyTorch Datasets.
# We group these together in Processor objects.
# We will need a new Processor object for each new source of data.
# The abstract class can be found in farm.data_handling.processor.Processor
LABEL_LIST = ["OTHER", "OFFENSE"]
processor = TextClassificationProcessor(tokenizer=tokenizer,
max_seq_len=128,
data_dir="data/germeval18",
label_list=LABEL_LIST,
metric="f1_macro",
label_column_name="coarse_label")
# + id="WR5jFJEErWV0" colab_type="code" outputId="f0a8b748-8097-4617-8fdc-156b3d420527" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# We need a DataSilo in order to keep our train, dev and test sets separate.
# The DataSilo will call the functions in the Processor to generate these sets.
# From the DataSilo, we can fetch a PyTorch DataLoader object which will
# be passed on to the model.
# Here is a good place to define a batch size for the model
BATCH_SIZE = 32
data_silo = DataSilo(
processor=processor,
batch_size=BATCH_SIZE)
# + [markdown] id="HxLmbuzarWV6" colab_type="text"
# ### Modeling
# + [markdown] id="pg3GfYkbrWV6" colab_type="text"
# In FARM, we make a strong distinction between the language model and prediction head so that you can mix and match different building blocks for your needs.
#
# For example, in the transfer learning paradigm, you might have one language model that you will be using for both document classification and NER. Or perhaps you have a pretrained language model which you would like to adapt to your domain, then use for a downstream task such as question answering.
#
# All this is possible within FARM and requires only the replacement of a few modular components, as we shall see below.
#
# Let's first have a look at how we might set up a model.
# + id="bonmw3uErWV7" colab_type="code" outputId="84fa93e5-cd2f-4e8e-f18d-ec44e3bf122a" colab={"base_uri": "https://localhost:8080/", "height": 71}
# The language model is the foundation on which modern NLP systems are built.
# They encapsulate a general understanding of sentence semantics
# and are not specific to any one task.
# Here we are using Google's BERT model as implemented by HuggingFace.
# The model being loaded is a German model that we trained.
# You can also change MODEL_NAME_OR_PATH to point to a BERT model that you
# have saved yourself, or to any compatible model from the HuggingFace model hub.
# See https://huggingface.co/models for a list of available models
MODEL_NAME_OR_PATH = "bert-base-german-cased"
language_model = LanguageModel.load(MODEL_NAME_OR_PATH)
# + id="kWqLW5x2rWV_" colab_type="code" outputId="9c7398a7-9ee7-4805-ab95-3f3c8d6b832c" colab={"base_uri": "https://localhost:8080/", "height": 54}
# A prediction head is a model that processes the output of the language model
# for a specific task.
# Prediction heads will look different depending on whether you're doing text classification
# Named Entity Recognition (NER), question answering or some other task.
# They should generate logits over the available prediction classes and contain methods
# to convert these logits to losses or predictions
# Here we use TextClassificationHead which receives a single fixed length sentence vector
# and processes it using a feed forward neural network. layer_dims is a list of dimensions:
# [input_dims, hidden_1_dims, hidden_2_dims ..., output_dims]
# Here by default we have a single layer network.
# It takes in a vector of length 768 (the default size of BERT's output).
# It outputs a vector of length 2 (the number of classes in the GermEval18 (coarse) dataset)
prediction_head = TextClassificationHead(num_labels=len(LABEL_LIST))
# + id="fUeLxU8yrWWD" colab_type="code" colab={}
# The language model and prediction head are coupled together in the Adaptive Model.
# This class takes care of model saving and loading and also coordinates
# cases where there is more than one prediction head.
# EMBEDS_DROPOUT_PROB is the probability that an element of the output vector from the
# language model will be set to zero.
EMBEDS_DROPOUT_PROB = 0.1
model = AdaptiveModel(
language_model=language_model,
prediction_heads=[prediction_head],
embeds_dropout_prob=EMBEDS_DROPOUT_PROB,
lm_output_types=["per_sequence"],
device=device)
# + [markdown] id="ITADnSE5rWWF" colab_type="text"
# ### Training
# + id="YrYk9jSkrWWG" colab_type="code" outputId="2c59dd46-a979-4f4a-a886-1956113c1195" colab={"base_uri": "https://localhost:8080/", "height": 87}
# Here we initialize a Bert Adam optimizer that has a linear warmup and warmdown
# Here you can set learning rate, the warmup proportion and number of epochs to train for
LEARNING_RATE = 2e-5
N_EPOCHS = 1
model, optimizer, lr_schedule = initialize_optimizer(
model=model,
device=device,
learning_rate=LEARNING_RATE,
n_batches=len(data_silo.loaders["train"]),
n_epochs=N_EPOCHS)
# + id="HO7h-g1jrWWL" colab_type="code" colab={}
# The Trainer handles the training loop.
# It will also trigger evaluation during training using the dev data
# and after training using the test data.
# Set N_GPU to a positive value if CUDA is available, to 0 if not
N_GPU = 1
trainer = Trainer(
model=model,
optimizer=optimizer,
data_silo=data_silo,
epochs=N_EPOCHS,
n_gpu=N_GPU,
lr_schedule=lr_schedule,
device=device,
)
# + id="3-mYPF4frWWQ" colab_type="code" outputId="b3afa6b6-cf5b-4f64-8b91-9eadce5f13aa" colab={"base_uri": "https://localhost:8080/", "height": 1000}
model = trainer.train()
# + id="2UujbSqDrWWU" colab_type="code" outputId="71ac3449-4be5-49d3-ccc1-3a1a18d54014" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# Test your model on a sample (Inference)
from farm.infer import Inferencer
from pprint import PrettyPrinter
infer_model = Inferencer(processor=processor, model=model, task_type="text_classification", gpu=True)
basic_texts = [
{"text": "Martin ist ein Idiot"},
{"text": "<NAME> spielt Handball in Berlin"},
]
result = infer_model.inference_from_dicts(dicts=basic_texts)
PrettyPrinter().pprint(result)
# + [markdown] id="FBNdJEhqrWWW" colab_type="text"
# # Switch to NER
# + [markdown] id="4ksJq_PRrWWX" colab_type="text"
# In a transfer learning paradigm, there is a core computation that is shared amongst all tasks. FARM's modular structure means that you can easily swap out different building blocks to make the same language model work for many different tasks.
#
# We can adapt the above text classification model to NER by simply switching out the processor and prediction head.
# + id="xESpWInRrWWX" colab_type="code" colab={}
# Import the new building blocks
from farm.data_handler.processor import NERProcessor
from farm.modeling.prediction_head import TokenClassificationHead
ml_logger.init_experiment(experiment_name="Public_FARM", run_name="Tutorial1_Colab_NER")
# + id="nN6BqqGcrWWa" colab_type="code" colab={}
# This processor will preprocess the data for the CoNLL03 NER task
ner_labels = ["[PAD]", "X", "O", "B-MISC", "I-MISC", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-OTH", "I-OTH"]
ner_processor = NERProcessor(tokenizer=tokenizer,
max_seq_len=128,
data_dir="../data/conll03-de",
label_list=ner_labels,
metric="seq_f1")
# + id="92f-b_acrWWc" colab_type="code" outputId="9548938f-928e-4191-e643-2eda00c72181" colab={"base_uri": "https://localhost:8080/", "height": 54}
# This prediction head is also a feed forward neural network but expects one
# vector per token in the input sequence and will generate a set of logits
# for each input
ner_prediction_head = TokenClassificationHead(num_labels=len(ner_labels))
# + id="FtZAcpN2rWWf" colab_type="code" outputId="4fb7029d-87e8-4c51-a817-66804a66df45" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# We can integrate these new pieces with the rest using this code
# It is pretty much the same structure as what we had above for text classification
BATCH_SIZE = 32
EMBEDS_DROPOUT_PROB = 0.1
LEARNING_RATE = 2e-5
N_EPOCHS = 1
N_GPU = 1
data_silo = DataSilo(
processor=ner_processor,
batch_size=BATCH_SIZE)
model = AdaptiveModel(
language_model=language_model,
prediction_heads=[ner_prediction_head],
embeds_dropout_prob=EMBEDS_DROPOUT_PROB,
lm_output_types=["per_token"],
device=device)
model, optimizer, lr_schedule = initialize_optimizer(
model=model,
learning_rate=LEARNING_RATE,
n_batches=len(data_silo.loaders["train"]),
n_epochs=N_EPOCHS,
device=device)
trainer = Trainer(
model=model,
optimizer=optimizer,
data_silo=data_silo,
epochs=N_EPOCHS,
n_gpu=N_GPU,
lr_schedule=lr_schedule,
device=device,
)
# + id="JiRA8NrbrWWi" colab_type="code" outputId="a52098b2-d27d-4e8c-b319-b6f43a27d720" colab={"base_uri": "https://localhost:8080/", "height": 1000}
model = trainer.train()
# + id="aiFk6UtWrWWl" colab_type="code" colab={}
| tutorials/1_farm_building_blocks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import necessary packages
# =========================
# +
# %config IPCompleter.greedy=True
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
sns.set()
# -
# Load train and test data
# ===================
df_train = pd.read_csv('titanic/train.csv')
df_test = pd.read_csv('titanic/test.csv')
# Inspect data information
# ====================
df_train.info()
# # Charts
# ## Scatter Plot
sns.catplot(x="Embarked", y="PassengerId", hue="Sex", kind="swarm", data=df_train);
# ## Box plot
# +
# Idea: if a passenger has no age, impute the mean age of their Pclass.
# First, inspect the ages grouped by Pclass.
df_train_group = df_train.groupby('Pclass')['Age']
df_train_group.head()
sns.boxplot(x="Pclass", y="Age", data=df_train)
# -
# ## Bar chart
#
# Display bar chart example
sns.catplot(x='Sex', col='Survived', kind='count', data=df_train);
# ## Cat plot
sns.catplot(x='Pclass', y='Survived', kind='point', data=df_train);
# # Machine Learning algorithm
# +
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, make_scorer
cl = RandomForestClassifier()
from sklearn.model_selection import train_test_split
# ?train_test_split
# -
| notes/titanic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
# +
import sympy
import math
import numpy as np
import matplotlib.pyplot as plt
# -
# # High-School Maths Exercise
# ## Getting to Know Jupyter Notebook. Python Libraries and Best Practices. Basic Workflow
# ### Problem 1. Markdown
# Jupyter Notebook is a very light, beautiful and convenient way to organize your research and display your results. Let's play with it for a while.
#
# First, you can double-click each cell and edit its content. If you want to run a cell (that is, execute the code inside it), use Cell > Run Cells in the top menu or press <kbd>Ctrl</kbd> + <kbd>Enter</kbd>.
#
# Second, each cell has a type. There are two main types: Markdown (which is for any kind of free text, explanations, formulas, results... you get the idea), and code (which is, well... for code :D).
#
# Let me give you a...
# #### Quick Introduction to Markdown
# ##### Text and Paragraphs
# There are several things that you can do. As you already saw, you can write paragraph text just by typing it. In order to create a new paragraph, just leave a blank line. See how this works below:
# ```
# This is some text.
# This text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).
#
# This text is displayed in a new paragraph.
#
# And this is yet another paragraph.
# ```
# **Result:**
#
# This is some text.
# This text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).
#
# This text is displayed in a new paragraph.
#
# And this is yet another paragraph.
#
# ##### Headings
# There are six levels of headings. Level one is the highest (largest and most important), and level 6 is the smallest. You can create headings of several types by prefixing the header line with one to six "#" symbols (this is called a pound sign if you are ancient, or a sharp sign if you're a musician... or a hashtag if you're too young :D). Have a look:
# ```
# # Heading 1
# ## Heading 2
# ### Heading 3
# #### Heading 4
# ##### Heading 5
# ###### Heading 6
# ```
#
# **Result:**
#
# # Heading 1
# ## Heading 2
# ### Heading 3
# #### Heading 4
# ##### Heading 5
# ###### Heading 6
#
# It is recommended that you have **only one** H1 heading - this should be the header of your notebook (or scientific paper). Below that, you can add your name or just jump to the explanations directly.
#
# ##### Emphasis
# You can create emphasized (stronger) text by using a **bold** or _italic_ font. You can do this in several ways (using asterisks (\*) or underscores (\_)). In order to "escape" a symbol, prefix it with a backslash (\). You can also strike through your text in order to signify a correction.
# ```
# **bold** __bold__
# *italic* _italic_
#
# This is \*\*not\*\* bold.
#
# I ~~didn't make~~ a mistake.
# ```
#
# **Result:**
#
# **bold** __bold__
# *italic* _italic_
#
# This is \*\*not\*\* bold.
#
# I ~~didn't make~~ a mistake.
#
# ##### Lists
# You can add two types of lists: ordered and unordered. Lists can also be nested inside one another. To do this, press <kbd>Tab</kbd> once (it will be converted to 4 spaces).
#
# To create an ordered list, just type the numbers. Don't worry if your numbers are wrong - Jupyter Notebook will create them properly for you. Well, it's better to have them properly numbered anyway...
# ```
# 1. This is
# 2. A list
# 10. With many
# 9. Items
# 1. Some of which
# 2. Can
# 3. Be nested
# 42. You can also
# * Mix
# * list
# * types
# ```
#
# **Result:**
# 1. This is
# 2. A list
# 10. With many
# 9. Items
# 1. Some of which
# 2. Can
# 3. Be nested
# 42. You can also
# * Mix
# * list
# * types
#
# To create an unordered list, type an asterisk, plus or minus at the beginning:
# ```
# * This is
# * An
# + Unordered
# - list
# ```
#
# **Result:**
# * This is
# * An
# + Unordered
# - list
#
# ##### Links
# There are many ways to create links but we mostly use one of them: we present links with some explanatory text. See how it works:
# ```
# This is [a link](http://google.com) to Google.
# ```
#
# **Result:**
#
# This is [a link](http://google.com) to Google.
#
# ##### Images
# They are very similar to links. Just prefix the image with an exclamation mark. The alt(ernative) text will be displayed if the image is not available. Have a look (hover over the image to see the title text):
# ```
# !["Alt text"](path/to/image.png "Title text")
# ```
#
# **Result:**
#
# !["Alt text"](path/to/image.png "Title text")
#
# If you want to resize images or do some more advanced stuff, just use HTML.
#
# Did I mention these cells support HTML, CSS and JavaScript? Now I did.
#
# ##### Tables
# These are a pain because they need to be formatted (somewhat) properly. Here's a good [table generator](http://www.tablesgenerator.com/markdown_tables). Just select File > Paste table data... and provide a tab-separated list of values. It will generate a good-looking ASCII-art table for you.
# ```
# | Cell1 | Cell2 | Cell3 |
# |-------|-------|-------|
# | 1.1 | 1.2 | 1.3 |
# | 2.1 | 2.2 | 2.3 |
# | 3.1 | 3.2 | 3.3 |
# ```
#
# **Result:**
#
# | Cell1 | Cell2 | Cell3 |
# |-------|-------|-------|
# | 1.1 | 1.2 | 1.3 |
# | 2.1 | 2.2 | 2.3 |
# | 3.1 | 3.2 | 3.3 |
#
# ##### Code
# Just use triple backtick symbols. If you provide a language, it will be syntax-highlighted. You can also use inline code with single backticks.
# <pre>
# ```python
# def square(x):
# return x ** 2
# ```
# This is `inline` code. No syntax highlighting here.
# </pre>
#
# **Result:**
# ```python
# def square(x):
# return x ** 2
# ```
# This is `inline` code. No syntax highlighting here.
# **Now it's your turn to have some Markdown fun.** In the next cell, try out some of the commands. You can just throw in some things, or do something more structured (like a small notebook).
# 1. # Heading
# 2. ## Heading
# 3. ### Heading
# 4. #### Heading
# 5. ##### Heading
# 6. ###### Heading
#
# New Paragraph:
# I would like to be **bold** but I am just *italic*.
# So I am ~~**bold**~~ *italic* and that's it.
#
# ```python
# def doMath(hard = true):
# if hard:
# studyHardForHours()
# else:
# goAndPlayOutside()
# ```
#
# [GitHub](https://github.com/StanDimitroff/Math-Concepts)
#
# 
# ### Problem 2. Formulas and LaTeX
# Writing math formulas has always been hard. But scientists don't like difficulties and prefer standards. So, thanks to <NAME> (a very popular computer scientist, who also invented a lot of algorithms), we have a nice typesetting system, called LaTeX (pronounced _lah_-tek). We'll be using it mostly for math formulas, but it has a lot of other things to offer.
#
# There are two main ways to write formulas. You could enclose them in single `$` signs like this: `$ ax + b $`, which will create an **inline formula**: $ ax + b $. You can also enclose them in double `$` signs `$$ ax + b $$` to produce $$ ax + b $$.
#
# Most commands start with a backslash and accept parameters either in square brackets `[]` or in curly braces `{}`. For example, to make a fraction, you typically would write `$$ \frac{a}{b} $$`: $$ \frac{a}{b} $$.
#
# [Here's a resource](http://www.stat.pitt.edu/stoffer/freetex/latex%20basics.pdf) where you can look up the basics of the math syntax. You can also search StackOverflow - there are all sorts of solutions there.
#
# You're on your own now. Research and recreate all formulas shown in the next cell. Try to make your cell look exactly the same as mine. It's an image, so don't try to cheat by copy/pasting :D.
#
# Note that you **do not** need to understand the formulas, what's written there or what it means. We'll have fun with these later in the course.
#
# 
# Equation of a line: $ y = ax + b $
#
# Roots of the quadratic equation $ ax^2 + bx + c = 0 $: $ x_{1,2} = \frac{-b\pm \sqrt{b^2 - 4ac}}{2a} $
#
# Taylor series expansion: $ f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \dots + \frac{f^{(n)}(a)}{n!}(x-a)^n + \dots $
#
# Binomial theorem: $ (x+y)^n = \binom{n}{0}x^ny^0 + \binom{n}{1}x^{n-1}y^1 + \dots + \binom{n}{n}x^0y^n = \sum\limits^n_{k=0} \binom{n}{k}x^{n-k}y^k $
#
# An integral (this one is a lot of fun to solve :D): $ \int^{+\infty}_{-\infty} e^{-x^2}dx = \sqrt{\pi} $
#
# A short matrix: $ \begin{pmatrix} 2 & 1 & 3 \\ 2 & 6 & 8 \\ 6 & 8 & 18 \end{pmatrix} $
#
# A long matrix: $ A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix}$
# ### Problem 3. Solving with Python
# Let's first do some symbolic computation. We need to import `sympy` first.
#
# **Should your imports be in a single cell at the top or should they appear as they are used?** There's not a single valid best practice. Most people seem to prefer imports at the top of the file though. **Note: If you write new code in a cell, you have to re-execute it!**
#
# Let's use `sympy` to give us a quick symbolic solution to our equation. First import `sympy` (you can use the second cell in this notebook):
# ```python
# import sympy
# ```
#
# Next, create symbols for all variables and parameters. You may prefer to do this in one pass or separately:
# ```python
# x = sympy.symbols('x')
# a, b, c = sympy.symbols('a b c')
# ```
#
# Now solve:
# ```python
# sympy.solve(a * x**2 + b * x + c)
# ```
#
# Hmmmm... we didn't expect that :(. We got an expression for $a$ because the library tried to solve for the first symbol it saw. This is an equation and we have to solve for $x$. We can provide it as a second parameter:
# ```python
# sympy.solve(a * x**2 + b * x + c, x)
# ```
#
# Finally, if we use `sympy.init_printing()`, we'll get a LaTeX-formatted result instead of a typed one. This is very useful because it produces better-looking formulas.
x, a, b, c = sympy.symbols('x a b c')
sympy.init_printing()
sympy.solve(a * x**2 + b * x + c, x)
# How about a function that takes $a, b, c$ (assume they are real numbers, you don't need to do additional checks on them) and returns the **real** roots of the quadratic equation?
#
# Remember that in order to calculate the roots, we first need to see whether the expression under the square root sign is non-negative.
#
# If $b^2 - 4ac > 0$, the equation has two real roots: $x_1, x_2$
#
# If $b^2 - 4ac = 0$, the equation has one real root: $x_1 = x_2$
#
# If $b^2 - 4ac < 0$, the equation has zero real roots
#
# Write a function which returns the roots. In the first case, return a list of 2 numbers: `[2, 3]`. In the second case, return a list of only one number: `[2]`. In the third case, return an empty list: `[]`.
def solve_quadratic_equation(a, b, c):
"""
Returns the real solutions of the quadratic equation ax^2 + bx + c = 0
"""
d = b**2-4*a*c # discriminant
if d < 0:
return []
elif d == 0:
x = (-b + math.sqrt(d)) / (2 * a)
return [x]
elif d > 0:
x1 = (-b + math.sqrt(d)) / (2 * a)
x2 = (-b - math.sqrt(d)) / (2 * a)
return [x1, x2]
# Testing: Execute this cell. The outputs should match the expected outputs. Feel free to write more tests
print(solve_quadratic_equation(1, -1, -2)) # [2.0, -1.0]
print(solve_quadratic_equation(1, -8, 16)) # [4.0]
print(solve_quadratic_equation(1, 1, 1)) # []
# **Bonus:** Last time we saw how to solve a linear equation. Remember that linear equations are just like quadratic equations with $a = 0$. In this case, however, division by 0 will throw an error. Extend your function above to support solving linear equations (in the same way we did it last time).
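# One possible solution to the bonus (a sketch; it treats the degenerate case
# $a = b = 0$ as having no listed roots, even though $c = 0$ would mean every $x$ is a solution):

```python
import math

def solve_equation(a, b, c):
    """Real solutions of ax^2 + bx + c = 0, handling the linear case a == 0."""
    if a == 0:
        if b == 0:
            return []          # degenerate: either no solution or every x (if c == 0)
        return [-c / b]        # linear equation bx + c = 0
    d = b ** 2 - 4 * a * c     # discriminant
    if d < 0:
        return []
    if d == 0:
        return [-b / (2 * a)]
    sq = math.sqrt(d)
    return [(-b + sq) / (2 * a), (-b - sq) / (2 * a)]

print(solve_equation(0, 2, -4))   # [2.0]
print(solve_equation(1, -1, -2))  # [2.0, -1.0]
```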
# ### Problem 4. Equation of a Line
# Let's go back to our linear equations and systems. There are many ways to define what "linear" means, but they all boil down to the same thing.
#
# The equation $ax + b = 0$ is called *linear* because the function $f(x) = ax+b$ is a linear function. We know that there are several ways to know what one particular function means. One of them is to just write the expression for it, as we did above. Another way is to **plot** it. This is one of the most exciting parts of maths and science - when we have to fiddle around with beautiful plots (although not so beautiful in this case).
#
# The function produces a straight line and we can see it.
#
# How do we plot functions in general? We know that functions take many (possibly infinitely many) inputs. We can't draw all of them. We could, however, evaluate the function at some points and connect them with tiny straight lines. If the points are too many, we won't notice - the plot will look smooth.
#
# Now, let's take a function, e.g. $y = 2x + 3$ and plot it. For this, we're going to use `numpy` arrays. This is a special type of array which has two characteristics:
# * All elements in it must be of the same type
# * All operations are **broadcast**: if `x = [1, 2, 3, 10]` and we write `2 * x`, we'll get `[2, 4, 6, 20]`. That is, all operations are performed at all indices. This is very powerful, easy to use and saves us A LOT of looping.
#
# There's one more thing: it's blazingly fast because all computations are done in C, instead of Python.
#
# First let's import `numpy`. Since the name is a bit long, a common convention is to give it an **alias**:
# ```python
# import numpy as np
# ```
#
# Import that at the top cell and don't forget to re-run it.
#
# Next, let's create a range of values, e.g. $[-3, 5]$. There are two ways to do this. `np.arange(start, stop, step)` will give us evenly spaced numbers with a given step, while `np.linspace(start, stop, num)` will give us `num` samples. You see, one uses a fixed step, the other uses a number of points to return. When plotting functions, we usually use the latter. Let's generate, say, 1000 points (we know a straight line only needs two but we're generalizing the concept of plotting here :)).
# ```python
# x = np.linspace(-3, 5, 1000)
# ```
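# To make the `arange` / `linspace` distinction concrete, here is a small side-by-side (illustrative values, separate from the plot below):

```python
import numpy as np

# arange: fixed step, end point excluded
stepped = np.arange(-3, 5, 2)    # array([-3, -1,  1,  3])

# linspace: fixed number of samples, end point included by default
sampled = np.linspace(-3, 5, 5)  # array([-3., -1.,  1.,  3.,  5.])
```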
# Now, let's generate our function variable
# ```python
# y = 2 * x + 3
# ```
#
# We can print the values if we like but we're more interested in plotting them. To do this, let's first import a plotting library. `matplotlib` is the most commonly used one and we usually give it an alias as well.
# ```python
# import matplotlib.pyplot as plt
# ```
#
# Now, let's plot the values. To do this, we just call the `plot()` function. Notice that the top-most part of this notebook contains a "magic string": `%matplotlib inline`. This tells Jupyter to display all plots inside the notebook. However, it's good practice to call `show()` after our plot is ready.
# ```python
# plt.plot(x, y)
# plt.show()
# ```
x = np.linspace(-3, 5, 1000)
y = 2 * x + 3
plt.plot(x, y)
plt.show()
# It doesn't look too bad but we can do much better. See how the axes don't look like they should? Let's move them to zero. This can be done using the "spines" of the plot (i.e. the borders).
#
# All `matplotlib` figures can contain many plots (subfigures). That's why when performing an operation, we have to specify a target axes object. There is a default one, which we can get with `plt.gca()` ("get current axes"); we usually call it `ax`.
# Let's save it in a variable (to avoid repeated calls and to make the code prettier). Now let's move the bottom and left spines to the origin $(0, 0)$ and hide the top and right ones.
# ```python
# ax = plt.gca()
# ax.spines["bottom"].set_position("zero")
# ax.spines["left"].set_position("zero")
# ax.spines["top"].set_visible(False)
# ax.spines["right"].set_visible(False)
# ```
#
# **Note:** All plot manipulations HAVE TO be done before calling `show()`. It's up to you whether they should be before or after the function you're plotting.
#
# This should look better now. We can, of course, do much better (e.g. remove the double 0 at the origin and replace it with a single one), but this is left as an exercise for the reader :).
x = np.linspace(-3, 5, 1000)
y = 2 * x + 3
ax = plt.gca()
ax.spines["bottom"].set_position("zero")
ax.spines["left"].set_position("zero")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
plt.plot(x, y)
plt.show()
# ### * Problem 5. Linearizing Functions
# Why is the line equation so useful? The main reason is that it's so easy to work with. Scientists actually try their best to linearize functions, that is, to turn non-linear functions into linear ones. There are several ways of doing this. One of them involves derivatives and we'll talk about it later in the course.
#
# A commonly used method for linearizing functions is through algebraic transformations. Try to linearize
# $$ y = ae^{bx} $$
#
# Hint: The inverse operation of $e^{x}$ is $\ln(x)$. Start by taking $\ln$ of both sides and see what you can do. Your goal is to transform the function into another, linear function. You can look up more hints on the Internet :).
# <p style="color: #d9534f">Write your result here.</p>
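# If you want to check a candidate linearization numerically, the sketch below assumes the standard approach (taking $\ln$ of both sides) and verifies that the result is linear in $x$ by fitting a straight line to it. Treat it as a verification tool rather than the solution itself; the values of `a` and `b` are arbitrary.

```python
import numpy as np

a, b = 2.0, 0.5
x = np.linspace(0.1, 5, 50)
y = a * np.exp(b * x)

# If ln(y) really is linear in x, a degree-1 least-squares fit should
# recover the slope b and the intercept ln(a) almost exactly.
slope, intercept = np.polyfit(x, np.log(y), 1)
```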
# ### * Problem 6. Generalizing the Plotting Function
# Let's now use the power of Python to generalize the code we created to plot. In Python, you can pass functions as parameters to other functions. We'll utilize this to pass the math function that we're going to plot.
#
# Note: We can also pass *lambda expressions* (anonymous functions) like this:
# ```python
# lambda x: x + 2
# ```
# This is a shorter way to write
# ```python
# def some_anonymous_function(x):
# return x + 2
# ```
#
# We'll also need a range of x values. We may also provide other optional parameters which will help set up our plot. These may include titles, legends, colors, fonts, etc. Let's stick to the basics now.
#
# Write a Python function which takes another function, x range and number of points, and plots the function graph by evaluating it at every point.
#
# **BIG hint:** If you want to use not only `numpy` functions for `f` but any one function, a very useful (and easy) thing to do, is to vectorize the function `f` (e.g. to allow it to be used with `numpy` broadcasting):
# ```python
# f_vectorized = np.vectorize(f)
# y = f_vectorized(x)
# ```
def plot_math_function(f, min_x, max_x, num_points):
x = np.linspace(min_x, max_x, num_points)
f_vectorized = np.vectorize(f)
y = f_vectorized(x)
ax = plt.gca()
ax.spines["bottom"].set_position("zero")
ax.spines["left"].set_position("zero")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
plt.plot(x, y)
plt.show()
plot_math_function(lambda x: 2 * x + 3, -3, 5, 1000)
plot_math_function(lambda x: -x + 8, -1, 10, 1000)
plot_math_function(lambda x: x**2 - x - 2, -3, 4, 1000)
plot_math_function(lambda x: np.sin(x), -np.pi, np.pi, 1000)
plot_math_function(lambda x: np.sin(x) / x, -4 * np.pi, 4 * np.pi, 1000)
# ### * Problem 7. Solving Equations Graphically
# Now that we have a general plotting function, we can use it for more interesting things. Sometimes we don't need to know what the exact solution is, just to see where it lies. We can do this by plotting the two functions around the "=" sign and seeing where they intersect. Take, for example, the equation $2x + 3 = 0$. The two functions are $f(x) = 2x + 3$ and $g(x) = 0$. Since they should be equal, the point of their intersection is the solution of the given equation. We don't need to bother marking the point of intersection right now, just showing the functions.
#
# To do this, we'll need to improve our plotting function once more. This time we'll take multiple functions and plot them all on the same graph. Note that we still provide a single $[x_{min}; x_{max}]$ range, shared by all functions.
#
# ```python
# vectorized_fs = [np.vectorize(f) for f in functions]
# ys = [vectorized_f(x) for vectorized_f in vectorized_fs]
# ```
def plot_math_functions(functions, min_x, max_x, num_points):
x = np.linspace(min_x, max_x, num_points)
vectorized_fs = [np.vectorize(f) for f in functions]
ys = [vectorized_f(x) for vectorized_f in vectorized_fs]
for y in ys:
plt.plot(x, y)
plt.show()
plot_math_functions([lambda x: 2 * x + 3, lambda x: 0], -3, 5, 1000)
plot_math_functions([lambda x: 3 * x**2 - 2 * x + 5, lambda x: 3 * x + 7], -2, 3, 1000)
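# The plots only show roughly where the curves cross. If we also want a numeric estimate of the crossing point, one simple approach is to scan $f(x) - g(x)$ for a sign change (a pure-Python sketch, independent of the plotting code above; the helper name is ours):

```python
def find_intersection(f, g, min_x, max_x, num_points=100_000):
    """Approximate an x where f(x) == g(x), or None if no sign change is found."""
    step = (max_x - min_x) / (num_points - 1)
    prev_x = min_x
    prev_d = f(prev_x) - g(prev_x)
    for i in range(1, num_points):
        x = min_x + i * step
        d = f(x) - g(x)
        if d == 0:
            return x
        if prev_d * d < 0:  # sign change: a crossing lies in (prev_x, x)
            return (prev_x + x) / 2
        prev_x, prev_d = x, d
    return None

# 2x + 3 = 0 should give x = -3/2
root = find_intersection(lambda x: 2 * x + 3, lambda x: 0, -3, 5)
```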
# This is also a way to plot the solutions of systems of equations, like the one we solved last time. Let's actually try it.
plot_math_functions([lambda x: (-4 * x + 7) / 3, lambda x: (-3 * x + 8) / 5, lambda x: (-x - 1) / -2], -1, 4, 1000)
# ### Problem 8. Trigonometric Functions
# We already saw the graph of the function $y = \sin(x)$. But, how do we define the trigonometric functions once again? Let's quickly review that.
#
# <img src="angle-in-right-triangle.png" style="max-height: 200px" alt="Right triangle" />
#
# The two basic trigonometric functions are defined as the ratio of two sides:
# $$ \sin(x) = \frac{\text{opposite}}{\text{hypotenuse}} $$
# $$ \cos(x) = \frac{\text{adjacent}}{\text{hypotenuse}} $$
#
# And also:
# $$ \tan(x) = \frac{\text{opposite}}{\text{adjacent}} = \frac{\sin(x)}{\cos(x)} $$
# $$ \cot(x) = \frac{\text{adjacent}}{\text{opposite}} = \frac{\cos(x)}{\sin(x)} $$
#
# This is fine, but using this "right-triangle" definition we can only calculate the trigonometric functions of angles up to $90^\circ$. We can do better. Let's now imagine a circle centered at the origin of the coordinate system, with radius $r = 1$. This is called a "unit circle".
#
# <img src="triangle-unit-circle.png" style="max-height: 300px" alt="Trigonometric unit circle" />
#
# We can now see exactly the same picture. The $x$-coordinate of the point on the circle corresponds to $\cos(\alpha)$ and the $y$-coordinate to $\sin(\alpha)$. What did we gain? We're now able to define the trigonometric functions for all angles up to $360^\circ$. After that, the same values repeat: these functions are **periodic**:
# $$ \sin(k \cdot 360^\circ + \alpha) = \sin(\alpha), k = 0, 1, 2, \dots $$
# $$ \cos(k \cdot 360^\circ + \alpha) = \cos(\alpha), k = 0, 1, 2, \dots $$
#
# We can, of course, use this picture to derive other identities, such as:
# $$ \sin(90^\circ + \alpha) = \cos(\alpha) $$
#
# A very important property of the sine and cosine is that they accept arguments in the range $(-\infty; \infty)$ and produce values in the range $[-1; 1]$. The other two functions accept arguments in the range $(-\infty; \infty)$ **except where their denominators are zero**, and produce values in the range $(-\infty; \infty)$.
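# Both claims are easy to spot-check numerically (a quick sanity check, assuming `numpy` as elsewhere in this notebook):

```python
import numpy as np

angles = np.linspace(0, 2 * np.pi, 100)

# sine repeats with period 2*pi ...
periodicity_holds = np.allclose(np.sin(angles), np.sin(angles + 2 * np.pi))

# ... and its values never leave [-1, 1]
range_holds = bool(np.all(np.abs(np.sin(angles)) <= 1))
```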
#
# #### Radians
# A degree is a geometric object, $1/360$th of a full circle. This is quite inconvenient when we work with angles. There is another, natural and intrinsic measure of angles. It's called the **radian** and can be written as $\text{rad}$ or without any designation, so $\sin(2)$ means "sine of two radians".
# 
#
# It's defined as *the central angle of an arc with length equal to the circle's radius* and $1\text{rad} \approx 57.296^\circ$.
#
# We know that the circle circumference is $C = 2\pi r$, therefore we can fit exactly $2\pi$ arcs of length $r$ in $C$. The angle corresponding to a full circle is $360^\circ$ or $2\pi\ \text{rad}$. Also, $\pi\ \text{rad} = 180^\circ$.
#
# (Some people prefer using $\tau = 2\pi$ to avoid confusion with always multiplying by 2 or 0.5 but we'll use the standard notation here.)
#
# **NOTE:** All trigonometric functions in `math` and `numpy` accept radians as arguments. In order to convert between radians and degrees, you can use the relations $\text{[deg]} = 180/\pi \cdot \text{[rad]}$ and $\text{[rad]} = \pi/180 \cdot \text{[deg]}$. This can be done using `np.rad2deg()` and `np.deg2rad()` respectively.
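# For example (a quick check of the conversion helpers and the radians convention):

```python
import numpy as np

assert np.isclose(np.deg2rad(180), np.pi)     # 180 deg == pi rad
assert np.isclose(np.rad2deg(np.pi / 2), 90)  # pi/2 rad == 90 deg

# sin expects radians, so 90 degrees must be converted first
assert np.isclose(np.sin(np.deg2rad(90)), 1.0)
```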
#
# #### Inverse trigonometric functions
# All trigonometric functions have their inverses. If you plug in, say, $\pi/4$ into the $\sin(x)$ function, you get $\sqrt{2}/2$. The inverse functions (also called arc-functions) take arguments in the interval $[-1; 1]$ and return the angle they correspond to. Take arcsine for example:
# $$ \arcsin(x) = y: \sin(y) = x $$
# $$ \arcsin\left(\frac{\sqrt{2}}{2}\right) = \frac{\pi}{4} $$
#
# Please note that this is NOT entirely correct. From the relations we found:
# $$\sin(x) = \sin(2k\pi + x), k = 0, 1, 2, \dots $$
#
# it follows that $\arcsin(x)$ has infinitely many values, spaced $2\pi$ radians apart:
# $$ \arcsin\left(\frac{\sqrt{2}}{2}\right) = \frac{\pi}{4} + 2k\pi, k = 0, 1, 2, \dots $$
#
# In most cases, however, we're interested in the first value (when $k = 0$). It's called the **principal value**.
#
# Note 1: There are inverse functions for all four basic trigonometric functions: $\arcsin$, $\arccos$, $\arctan$, $\text{arccot}$. These are sometimes written as $\sin^{-1}(x)$, $\cos^{-1}(x)$, etc. The two notations are completely equivalent.
#
# Just notice the difference between $\sin^{-1}(x) := \arcsin(x)$ and $\sin(x^{-1}) = \sin(1/x)$.
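# NumPy's arc-functions return exactly the principal value, which we can confirm (a small check, nothing new assumed):

```python
import numpy as np

value = np.sqrt(2) / 2
principal = np.arcsin(value)

# arcsin picks pi/4, even though pi/4 + 2*pi is an equally valid preimage
assert np.isclose(principal, np.pi / 4)
assert np.isclose(np.sin(principal + 2 * np.pi), value)
```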
# #### Exercise
# Use the plotting function you wrote above to plot the inverse trigonometric functions.
plot_math_function(lambda x: np.arcsin(x), -1, 1, 1000)  # arcsin is only defined on [-1; 1]
plot_math_function(lambda x: np.arccos(x), -1, 1, 1000)  # same domain for arccos
plot_math_function(lambda x: np.arctan(x), -3, 5, 1000)
def plot_circle(x_c, y_c, r):
"""
Plots the circle with center C(x_c; y_c) and radius r.
This corresponds to plotting the equation x^2 + y^2 = r^2
"""
circle = plt.Circle((x_c, y_c), r)
ax=plt.gca()
ax.add_patch(circle)
plt.axis('scaled')
plt.show()
plot_circle(0, 0, 2)
# ### ** Problem 9. Perlin Noise
# This algorithm has many applications in computer graphics and can serve to demonstrate several things... and help us learn about math, algorithms and Python :).
# #### Noise
# Noise is just random values. We can generate noise by just calling a random generator. Note that these are actually called *pseudorandom generators*. We'll talk about this later in this course.
# We can generate noise in however many dimensions we want. For example, if we want to generate a single dimension, we just pick N random values and call it a day. If we want to generate a 2D noise space, we can take an approach which is similar to what we already did with `np.meshgrid()`.
#
# $$ \text{noise}(x, y) = N, N \in [n_{min}, n_{max}] $$
#
# This function takes two coordinates and returns a single number N between $n_{min}$ and $n_{max}$. (This is what we call a "scalar field").
#
# Random variables are always connected to **distributions**. We'll talk about these a great deal but now let's just say that these define what our noise will look like. In the most basic case, we can have "uniform noise" - that is, each point in our little noise space $[n_{min}, n_{max}]$ will have an equal chance (probability) of being selected.
#
# #### Perlin noise
# There are many more distributions but right now we'll want to have a look at a particular one. **Perlin noise** is a kind of noise which looks smooth. It looks cool, especially if it's colored. The output may be tweaked to look like clouds, fire, etc. 3D Perlin noise is most widely used to generate random terrain.
#
# #### Algorithm
# ... Now you're on your own :). Research how the algorithm is implemented (note that this will require that you understand some other basic concepts like vectors and gradients).
#
# #### Your task
# 1. Research about the problem. See what articles, papers, Python notebooks, demos, etc. other people have created
# 2. Create a new notebook and document your findings. Include any assumptions, models, formulas, etc. that you're using
# 3. Implement the algorithm. Try not to copy others' work, rather try to do it on your own using the model you've created
# 4. Test and improve the algorithm
# 5. (Optional) Create a cool demo :), e.g. using Perlin noise to simulate clouds. You can even do an animation (hint: you'll need gradients not only in space but also in time)
# 6. Communicate the results (e.g. in the Softuni forum)
#
# Hint: [This](http://flafla2.github.io/2014/08/09/perlinnoise.html) is a very good resource. It can show you both how to organize your notebook (which is important) and how to implement the algorithm.
| High-School-Maths/High-School Maths Exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python Fundamentals
#
# ## Files
#
# ### Reading files
# +
# open the file for reading
arq1 = open('arquivos/arquivo1.txt', 'r')
# +
# read the whole file
print(arq1.read())
# +
# report the current position in the file (after read(), this equals the number of characters read)
print(arq1.tell())
# +
# go back to the beginning of the file
print(arq1.seek(0, 0))
# +
# read the first 10 characters
print(arq1.read(10))
# -
# ### Writing files
# +
# open the file for writing
arq2 = open('arquivos/arquivo1.txt', 'w')
# +
# since the file was opened for writing only, read commands raise an error
print(arq2.read())
# +
# write to the file
arq2.write('Testing file writing in Python.')
# -
arq2.close()
# +
# read the file we just wrote
arq2 = open('arquivos/arquivo1.txt', 'r')
# -
print(arq2.read())
# +
# append content
arq2 = open('arquivos/arquivo1.txt', 'a')
# -
arq2.write('\nAppending content')
# +
# close the file
arq2.close()
# +
# open the file in read mode
arq2 = open('arquivos/arquivo1.txt', 'r')
# +
# read the file
print(arq2.read())
# +
# return to the beginning of the file
arq2.seek(0, 0)
# -
# ### Automating the writing process
fileName = input('Enter the file name: ')
fileName = fileName + '.txt'
arq3 = open(fileName, 'w')
arq3.write('Adding text to the newly created file')
arq3.close()  # close the file so the write is flushed to disk
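# The open/close pattern above works, but a `with` block closes the file automatically, even if an error occurs mid-write (a common idiom; the temporary path here is only illustrative):

```python
import os
import tempfile

# use a temporary path so the example does not depend on any particular folder
path = os.path.join(tempfile.gettempdir(), "with_example.txt")

with open(path, "w") as f:
    f.write("Written inside a with block.")
# the file is already closed here -- no explicit close() needed

with open(path, "r") as f:
    content = f.read()
```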
| Cap04/Notebooks/.ipynb_checkpoints/Untitled-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Example: MNIST handwritten digit recognition with a CNN in PyTorch
import os
import torch
import torch.nn.functional as F
import torch.utils.data as Data
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
# +
# torch.manual_seed(1)
# hyperparameters
EPOCH = 1
BATCH_SIZE = 50
LR = 0.001  # learning rate
DOWNLOAD_MNIST = False
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # choose CPU or GPU
# +
# download the training data
import torchvision
if not os.path.exists("./mnist/") or not os.listdir('./mnist/'):
DOWNLOAD_MNIST = True
train_data = torchvision.datasets.MNIST(root='./mnist/', train=True,
transform=torchvision.transforms.ToTensor(), download=DOWNLOAD_MNIST)
# -
# plot
print(train_data.data.size())
print(train_data.targets.size())
plt.imshow(train_data.data[0].numpy(), cmap='gray')
plt.title("Digit {}".format(train_data.targets[0].numpy()))
plt.show()
# dataloader
train_loader = Data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
train_x_batch, train_y_batch=next(iter(train_loader))
print(train_x_batch.shape)
print(train_y_batch.shape)
train_y_batch
# prepare the test data
test_data = torchvision.datasets.MNIST(root='./mnist/', train=False)
# scale to [0, 1] so the test images match the ToTensor() transform applied to the training data
test_data_n = torch.unsqueeze(test_data.data.float(), dim=1) / 255.
test_label_n = test_data.targets
test_n = Data.TensorDataset(test_data_n, test_label_n)
test_loader = Data.DataLoader(dataset=test_n, batch_size=BATCH_SIZE, shuffle=True)
test_x_batch, test_y_batch=next(iter(test_loader))
print(test_x_batch.shape)
print(test_y_batch.shape)
# +
# build the CNN
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1,28*28
self.conv1=torch.nn.Sequential(
torch.nn.Conv2d(in_channels=1,out_channels=16,kernel_size=5, padding=2), # 1,28*28 ——> 16,28*28
torch.nn.ReLU(),
torch.nn.MaxPool2d(kernel_size=2) # 16,28*28 ——> 16, 14*14
)
self.conv2=torch.nn.Sequential(
torch.nn.Conv2d(in_channels=16,out_channels=32,kernel_size=5, padding=2), # 16,14*14 ——> 32,14*14
torch.nn.ReLU(),
torch.nn.MaxPool2d(kernel_size=2) # 32,14*14 ——> 32,7*7
)
self.fc1 = torch.nn.Linear(32*7*7, 10)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
        # tensors passed through forward also carry the batch dimension (dim=0)
        x = x.view(x.size()[0], -1)  # flatten for the fully connected layer
output = self.fc1(x)
return output
net = Net().to(DEVICE)  # move the model to CPU or GPU
print(net)
# -
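# The shape annotations in the comments above follow the standard output-size formula $\lfloor (n + 2p - k)/s \rfloor + 1$ for a convolution or pooling layer; a quick standalone check (no torch required):

```python
def conv_out(n, kernel, stride=1, padding=0):
    """Spatial output size of a conv/pool layer on an n x n input."""
    return (n + 2 * padding - kernel) // stride + 1

# conv1: 5x5 conv with padding 2 keeps 28x28, then 2x2 max-pool halves it
after_conv1 = conv_out(28, kernel=5, padding=2)           # 28
after_pool1 = conv_out(after_conv1, kernel=2, stride=2)   # 14

# conv2: same pattern takes 14x14 down to 7x7
after_conv2 = conv_out(after_pool1, kernel=5, padding=2)  # 14
after_pool2 = conv_out(after_conv2, kernel=2, stride=2)   # 7

# hence the fully connected layer's input size of 32 * 7 * 7
flattened = 32 * after_pool2 * after_pool2                # 1568
```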
# define the loss function and optimizer
criteria = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=LR)
# wrap the training loop in a function
def train(model, device, train_loader, optimizer, epoch):
    model.train()  # switch to training mode
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
output = model(data)
loss = criteria(output, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (batch_idx+1) % 30 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
# wrap the evaluation loop in a function
def test(model, device, test_loader):
    model.eval()  # switch to evaluation mode
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += criteria(output, target)
pred = torch.max(output, dim=1)[1]
correct += torch.sum((pred==target))
    correct = int(correct.item())  # .item() works for both CPU and GPU tensors
avg_loss = test_loss/len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
avg_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
for epoch in range(1, EPOCH + 1):
train(net, DEVICE, train_loader, optimizer, epoch)
    test(net, DEVICE, test_loader)  # evaluate on the test set, not the training set
| Pytorch_demos/basic_tutorail/2.6 CNN_MNIST.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Autograd in JAX and PyTorch
#
# - toc: true
# - badges: true
# - comments: true
# - author: <NAME>
# - categories: [ML, JAX, PyTorch]
# #### Basic Imports
import torch
from jax import grad
import jax.numpy as jnp
# #### Creating scalar variables in PyTorch
# `torch.autograd.Variable` is deprecated; a tensor with requires_grad=True is equivalent
x_torch = torch.tensor(1., requires_grad=True)
y_torch = torch.tensor(1., requires_grad=True)
# #### Creating scalar variables in JAX
x_jax = jnp.array(1.)
y_jax = jnp.array(1.)
# #### Defining a loss on scalar inputs
def loss(x, y):
return x*x + y*y
# #### Computing the loss on PyTorch input
l_torch = loss(x_torch, y_torch)
l_torch
# #### Computing the loss on JAX input
l_jax = loss(x_jax, y_jax)
# #### Computing the gradient on PyTorch input
l_torch.backward()
x_torch.grad, y_torch.grad
# #### Computing the gradient on JAX input
grad_loss = grad(loss, argnums=[0, 1])
grad_loss(x_jax, y_jax)
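# Both libraries report $\partial/\partial x\,(x^2 + y^2) = 2x = 2$ at $x = y = 1$. A framework-free central finite difference gives the same number (a sanity check only; this is not how either library computes gradients):

```python
def central_difference(f, x, h=1e-5):
    """Approximate f'(x) with a symmetric difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

# hold y fixed at 1 (contributing the constant 1.0) and differentiate w.r.t. x
grad_x = central_difference(lambda x: x * x + 1.0, 1.0)
```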
# #### Repeating the same procedure as above for both libraries, but using a vector function instead
def loss(theta):
return theta.T@theta
theta_torch = torch.autograd.Variable(torch.tensor([1., 1.]), requires_grad=True)
theta_torch
l = loss(theta_torch)
l
l.backward()
theta_torch.grad
theta_jax = jnp.array([1., 1.])
loss(theta_jax)
grad_loss = grad(loss, argnums=[0])
grad_loss(theta_jax)
| _notebooks/2022-02-09-autograd-pytorch-jax.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: nasa
# language: python
# name: nasa
# ---
# # Storing idx formatted images on disk as PNG and HDF5 file formats
# ## Comparison between storing the images using PNG vs using HDF5
# ### Load into numpy arrays
import idx2numpy as idx2np #to read the idx files
import numpy as np
import glob
import timeit
import os
import png
import csv
import h5py
import matplotlib.pyplot as plt
train_files = [file for file in glob.glob('MNIST_idx/train-*')] #first elem is train images path and second elem is train labels path
test_files = [file for file in glob.glob('MNIST_idx/t10k-*')] #first elem is test images path and second elem is test labels path
png_savePath = 'MNIST_png/'
hdf5_savePath = 'MNIST_hdf5/'
#MEMORY EXTENSIVE!
#both line loads the image and labels as np arrays into the memory
load_training_images = idx2np.convert_from_file(train_files[0])
load_training_labels = idx2np.convert_from_file(train_files[1])
total_images = load_training_images.shape[0] #get the total number of images
# ### PNG and HDF5 comparison
# +
RUN_PNG = True
RUN_HDF5 = True
#chunks of image sizes to be tested
test_imageSize = [total_images//5, 2*(total_images//5), 3*(total_images//5), 4*(total_images//5), total_images]
timeit_number = 1
# -
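# The benchmark below relies on `timeit.timeit`; the same pattern in miniature (statement string, repeat count, averaged wall time):

```python
import timeit

# time a toy statement a few times and average, exactly as the
# PNG/HDF5 measurements below do with their own statements
number = 10
total = timeit.timeit("sum(range(1000))", number=number)
average = total / number
```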
# #### PNG
if RUN_PNG:
#creates the directory if it does not exist yet
try:
os.mkdir(png_savePath)
print("Directory created successfully!")
except FileExistsError:
print("Directory already exists")
def save_file_png(arr, path, filename):
'''Stores a single numpy array as a png file on disk
Parameters
-----------
arr : image numpy array | numpy array
path : path to the disk to write the image file | string
filename : name of the file to be saved | string
'''
png.from_array(arr, mode='L').save(path+filename)
def labels_csv(path, filename, label):
'''Saves the path of an image and the corresponding label as a csv file
Parameters
----------
path : path of the image file | string
filename : name of the file | string
label : label of the image | integer
'''
#creates a csv file and write the head if the file does not exist yet.
if not os.path.isfile(path + 'info.csv'): #returns False if the file does not exist
with open(path + 'info.csv', 'a') as csvFile:
writer = csv.writer(csvFile)
row_head = ['File_path', 'Label'] #head
writer.writerow(row_head)
data_row = [path + filename, str(label)] #row to append
with open(path + 'info.csv', 'a') as csvFile: #open the csv file in append mode
writer = csv.writer(csvFile)
writer.writerow(data_row)
def PNG_write(file_number):
'''Write 'file_number' files to png on disk
Parameter
--------
file_number : number of files to be written | integer
'''
for counter in range(file_number):
filename = str(counter)
arr = load_training_images[counter]
label = load_training_labels[counter]
path = png_savePath
save_file_png(arr=arr, path=path, filename=filename) #write the image array as png to disk
labels_csv(label=label, filename=filename, path=path) #write the label array to csv on disk
# +
time_png = []
if RUN_PNG:
for x in test_imageSize:
#append the execution time values into the list
time_png.append(timeit.timeit("PNG_write(file_num)", setup="file_num=int(x)", number=timeit_number, globals=globals())/timeit_number)
# -
# #### HDF5
# HDF5, Hierarchical Data Format, consists of two types of objects.
# **1) Datasets ; 2) Groups**
#
# Datasets are multidimensional arrays, and a group consists of datasets **or** other groups. Within a dataset, the dimensions and the type of the array have to be uniform.
if RUN_HDF5:
#creates the directory if it does not exist yet
try:
os.mkdir(hdf5_savePath)
print("Directory created successfully!")
except FileExistsError:
print("Directory already exists")
# When saving the images as .png, a function that stores a single file has to be called once per image until all images are stored. In HDF5, however, we can store them all at once.
def many_hdf5(file_num):
'''Store multiple images to HDF5.
Parameters
----------
file_num : number of files to be written | integer
'''
images = load_training_images[:file_num]
labels = load_training_labels[:file_num]
file = h5py.File(hdf5_savePath + str(file_num) + '.h5', 'w') #open an hdf5 file
dataset_1 = file.create_dataset("images", np.shape(images), h5py.h5t.STD_U8BE, data=images)
dataset_2 = file.create_dataset("meta", np.shape(labels), h5py.h5t.STD_U8BE, data=labels)
file.close()
# +
time_hdf5 = []
if RUN_HDF5:
for x in test_imageSize:
#append the execution time values into the list
time_hdf5.append(timeit.timeit("many_hdf5(file_num)", setup="file_num=int(x)", number=timeit_number, globals=globals())/timeit_number)
# -
# ### Comparison Graph
x_axis = test_imageSize
y_axis_png = time_png
y_axis_hdf5 = time_hdf5
#Graph plotting
plt.plot(x_axis, y_axis_png, 'r--', label='PNG')
plt.plot(x_axis, y_axis_hdf5, 'g--', label='HDF5')
plt.title("PNG vs HDF5 storing time")
plt.ylabel("Time (s)")
plt.xlabel("No. of Images")
plt.legend()
plt.show()
| Store_Access_Images/storing_images.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.5.2
# language: julia
# name: julia-1.5
# ---
# # Parsing Tabular Data
# **Originally Contributed by**: <NAME>
# ## Introduction
# An example of how to parse tabular (CSV) files similar to the format established in
# the [RTS-GMLC](https://github.com/gridmod/rts-gmlc/RTS_Data/SourceData) and create a `System` using
# [PowerSystems.jl](https://github.com/NREL-SIIP/PowerSystems.jl)
# ### Dependencies
using SIIPExamples
using PowerSystems
using TimeSeries
using Dates
# ### Fetch Data
# PowerSystems.jl links to some test data that is suitable for this example.
# Let's download the test data
PowerSystems.download(PowerSystems.TestData; branch = "master") # *note* add `force=true` to get a fresh copy
base_dir = dirname(dirname(pathof(PowerSystems)));
# ### The tabular data format relies on a folder containing `*.csv` files and a `user_descriptors.yaml` file
# First, we'll read the tabular data
RTS_GMLC_DIR = joinpath(base_dir, "data", "RTS_GMLC");
rawsys = PowerSystems.PowerSystemTableData(
RTS_GMLC_DIR,
100.0,
joinpath(RTS_GMLC_DIR, "user_descriptors.yaml"),
timeseries_metadata_file = joinpath(RTS_GMLC_DIR, "timeseries_pointers.json"),
generator_mapping_file = joinpath(RTS_GMLC_DIR, "generator_mapping.yaml"),
)
# ### Create a `System`
# Next, we'll create a `System` from the `rawsys` data. Since a `System` is predicated on a
# time series resolution and the `rawsys` data includes both 5-minute and 1-hour resolution
# time series, we also need to specify which time series we want to include in the `System`.
# The `time_series_resolution` kwarg filters to only include time series with a matching resolution.
sys = System(rawsys; time_series_resolution = Dates.Hour(1));
horizon = 24;
interval = Dates.Hour(24);
transform_single_time_series!(sys, horizon, interval);
sys
# ---
#
# *This notebook was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
| notebook/2_PowerSystems_examples/04_parse_tabulardata.ipynb |
# ---
# jupyter:
# jupytext:
# formats: ipynb,md,jl:hydrogen
# text_representation:
# extension: .jl
# format_name: hydrogen
# format_version: '1.3'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.7.0-DEV
# language: julia
# name: julia-1.7
# ---
# %%
module Q
const Foo = NamedTuple{(:a,)}
const Bar = NamedTuple{(:a, :b)}
const Baz = NamedTuple{(:a, :b, :c)}
f(x::Foo) = println("Foo", x)
f(x::Bar) = println("Bar", x)
f(x::Baz) = println("Baz", x)
end
foo = (a = 1,)
bar = (a = 1, b = 2.0)
baz = (a = 1, b = 2.0, c = "three");
Q.f(foo)
Q.f(bar)
Q.f(baz);
# %%
baz = (a = 999, baz.b, baz.c)
baz
# %%
qoo = (a = Ref(1), b = Ref(2.0), c = Ref("three"))
# %%
qoo.a[] = 1000
qoo.b[] = 2000.0
qoo.c[] = "three thousand"
qoo
# %%
qux = (a = fill(1), b = fill(2.0), c = fill("three"))
# %%
qux.a[] = 1000
qux.b[] = 2000.0
qux.c[] = "three thousand"
qux
# %%
| 0002/NamedTuple dispatch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import json
# +
AIRLINES_FILE_CLEANED_CSV = "../data/cleaned/cleaned_earth_airlines.csv"
AIRLINES_FILE_CLEANED_JSON = "../data/cleaned/cleaned_earth_airlines.json"
EU_COUNTRIES_FILE_CLEANED_CSV = "../data/cleaned/eu_cleaned_countries_2019.csv"
EU_AIRLINES_FILE_CLEANED_CSV = "../data/cleaned/eu_cleaned_airlines_2019.csv"
EU_AIRLINES_FILE_CLEANED_JSON = "../data/cleaned/eu_cleaned_airlines_2019.json"
# -
# # Read Airline Data
all_airlines_df = pd.read_csv(AIRLINES_FILE_CLEANED_CSV,
index_col = 0,
header=0)
eu_countries_df = pd.read_csv(EU_COUNTRIES_FILE_CLEANED_CSV,
index_col = 0,
header=0)
with open(AIRLINES_FILE_CLEANED_JSON) as f:
all_airlines = json.load(f)
# # Airline Data Cleaning
# +
eu_countries_list = eu_countries_df['country'].values
eu_airlines_df = all_airlines_df[all_airlines_df['Country'].isin(eu_countries_list)]
# -
eu_airlines_df.reset_index(drop=True, inplace=True)
eu_airlines_df.head()
# # Create JSON Data
# +
eu_airlines_json = {}
eu_airlines_count = len(eu_airlines_df)
for airline_index in range(eu_airlines_count):
airline_data = eu_airlines_df.iloc[airline_index]
airline_code = airline_data["IATA"]
eu_airlines_json[airline_code] = all_airlines[airline_code]
# -
# # Airline Visualization
no_of_countries = len(eu_airlines_df["Country"].value_counts())
print("number of countries:", no_of_countries)
plt.close("all")
plt.figure()
name_plt = eu_airlines_df["Country"].value_counts().plot(kind='bar',figsize=(7,2))
name_plt.set_xlabel("Country")
name_plt.set_ylabel("No. of Airlines")
plt.show()
# # Save Airline Data
eu_airlines_df.to_csv(EU_AIRLINES_FILE_CLEANED_CSV)
with open(EU_AIRLINES_FILE_CLEANED_JSON, 'w') as fp:
json.dump(eu_airlines_json, fp)
| notebooks/eu_data_cleaning_airlines_2019.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 
#
# <a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Finteresting-problems&branch=main&subPath=notebooks/word-decomposition-tree.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
# + [markdown] colab_type="text" id="IhPcpJ74T8FC"
# # Word Decomposition Tree
#
# In this notebook we will investigate one of the [MIT Technology Review Puzzle Corner](https://cs.nyu.edu/~gottlieb/tr/back-issues/) puzzles from [this article](https://www.theguardian.com/science/2020/feb/10/can-you-solve-it-are-you-smart-enough-for-mit):
#
# *The 9-letter word SPLATTERS has an intriguing property. You can remove a single letter to make an 8-letter word, without rearranging the other letters. You can remove another letter to make a 7-letter word, and then a 6-letter word, and so on down to a single-letter word.*
#
# *At no stage is the order of the letters rearranged.*
#
# *Find two other 9-letter words that share the same property.*
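One known chain for SPLATTERS can be checked mechanically: each step must be the previous word with exactly one letter deleted and no reordering. A minimal sketch, independent of the spellchecker used below (the chain shown is one known solution, not claimed to be the only one):

```python
def is_one_letter_deletion(parent, child):
    """True if child equals parent with exactly one letter removed, order preserved."""
    return len(child) == len(parent) - 1 and any(
        parent[:i] + parent[i + 1:] == child for i in range(len(parent))
    )

# the classic chain for the puzzle word
chain = ['splatters', 'platters', 'platter', 'latter', 'later', 'late', 'ate', 'at', 'a']
assert all(is_one_letter_deletion(a, b) for a, b in zip(chain, chain[1:]))
```

The same predicate is what `find_child_words` below effectively tests, with a dictionary check layered on top.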
# + [markdown] colab_type="text" id="BwMwZ6JMWGAf"
# ### Python Libraries
#
# We will use a few different Python libraries:
#
# `pyspellchecker` to test if fragments are words (this needs to be installed)
#
# `pandas` for creating and manipulating dataframes
#
# `numpy` to find values in a dataframe
#
# `graphviz` for creating the tree visualization
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="_BqBmyVgV0CD" outputId="31b48eb7-d759-4ee7-e208-8641307c9024"
# !pip install pyspellchecker
from spellchecker import SpellChecker
spell = SpellChecker()
import pandas as pd
import numpy as np
from graphviz import Digraph
print('Libraries imported, we are ready to go.')
# + [markdown] colab_type="text" id="bwoPUC-BegD2"
# ## Defining Functions
#
# Let's define a function for checking if a particular word fragment is a word, and another function for finding the "children" of a word by removing a letter from each possible position.
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="hxh1O1rKem1W" outputId="dac9c898-c308-4bf7-db0d-a58ff33e7816"
def is_word(word_fragment):
    if len(word_fragment) == 1:  # despite what the dictionary says, there should only be two single-letter English words
        if word_fragment == 'a' or word_fragment == 'i':
            is_word_boolean = True
        else:
            is_word_boolean = False
    else:
        is_word_boolean = spell.correction(word_fragment) == word_fragment  # if the correct spelling is the same as the word, then it's a word
    return is_word_boolean

def find_child_words(word):
    child_word_list = []
    for index in range(len(word)):
        child_word = word[:index] + word[index+1:]
        if is_word(child_word):
            child_word_list.append(child_word)
    return child_word_list
print('Functions defined')
# + [markdown] colab_type="text" id="jjhECHeveHxT"
# ## Decomposing a Word
#
# Starting with the suggested word `splatters`, we will create a dataframe of "child words" (along with the length of each child word) and their "parent words".
# + colab={"base_uri": "https://localhost:8080/", "height": 359} colab_type="code" id="TpEIvOKXeLlM" outputId="64ab5374-db2c-4102-ed67-6608032bc37b"
starting_word = 'splatters'
#starting_word = 'sparkling'
#starting_word = 'startling'
df = pd.DataFrame(columns=['Length', 'Word', 'Parent'])
def add_to_word_df(df, child_word, parent_word):
    new_row = pd.DataFrame([{'Length': len(child_word), 'Word': child_word, 'Parent': parent_word}])
    return pd.concat([df, new_row], ignore_index=True)  # DataFrame.append was removed in pandas 2.0

df = add_to_word_df(df, starting_word, 'NA')
for iterator in range(len(starting_word), 1, -1):  # count down iterator
    parent_word_list = df[df['Length']==iterator]['Word'].values.tolist()  # for each word that is iterator length
    for parent_word in parent_word_list:
        child_word_list = find_child_words(parent_word)
        for child_word in child_word_list:
            df = add_to_word_df(df, child_word, parent_word)
df.head(10)
# + [markdown] colab_type="text" id="gfPmZLICfdCx"
# # Manipulating the Dataframe
#
# Now that we've created a data set of possible words and their "children", let's manipulate it so that we have just the unique child words and associated lists of their parent words.
#
# We'll use this to create the tree visualization later.
# + colab={"base_uri": "https://localhost:8080/", "height": 359} colab_type="code" id="rya190sYNHel" outputId="e3a0c63b-8685-48bb-a4c3-e2d46ee841b8"
df_unique_multiple_parents = df.drop_duplicates('Word') # remove duplicate child words
df_unique_multiple_parents.reset_index(drop=True, inplace=True) # re-number the rows
for row_number, row in df_unique_multiple_parents.iterrows():
    # add a list of parent words for each child word; assign with .at so the
    # dataframe itself is updated (mutating the iterrows copy has no effect)
    df_unique_multiple_parents.at[row_number, 'Parent'] = df[df['Word'] == row['Word']]['Parent'].drop_duplicates().tolist()
df_unique_multiple_parents.head(10)
# + [markdown] colab_type="text" id="eanV260cgAtL"
# # Visualizing the Word Tree
#
# From this new dataframe of unique child words and lists of their parent words, we can create a word tree.
# + colab={"base_uri": "https://localhost:8080/", "height": 848} colab_type="code" id="mGF-yh5AW-Ov" outputId="f70327ba-8920-41a4-b88e-3e48c95164e5"
dot = Digraph()
for row_index, row in df_unique_multiple_parents.iterrows():
    dot.node(str(row_index), row['Word'])
    if row_index == 0:  # the starting word has no parent, so no edges to draw
        continue
    for parent_word in row['Parent']:
        parent_index = np.where(df_unique_multiple_parents['Word'] == parent_word)[0][0]
        dot.edge(str(parent_index), str(row_index))
# display the tree
dot
# + [markdown] colab_type="text" id="Ax9TNgyygMGB"
# # Conclusion
#
# According to the [solutions page](https://www.theguardian.com/science/2020/feb/10/did-you-solve-it-are-you-smart-enough-for-mit), the two other nine-letter words that share this "decomposable" property are `sparkling` and `startling`. Try this out using those two words.
#
# As well, there is a [Snopes article](https://www.snopes.com/fact-check/nine-letters-puzzle/) discussing other possible solutions, including some ten-letter possibilities.
#
# It might be an interesting challenge to use some of the above code to check all possible nine-letter English words to see which are decomposable in this manner.
#
# For now, though, here is all the code together in one cell.
# +
#starting_word = 'splatters'
starting_word = 'sparkling'
#starting_word = 'startling'
#starting_word = 'splittings'
df = pd.DataFrame(columns=['Length', 'Word', 'Parent'])
def add_to_word_df(df, child_word, parent_word):
    new_row = pd.DataFrame([{'Length': len(child_word), 'Word': child_word, 'Parent': parent_word}])
    return pd.concat([df, new_row], ignore_index=True)  # DataFrame.append was removed in pandas 2.0

df = add_to_word_df(df, starting_word, 'NA')
for iterator in range(len(starting_word), 1, -1):  # count down iterator
    parent_word_list = df[df['Length']==iterator]['Word'].values.tolist()  # for each word that is iterator length
    for parent_word in parent_word_list:
        child_word_list = find_child_words(parent_word)
        for child_word in child_word_list:
            df = add_to_word_df(df, child_word, parent_word)
df_unique_multiple_parents = df.drop_duplicates('Word') # remove duplicate child words
df_unique_multiple_parents.reset_index(drop=True, inplace=True) # re-number the rows
for row_number, row in df_unique_multiple_parents.iterrows():
    # add a list of parent words for each child word; assign with .at so the
    # dataframe itself is updated (mutating the iterrows copy has no effect)
    df_unique_multiple_parents.at[row_number, 'Parent'] = df[df['Word'] == row['Word']]['Parent'].drop_duplicates().tolist()
dot = Digraph()
for row_index, row in df_unique_multiple_parents.iterrows():
word = row['Word']
if row_index == 0: # do this for the first row, which has no parent
dot.node(str(row_index), word)
else:
dot.node(str(row_index), word)
parent_list = row['Parent']
for parent_word in parent_list:
parent_index = np.where(df_unique_multiple_parents['Word'] == parent_word)[0][0]
dot.edge(str(parent_index), str(row_index))
# display the tree
dot
# -
# [](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
| notebooks/word-decomposition-tree.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # CQF - Assignment for Module 3
# Autor: <NAME>
# Email: <EMAIL>
# <center>**Abstract**</center>
# My text
# ## Introduction
# A standard (vanilla) option is a contract that gives the holder the right to buy or sell an underlying asset at a specified price on a specified date. The payoff is related to the underlying asset price. A call option gives the holder the right to buy an underlying asset at the strike price. At the expiry date the value of the option is known. For a call it is the maximum of zero and the asset price (S) minus the strike price (E), max(0, S-E). For a put the value at expiry is max(0, E-S).
# A binary, sometimes also called a digital, option belongs to the group of exotic options. Its payoff is not linearly related to the price of the underlying. The binary call option pays a predetermined amount if the price of the underlying asset exceeds the strike price on the expiration date. For a put the payoff is paid if the price of the asset is below the strike price.
# In the following analysis I will focus on a binary **call** option of the type "cash-or-nothing". That means that at expiry a fixed amount of cash is transferred. Furthermore, I assume for simplicity that the contract pays 1 unit of cash at expiry.
# ## Setting up the model
# The expected value of the discounted payoff under the risk neutral density $\mathbb{Q}$ is defined as
# $$ V(S,t) = e^{-r(T-t)}\mathbb{E}^\mathbb{Q}[\text{Payoff}(S_T)]$$
# Today's stock price is denoted by $S_0$, the strike price by $E$, and the time to expiry $T-t$ is measured in years. The volatility is denoted by $\sigma$ and the risk-free rate $r$ is assumed to be constant.
# - $S_0 = 100$
# - $E = 100$
# - $T-t = 1$
# - $\sigma$ = 20% = 0.2
# - $r$ = 5% = 0.05
# We are asked to use the Euler-Maruyama scheme to simulate the underlying stock price. Therefore I set one year to 252 days and $\delta t$ to 1/252.
# The payoff of the binary option is described by the Heaviside function
# $$
# \mathcal{H}(x)=
# \begin{cases}
# 1 & \text{$x>0$}\\
# 0 & \text{$x\leq0$}
# \end{cases}
# $$
# For the case of our binary call option the Heaviside function can be rewritten as
# $$
# \text{Payoff} = \mathcal{H}(x)=
# \begin{cases}
# 1 & \text{$S(T)>E$}\\
# 0 & \text{otherwise}
# \end{cases}
# $$
S_0 <- 100
E <- 100
maturity <- 1
days <- 252
dt <- 1 / days
sigma <- 0.2
r <- 0.05
print(E)
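The Euler–Maruyama Monte Carlo pricing described above can be sketched concretely. The scheme is language-agnostic, so the sketch below uses Python for illustration even though this notebook is written in R; the parameter values and the 252-step grid mirror the assumptions listed above.

```python
import numpy as np

def binary_call_mc(S0, E, r, sigma, T, n_steps=252, n_paths=20000, seed=42):
    """Monte Carlo price of a cash-or-nothing binary call paying 1 unit of cash at expiry."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        S = S + r * S * dt + sigma * S * dW  # Euler-Maruyama step for dS = r*S*dt + sigma*S*dW
    payoff = (S > E).astype(float)  # Heaviside payoff: pays 1 if S(T) > E
    return np.exp(-r * T) * payoff.mean()

price = binary_call_mc(S0=100, E=100, r=0.05, sigma=0.2, T=1.0)
print(round(price, 4))
```

With these parameters the estimate should sit near the closed-form value $e^{-r(T-t)}N(d_2)$, roughly 0.53.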
| CQF_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Mining Melon's Twitter Feed
# ### Set-up and loading data
import pandas as pd
import csv
import os
import numpy as np
# step up directory to find data
#owd=os.getcwd()
os.chdir("..") # to move up directory
os.chdir("..") # to move up directory
path = os.getcwd()+"/data/boogie2988.csv"
data = pd.read_csv(path)
data.head()
print(data.dtypes)
# ### Natural Language Processing
data['date'] = pd.to_datetime(data['date'], format='%Y-%m-%d')
data['retweet_date'] = pd.to_datetime(data['retweet_date'], format='%Y%m%d')
import nltk
nltk.download('punkt')
nltk.download('stopwords')
# initial set up and word tokenisation
frame = data['tweet'].str.lower().str.cat(sep=' ')
words = nltk.tokenize.word_tokenize(frame)
word_dist = nltk.FreqDist(words)
print(word_dist)
# text cleaning using nltk's pre-made English stopword list;
# store it as a set for fast membership checks during filtering
stopwords = set(nltk.corpus.stopwords.words('english'))
# +
dict_filter = lambda word_dist, stopwords: \
    dict((word, word_dist[word])
         for word in word_dist
         if word not in stopwords
         and word.isalpha()
         and word not in ['http', 'https'])
filtered_word_freq = dict_filter(word_dist, stopwords)
# -
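The tokenize → count → filter pipeline above can be illustrated on a toy corpus. This sketch uses only the standard library, with a small hand-written stopword set standing in for nltk's English list:

```python
from collections import Counter

toy_stopwords = {'the', 'a', 'is', 'and', 'to'}  # stand-in for nltk's stopword list

def word_freq(text, stopwords):
    # naive whitespace tokenization; keep alphabetic, non-stopword tokens
    words = text.lower().split()
    return Counter(w for w in words if w.isalpha() and w not in stopwords)

freq = word_freq('the cat and the dog chase the cat', toy_stopwords)
print(freq.most_common(2))
```

The real pipeline swaps in `nltk.tokenize.word_tokenize` and `FreqDist`, which handle punctuation and contractions far better than `split()`.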
# convert to pandas dataframe
rslt = pd.DataFrame(list(filtered_word_freq.items()),
columns=['word', 'freq'])
rslt.head()
sorted_rslt = rslt.sort_values(by='freq', ascending=False)
sorted_rslt = sorted_rslt[:20]
# ### Visualising the data
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
# %matplotlib inline
#
# +
# most commonly used words
sorted_rslt.reset_index(level=0, inplace=True)
plt.rcParams["figure.figsize"] = (15, 12)
bar = sns.barplot(x = sorted_rslt['word'],
y = sorted_rslt['freq'],
data = sorted_rslt,
palette = 'Blues_d')
plt.show()
# -
# save to file
fig = bar.get_figure()
fig.savefig("img/output.png")
| code/boogie/BoogieTweets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import pandas as pd
import cv2, os, time
import numpy as np
import tensorflow as tf
tf.__version__
# ### Loading tflite model using interpreter
interpreter = tf.lite.Interpreter(model_path = 'lite-model_movenet_singlepose_lightning_3.tflite')
interpreter.allocate_tensors()  # allocate input/output tensors; required before any inference call
# +
cap = cv2.VideoCapture(0)

# the model's input/output tensor details do not change between frames, so fetch them once
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('Frame', frame)

    # resize with padding to the 192x192 input the lightning model expects
    img = frame.copy()
    img = tf.image.resize_with_pad(np.expand_dims(img, axis=0), 192, 192)
    input_img = tf.cast(img, dtype=tf.float32)

    interpreter.set_tensor(input_details[0]['index'], np.array(input_img))
    interpreter.invoke()
    output_result = interpreter.get_tensor(output_details[0]['index'])
    print(output_result)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
# -
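The `output_result` printed above is, per the published MoveNet model description, a `[1, 1, 17, 3]` array of `(y, x, score)` keypoints normalized to `[0, 1]`. A hedged, numpy-only sketch of turning that into pixel coordinates for drawing (the 0.3 confidence threshold is an arbitrary choice, not part of the model):

```python
import numpy as np

def decode_keypoints(output, frame_h, frame_w, threshold=0.3):
    """Convert normalized (y, x, score) keypoints to (x_px, y_px, score) tuples."""
    kps = np.squeeze(output)  # (1, 1, 17, 3) -> (17, 3)
    points = []
    for y, x, score in kps:
        if score >= threshold:  # drop low-confidence keypoints
            points.append((int(x * frame_w), int(y * frame_h), float(score)))
    return points
```

Inside the capture loop, each returned `(x_px, y_px, score)` could then be drawn with `cv2.circle(frame, (x_px, y_px), 4, (0, 255, 0), -1)`.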
| Movenet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="NCpnYj2bfXvG"
#
# # Training KoGPT2 on the Wellness Psychological Counseling Dataset
# + [markdown] id="EN1oD-YEfukE"
# ## 1. Mount Google Drive
# - Link Colab to the Google Drive directory where the model files and training data are stored.
# - From the top-left menu, select Runtime -> Change runtime type -> Hardware accelerator -> GPU, then save.
# + [markdown] id="7TlLDiAJf0Zz"
# ### 1.1 Check the GPU connection
# + id="Q1qPeOGVfkb_" executionInfo={"status": "ok", "timestamp": 1603365409280, "user_tz": -540, "elapsed": 1368, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="b5dd12e3-0244-4924-c062-cebb427de4b5" colab={"base_uri": "https://localhost:8080/", "height": 377}
# !nvidia-smi
# + [markdown] id="9tW2_JPaf79o"
# ### 1.2 Mount Google Drive
# Run the code below, open the URL it prints, and enter the authentication code.
# + id="7voF7JHtf4J5" executionInfo={"status": "ok", "timestamp": 1603365468552, "user_tz": -540, "elapsed": 60627, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="703fb3d7-16a9-4a81-bb81-e073a196b3de" colab={"base_uri": "https://localhost:8080/", "height": 35}
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="T4MqQlXugAdI"
# **Check the dialogLM path under the Colab directory**
#
#
#
# + id="mdqgpGkLgGoc" executionInfo={"status": "ok", "timestamp": 1603365469663, "user_tz": -540, "elapsed": 61728, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="3fa9b22d-71e3-45c4-e5b3-8a323f3faa3b" colab={"base_uri": "https://localhost:8080/", "height": 71}
# !ls drive/'My Drive'/'WEB_Ask_06devbros'/'ai'/'chatbot'
# + [markdown] id="wA36dTxFgMA8"
# **Install the required packages**
# + id="OdbnMf8NjSHN" executionInfo={"status": "ok", "timestamp": 1603365483483, "user_tz": -540, "elapsed": 75539, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="a6850530-9915-485b-d37c-a46dd8a108ea" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# !pip install -r drive/'My Drive'/'WEB_Ask_06devbros'/'ai'/'chatbot'/requirements.txt
# + id="sxX_AhGI8PSb" executionInfo={"status": "ok", "timestamp": 1603365486101, "user_tz": -540, "elapsed": 78148, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="f6284d13-8ec8-4193-fb9e-0f7487a32015" colab={"base_uri": "https://localhost:8080/", "height": 161}
# !pip install --upgrade tokenizers==0.8.1.rc1
# + [markdown] id="i51bCZyCjZzy"
# ## KoGPT-2 Training
# + [markdown] id="UFSAkW6MkZb7"
# **Add the project path**
# + id="BRNM3Al6kdCI" executionInfo={"status": "ok", "timestamp": 1603365486103, "user_tz": -540, "elapsed": 78147, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}}
import sys
sys.path.append('/content/drive/My Drive/WEB_Ask_06devbros/ai/chatbot')
# + id="75HEefA54Uvw" executionInfo={"status": "ok", "timestamp": 1603365486104, "user_tz": -540, "elapsed": 78140, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="e6b41a15-eacf-444c-d8f6-f41d04f04ea6" colab={"base_uri": "https://localhost:8080/", "height": 55}
print(sys.path)
# + [markdown] id="KrFfFlV8jfcg"
# ### 2.1 import package
# + id="88ApclaNjVvh" executionInfo={"status": "ok", "timestamp": 1603365494199, "user_tz": -540, "elapsed": 86231, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}}
import os
import numpy as np
from tqdm import tqdm
import torch
from torch.utils.data import dataloader
from dataloader.wellness import WellnessAutoRegressiveDataset
from model.kogpt2 import DialogKoGPT2
# + id="iHvrtMJaoV_Z" executionInfo={"status": "ok", "timestamp": 1603365494205, "user_tz": -540, "elapsed": 86230, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="4d18b9cc-5f39-4796-f561-f68cc5386502" colab={"base_uri": "https://localhost:8080/", "height": 35}
torch.cuda.is_available()
# + id="Xay8BpzSUa4u" executionInfo={"status": "ok", "timestamp": 1603365494206, "user_tz": -540, "elapsed": 86222, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="a639fc52-b445-4f4b-cb16-c623577058ac" colab={"base_uri": "https://localhost:8080/", "height": 377}
# !nvidia-smi
# + [markdown] id="mZuWyjJEjxsF"
# ### KoGPT2 Training for Wellness dataset
# + id="Y9vwUZQ9j0DN" executionInfo={"status": "ok", "timestamp": 1603378515193, "user_tz": -540, "elapsed": 12189356, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}} outputId="41a0d0a8-ede7-4f0a-8dfc-40fa090aedc9" colab={"base_uri": "https://localhost:8080/", "height": 371, "referenced_widgets": ["7a985428cde1424d9e25b33842d9724b", "8c8968c6301f44b39e9162dc711fa5ef", "fffa0e0aec284721b1c280e432fa9e11", "f90f970ca3e44dd497f1a45c5f517180", "<KEY>", "<KEY>", "52a92cf76d564bb8bad63d92cc443a8b", "<KEY>", "3e9f3ff5ec6a4fe5b467539d97257f8e", "8c9d44a26fc4428f8ce078e591d1e5b4", "0e8f346c6e4246c38a0770b6b5feb1e7", "<KEY>", "b8416de965e34c52a6fc7aecd9a0f03a", "30e635ee24b749dc83ad8f11cf136a8c", "446526688efe4b11a40734a6cd56aff2", "<KEY>", "<KEY>", "33e1a6b422b24e0dbd008e37789b135e", "9be6e365ee5c432db31908da48573970", "<KEY>", "ec14eb252e6648a4a0639785fc994b27", "588afe0498394112810ca37b376e0c18", "<KEY>", "3edc62efc8a94148b9ca01bf0b1ae818", "<KEY>", "<KEY>", "b009751e7a3b482891c83435d256c3a4", "<KEY>", "de282477c47343618bcea0dbc3321def", "c0501afdfbef4700a1255885ddfd7c4c", "<KEY>", "<KEY>", "93f6eb7d12a143c58f67a032b3d5de9a", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "6ceded0174fc47a6aac1288c365ee40a", "<KEY>", "<KEY>", "9287b3b945984e7aa669d9012f7de351", "7550b8dafac94be98b63425629793e46", "<KEY>", "<KEY>", "4020e89772e34f0e8edd40050a801657", "1967fff3964148eaa6d825ca496e72d8", "c840da01a9dc476abfcec34a23f2516e", "<KEY>"]}
root_path='/content/drive/My Drive/WEB_Ask_06devbros/ai/chatbot'
data_path = f"{root_path}/data/chatbot_wellness_dialog_for_autoregressive.txt"
checkpoint_path =f"{root_path}/checkpoint"
load_ckpt_path = f"{checkpoint_path}/kogpt2-wellnesee-auto-regressive-20201022.pth"
save_ckpt_path = f"{checkpoint_path}/kogpt2-wellnesee-auto-regressive-20201022-add-chatbotdata.pth"
n_epoch = 3 # Num of Epoch
batch_size = 2 # batch size
ctx = "cuda" if torch.cuda.is_available() else "cpu"
device = torch.device(ctx)
save_step = 100 # checkpoint save interval (in steps)
learning_rate = 2e-5 # Learning Rate
dataset= WellnessAutoRegressiveDataset(data_path)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
checkpoint = torch.load(load_ckpt_path, map_location=device)
model = DialogKoGPT2()
model.load_state_dict(checkpoint['model_state_dict'])
model.to(device)
loss_fct = torch.nn.CrossEntropyLoss(ignore_index=3)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
losses =[]
for epoch in range(n_epoch):
    count = 0
    with tqdm(total=len(train_loader), desc=f"Train({epoch})") as pbar:
        for i, data in enumerate(train_loader):
            optimizer.zero_grad()
            data = torch.stack(data)  # the batch arrives as a list of Tensors, so stack it into a single Tensor
            data = data.transpose(1, 0)
            data = data.to(ctx)

            outputs = model(data, labels=data)
            _, logits = outputs[:2]

            # Shift so that tokens < n predict n
            shift_logits = logits[..., :-1, :].contiguous()
            shift_labels = data[..., 1:].contiguous()

            loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
            loss.backward()
            optimizer.step()

            losses.append(loss.item())

            # if count % 10 == 0:
            #     print('epoch no.{} train no.{} loss = {}'.format(epoch, count + 1, loss))

            if (count > 0 and count % save_step == 0) or (len(data) < batch_size):
                torch.save({
                    'epoch': epoch,
                    'train_no': count,
                    'model_state_dict': model.state_dict(),
                    'optimizer_state_dict': optimizer.state_dict(),
                    'loss': loss
                }, save_ckpt_path)

            count += 1
            pbar.update(1)
            pbar.set_postfix_str(f"Loss: {loss.item():.3f} ({np.mean(losses):.3f})")
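The `shift_logits` / `shift_labels` step above implements standard next-token training: the prediction at position i is scored against the token at position i+1. A framework-free sketch of that alignment (the token ids are arbitrary illustrative values):

```python
def shift_for_lm(tokens):
    """Return (inputs, targets): position i of inputs is trained to predict targets[i]."""
    return tokens[:-1], tokens[1:]

inputs, targets = shift_for_lm([101, 7, 8, 9, 102])
# inputs and targets have equal length; each input token is paired with its successor
```

In the training cell this shift happens on the logits and label tensors directly, which avoids materializing a second copy of the batch.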
# + id="g8PpEntLsWkH" executionInfo={"status": "ok", "timestamp": 1603378515195, "user_tz": -540, "elapsed": 26, "user": {"displayName": "\uc774\uc6a9\uc6b0", "photoUrl": "", "userId": "08200059377748879944"}}
| ai/chatbot/train/train-kogpt2-for-wellness.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/henriquehsilva/i2a2-challenges/blob/master/cognitivo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ellIx4wbGlzm" colab_type="text"
# # Requirements
# 1. File format conversion: convert the CSV file located at `data/input/users/load.csv` to a high-read-performance columnar format of your choice. Briefly justify the choice of format;
#
# 2. Deduplication of the converted data: the converted dataset will contain multiple entries for the same record, varying only in the values of some fields. You must deduplicate these data, keeping only the latest entry for each record, using the id to identify duplicate records and the update date (update_date) to determine the most recent record;
#
# 3. Type conversion of the deduplicated data: the config directory contains a JSON configuration file (types_mapping.json) with the field names and the desired output types. Using this file as input, convert the types of the listed fields in the deduplicated dataset;
#
# # General notes
# - All operations must be performed with Spark. The execution service is up to you: local or cloud. Briefly justify the chosen service (EMR, Glue, Zeppelin, etc.).
#
# - Each operation must be applied to the dataframe resulting from the previous step; the result may be persisted and reloaded after each stage, or kept in memory and persisted only after the final operation.
#
# - You are free to follow whatever execution order you prefer;
#
# - We only request type conversion for some of the fields. The others are up to you;
#
# - The final file or set of files must be compressed and sent by e-mail.
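Requirement 2 (keep only the latest entry per id, using update_date) can be sketched in plain pandas before translating it to Spark, where the same idea becomes `dropDuplicates` after ordering, or a window over id. The sample data below is invented for illustration:

```python
import pandas as pd

def dedup_latest(df, id_col='id', ts_col='update_date'):
    # sort so the newest entry per id comes last, then keep that one
    return (df.sort_values(ts_col)
              .drop_duplicates(id_col, keep='last')
              .reset_index(drop=True))

users = pd.DataFrame({
    'id': [1, 1, 2],
    'name': ['ana', 'ana b.', 'bruno'],
    'update_date': ['2018-01-01', '2018-06-01', '2018-03-01'],
})
latest = dedup_latest(users)  # one row per id, each the most recent entry
```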
# + [markdown] id="2GYgjVhJLd8s" colab_type="text"
# # Setup
# + id="HAzMeTuvtc6l" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="20c9c102-3b62-494e-bf9c-296477b20ecc"
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
from keras.utils import get_file
# Auxiliary libraries
import pandas as pd
print(f"TensorFlow V{tf.__version__} 🦾")
# + id="BEVmGrAU-HEg" colab_type="code" colab={}
TEST_ENG_DATA_URL = 'https://storage.googleapis.com/cognitivo-hiring.henriquesilva.dev'
path_to_zip = get_file('teste-eng-dados.zip', origin = f"{TEST_ENG_DATA_URL}/teste-eng-dados.zip", archive_format = 'zip', extract = True)
# + id="Yed4NwOGAlhc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f6714201-49e4-472a-d59f-f84ba35f5243"
# !ls -a /root/.keras/datasets
# + id="FoVRtKfxr3Dz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="771add7d-cb1a-46bf-94d1-6ed79ed5bb9b"
# !ln -s /root/.keras/datasets
# + id="gBZoFvXIF7Mf" colab_type="code" colab={}
input_file = '/content/datasets/data/input/users/load.csv'
input_df = pd.read_csv(input_file)
# + id="iLoquELCGKTW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0aadd37a-f266-4971-9557-81944ed5774e"
input_df.shape[0]
# + id="9UHKvmitHl22" colab_type="code" colab={}
input_df.to_parquet('/content/datasets/data/input/users/load.parquet', engine='pyarrow')
# + id="QyJnW1HxH8zA" colab_type="code" colab={}
raw_load = pd.read_parquet('/content/datasets/data/input/users/load.parquet', engine='pyarrow')
# + id="hqKKmJcy6X86" colab_type="code" colab={}
# input_df.to_hdf('/content/datasets/data/input/users/load.h5', 'data')
# + id="MCoonTmL6-kw" colab_type="code" colab={}
# h5_load = pd.read_hdf('/content/datasets/data/input/users/load.h5')
# + id="WujNKDd6LKzF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="f183a1c4-1e84-4d86-a9d9-38bb54978c5d"
raw_load.head(raw_load.shape[0])
# + id="8Xe7ELpvFTG-" colab_type="code" colab={}
raw_load.sort_values('id', inplace=True)
# + id="Mhp_GMCtioBk" colab_type="code" colab={}
raw_load.sort_values('update_date', ascending = False, inplace = True)
# + id="JO0Mj7C3Bpp5" colab_type="code" colab={}
# the frame is sorted by update_date descending (previous cell), so the first
# occurrence of each id is its most recent entry; flag the rest as duplicates
duplicate_result = raw_load.duplicated('id', keep='first')
# + id="ozuE59MEESFE" colab_type="code" colab={}
raw_load.drop(raw_load[duplicate_result].index, inplace=True)
# + id="X0EmlcwQxclU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="7e6a8b59-dbdd-46f8-f348-01f079625aba"
duplicate_result.index
# + id="jR7_zSfLy4Rt" colab_type="code" colab={}
types_config = pd.read_json('/content/datasets/config/types_mapping.json', orient='index')
# + id="_3Rc8wRX1O1r" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 323} outputId="d890049d-dc51-40d4-f64e-1be29eaba365"
# apply the configured output type to each field; the JSON holds logical type
# names, so map them onto pandas dtypes first (the names here are assumptions)
dtype_map = {'integer': 'int64', 'timestamp': 'datetime64[ns]', 'string': 'object'}
for key, type_name in types_config[0].items():
    raw_load[key] = raw_load[key].astype(dtype_map.get(type_name, type_name))
raw_load.dtypes
# + id="iiU00ol71q5R" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="fd6cba8b-2553-44e8-f806-765a7e0a9603"
for value in types_config.T.values:
    print(value)
# + id="vemWWY0u2GIh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 163} outputId="141b729a-f816-4924-b549-1b498b5b5fb0"
raw_load.head()
# + id="8qdo9k-czkP5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 129} outputId="e019f142-96a6-4a66-bbe5-6f95e6234030"
raw_load.astype({'age': 'int64', 'create_date': 'datetime64[ns]', 'update_date': 'datetime64[ns]'}).dtypes
| cognitivo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Problem statement
#
# Given the root of a binary tree, find the diameter.
#
# *Note: Diameter of a Binary Tree is the maximum distance between any two nodes*
class BinaryTreeNode:
    def __init__(self, data):
        self.left = None
        self.right = None
        self.data = data
# +
def diameter_of_binary_tree(root):
    return diameter_of_binary_tree_func(root)[1]

def diameter_of_binary_tree_func(root):
    if root is None:
        return 0, 0
    left_height, left_diameter = diameter_of_binary_tree_func(root.left)
    right_height, right_diameter = diameter_of_binary_tree_func(root.right)
    current_height = max(left_height, right_height) + 1
    height_diameter = left_height + right_height
    current_diameter = max(left_diameter, right_diameter, height_diameter)
    return current_height, current_diameter
# -
# You can use the following function to test your code with custom test cases. The function `convert_arr_to_binary_tree` takes an array input representing level-order traversal of the binary tree.
#
#
# <img src='./resources/01-binary-tree.png'>
#
# The above tree would be represented as `arr = [1, 2, 3, 4, None, 5, None, None, None, None, None]`
#
# Notice that the level order traversal of the above tree would be `[1, 2, 3, 4, 5]`.
#
# Note the following points about this tree:
# * `None` represents the lack of a node. For example, `2` only has a left node; therefore, the next node after `4` (in level order) is represented as `None`
# * Similarly, `3` only has a left node; hence, the next node after `5` (in level order) is represented as `None`.
# * Also, `4` and `5` don't have any children. Therefore, the spots for their children in level order are represented by four `None` values (for each child of `4` and `5`).
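A useful sanity check on this convention is the inverse operation: serializing a tree back into the array form. The sketch below rebuilds the pictured example and reproduces the array given above (the small node class mirrors `BinaryTreeNode` so the block is self-contained):

```python
class Node:
    # mirrors the BinaryTreeNode class defined earlier in this notebook
    def __init__(self, data):
        self.left = None
        self.right = None
        self.data = data

def tree_to_arr(root):
    """Level-order serialization; every present node contributes two child slots."""
    if root is None:
        return []
    arr, queue = [root.data], [root]
    while queue:
        node = queue.pop(0)
        for child in (node.left, node.right):
            arr.append(child.data if child else None)
            if child:
                queue.append(child)
    return arr

# rebuild the pictured tree: 1 -> (2 -> 4), (3 -> 5)
root = Node(1)
root.left, root.right = Node(2), Node(3)
root.left.left = Node(4)
root.right.left = Node(5)
```

Running `tree_to_arr(root)` on this tree should return `[1, 2, 3, 4, None, 5, None, None, None, None, None]`, matching the encoding described above.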
# +
from queue import Queue
def convert_arr_to_binary_tree(arr):
    """
    Takes arr representing level-order traversal of Binary Tree
    """
    index = 0
    length = len(arr)
    if length <= 0 or arr[0] == -1:
        return None
    root = BinaryTreeNode(arr[index])
    index += 1
    queue = Queue()
    queue.put(root)
    while not queue.empty():
        current_node = queue.get()
        left_child = arr[index]
        index += 1
        if left_child is not None:
            left_node = BinaryTreeNode(left_child)
            current_node.left = left_node
            queue.put(left_node)
        right_child = arr[index]
        index += 1
        if right_child is not None:
            right_node = BinaryTreeNode(right_child)
            current_node.right = right_node
            queue.put(right_node)
    return root
# -
def test_function(test_case):
    arr = test_case[0]
    solution = test_case[1]
    root = convert_arr_to_binary_tree(arr)
    output = diameter_of_binary_tree(root)
    print(output)
    if output == solution:
        print("Pass")
    else:
        print("Fail")
# +
arr = [1, 2, 3, 4, 5, None, None, None, None, None, None]
solution = 3
test_case = [arr, solution]
test_function(test_case)
# +
arr = [1, 2, 3, 4, None, 5, None, None, None, None, None]
solution = 4
test_case = [arr, solution]
test_function(test_case)
# +
arr = [1, 2, 3, None, None, 4, 5, 6, None, 7, 8, 9, 10, None, None, None, None, None, None, 11, None, None, None]
solution = 6
test_case = [arr, solution]
test_function(test_case)
# -
| Data Structures/Trees/Diameter of a Binary Tree.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (python37)
# language: python
# name: python37
# ---
# +
import numpy as np
from matplotlib import pyplot as plt
import pandas as pd
import scipy as sp
from statsmodels.api import OLS
import statsmodels.tools.tools
from pandas import DataFrame
# # import sys to add path to datasets folder
# import sys
# sys.path.append('/Users/stuartjenkins/Documents/$$Datasets/GMST')
from GIR import *
# +
## Python code to import and process the different historical temperature observation datasets used in Chapter 1, SR1.5.
# Written by <NAME> (<EMAIL>) (18/12/2018)
# ------------------------------------------------------------------------------------------------
# ------------------------------------------------------------------------------------------------
# -------------------------------------------------
# Import and rebaseline the observations ready for plotting
# -------------------------------------------------
def temp_import():
    """
    Imports the HadCRUT4, HadCRUT4-CW, NOAA and GISTEMP datasets, re-baselines them to 1850-1900
    """
    # define the baseline year range, and common reference range
    base_low = 1850.
    base_high = 1900.
    com_ref_low = 1880.
    com_ref_high = 2017.
    # define variable representing the frequency of temperature observations data ('mon' = monthly)
    temp_freq = 'mon'

    # -------------------------------------------------
    ## Import the temperature observation datasets ##
    # Specify the GMST best-estimate temperature timeseries files to load from
    gmst_files = {'HadCRUT4': '../../../$$Datasets/GMST/HadCRUT.4.6.0.0.monthly_ns_avg.txt',
                  'GISTEMP': '../../../$$Datasets/GMST/GLB.Ts+dSST.csv',
                  'NOAA': '../../../$$Datasets/GMST/aravg.mon.land_ocean.90S.90N.v4.0.1.201803.asc',
                  'Cowtan-Way': '../../../$$Datasets/GMST/had4_krig_v2_0_0.txt'}
    gmst_names = gmst_files.keys()

    # make a common years vector, which we can use as the years variable on all imported temperature datasets
    years_com = np.arange(1850. + 1./24, 1850. + 1./24 + (2020)*1./12, 1.0/12)[:-1]

    # define dictionary gmst to hold the temperature data and its averages etc.
    gmst = {}

    # Go through the datasets imported from the files referenced in 'gmst_files' above and load them
    for key in gmst_names:
        if key in ['HadCRUT4', 'Cowtan-Way']:
            data = np.genfromtxt(gmst_files[key])
            temps = data[:, 1]
            years = years_com[:len(temps)]
        if key in ['GISTEMP']:
            f_giss = open(gmst_files[key], 'r')
            temps = []
            counter = 0
            for line in f_giss:
                if counter >= 2:
                    temps.extend([float(f) for f in line.split(',')[1:13] if f != '***'])
                counter = counter + 1
            temps = np.array(temps)
            years = years_com[years_com > 1880.][:len(temps)]
        if key in ['NOAA']:
            data = np.genfromtxt(gmst_files[key])
            temps = data[:, 2]
            years = years_com[years_com > 1880.][:len(temps)]
        gmst[key] = {'Temp': temps, 'Years': years}

    # Set the datasets to a common reference period
    hc_ref = np.mean(gmst['HadCRUT4']['Temp'][np.logical_and(gmst['HadCRUT4']['Years'] >= com_ref_low,
                                                             gmst['HadCRUT4']['Years'] < (com_ref_high+1))]) \
             - np.mean(gmst['HadCRUT4']['Temp'][np.logical_and(gmst['HadCRUT4']['Years'] >= base_low,
                                                               gmst['HadCRUT4']['Years'] < (base_high+1))])
    for key in gmst_names:
        gmst[key]['Temp'] = gmst[key]['Temp'][gmst[key]['Years'] < 2018.]
gmst[key]['Years'] = gmst[key]['Years'][gmst[key]['Years'] < 2018.]
#Express relative to a common base period
gmst[key]['Temp'] = gmst[key]['Temp'] - np.mean(gmst[key]['Temp'][np.logical_and(gmst[key]['Years']>=com_ref_low,
gmst[key]['Years']<(com_ref_high+1))])
#Set NOAA and GISTEMP datasets relative to HadCRUT4 value over the base period
if key in ['NOAA','GISTEMP']:
gmst[key]['Temp'] = gmst[key]['Temp'] + hc_ref
else:
gmst[key]['Temp'] = gmst[key]['Temp'] - np.mean(gmst[key]['Temp'][np.logical_and(gmst[key]['Years']>=base_low,gmst[key]['Years']<(base_high+1))])
return gmst
# -------------------------------------------------
# -----------------------------------------------
# Find the min, mean and max values from the temperature observations
# -----------------------------------------------
def calc_mean_min_max(gmst):
"""
Requires gmst to have dictionary strings: HadCRUT4, Cowtan-Way, GISTEMP, NOAA
"""
obs_max = np.zeros_like(gmst['HadCRUT4']['Years'])
obs_min = np.zeros_like(gmst['HadCRUT4']['Years'])
obs_mean = np.zeros_like(gmst['HadCRUT4']['Years'])
for y in range(0,len(gmst['HadCRUT4']['Years'])):
year_vals = []
#Loop over AR5 datasets and Cowtan-Way
for ob in ['HadCRUT4','NOAA','GISTEMP','Cowtan-Way']:
# collect the temperature value at a given year in each dataset and store in val
val = gmst[ob]['Temp'][gmst[ob]['Years']==gmst['HadCRUT4']['Years'][y]]
if len(val)>0:
year_vals.append(val)
# find the min, mean and max values from each year
obs_max[y] = np.max(year_vals)
obs_min[y] = np.min(year_vals)
obs_mean[y] = np.mean(year_vals)
# save as entries in gmst
gmst['Temp-max'] = obs_max
gmst['Temp-min'] = obs_min
gmst['Temp-mean'] = obs_mean
return gmst
# -------------------------------------------------
# -----------------------------------------------
# Using OLS regression to scale anthropogenic and natural contributions to observed GMST data
# Methodology follows Haustein et al. (Scientific Reports, 2017)
# -----------------------------------------------
def calc_gwi(obs,obs_years,reg_type='mon',base_low=1850.,base_high=1900, name=''):
#Express the observations relative to the base period
obs = obs - np.mean(obs[np.logical_and(obs_years>=base_low,obs_years<(base_high+1))])
#Load the best estimate forcings from Piers
forc_file = '../../../$$Datasets/RF/AWI_all_forcing_CH4updated.txt'
data = np.genfromtxt(forc_file,skip_header=1)
years = data[:,0]
tot_forc = data[:,2]
ant_forc = data[:,3]
# #Integrate anthropogenic and natural forcing with standard FAIR parameters
# C, t_nat = fair_scm(other_rf=tot_forc-ant_forc)
# C, t_anthro = fair_scm(other_rf=ant_forc)
# #Express relative to the centre of the base period
# t_nat = t_nat - np.mean(t_nat[np.logical_and(years>=base_low,years<base_high+1)])
# t_anthro = t_anthro - np.mean(t_anthro[np.logical_and(years>=base_low,years<base_high+1)])
# # -----------------------------------------------
# # Prepare the temperatures run through FaIR, so they lie on same year-grid as observations, so they can be compared
# # -----------------------------------------------
# #Interpolate the annual forced responses to the grid of the observed data
# if reg_type !='mon':
# t_nat = np.interp(obs_years+0.5, years+0.5, t_nat)
# t_anthro = np.interp(obs_years+0.5, years+0.5, t_anthro)
# else:
# t_nat = np.interp(obs_years, years+0.5, t_nat)
# t_anthro = np.interp(obs_years, years+0.5, t_anthro)
# #Linearly project the final half year
# t_anthro[obs_years>(years[-1]+0.5)] = 12*(t_anthro[obs_years<=(years[-1]+0.5)][-1] - t_anthro[obs_years<=(years[-1]+0.5)][-2]) * (obs_years[obs_years>(years[-1]+0.5)] - obs_years[obs_years<=(years[-1]+0.5)][-1]) \
# +t_anthro[obs_years<=(years[-1]+0.5)][-1]
# t_nat[obs_years>(years[-1]+0.5)] = 12*(t_nat[obs_years<=(years[-1]+0.5)][-1] - t_nat[obs_years<=(years[-1]+0.5)][-2]) * (obs_years[obs_years>(years[-1]+0.5)] - obs_years[obs_years<=(years[-1]+0.5)][-1]) \
# +t_nat[obs_years<=(years[-1]+0.5)][-1]
# # -----------------------------------------------
#     # #Use the statsmodels OLS regression function to regress the observations on natural and anthropogenic warming plus a constant
# y = np.copy(obs)
# x = DataFrame({'x1': (t_anthro), 'x2': (t_nat)})
# # add constant vector on to dataframe we will fit to temp observations
# x = statsmodels.tools.tools.add_constant(x)
# # complete OLS regression of anthropogenic and natural temperatures (found from FaIR integrated best estimate forcing) onto given observed temperature dataset.
# model = OLS(y, x)
# result = model.fit()
# # collect output scaling factors for anthro and natural temperature timeseries
# sf = result.params
#     # #Form scaled anthropogenic warming index
# awi = t_anthro * sf['x1']
# #Scaled natural warming index
# nwi = t_nat * sf['x2']
# #Scaled total externally forced warming index
# gwi = awi + nwi
# print(name, ' AWI scale factor: ', sf['x1'], '\n', name, ' NWI scale factor: ', sf['x2'])
# return awi, nwi
return
# -------------------------------------------------
# -
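The commented-out regression step above can be sketched with plain NumPy least squares; everything below (the forced-response series, the true scale factors, the noise level) is synthetic and for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# hypothetical forced-response series (stand-ins for FaIR output)
t_anthro = np.linspace(0.0, 1.0, n)
t_nat = 0.1 * np.sin(np.linspace(0.0, 20.0, n))
# synthetic "observations": known scalings plus noise
obs = 1.2 * t_anthro + 0.8 * t_nat + rng.normal(0.0, 0.02, n)

# design matrix with a constant, as statsmodels' add_constant would build it
X = np.column_stack([np.ones(n), t_anthro, t_nat])
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
const, sf_anthro, sf_nat = coef

awi = sf_anthro * t_anthro   # scaled anthropogenic warming index
nwi = sf_nat * t_nat         # scaled natural warming index
gwi = awi + nwi              # total externally forced warming index
print(sf_anthro, sf_nat)     # should recover roughly 1.2 and 0.8
```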
# +
# Method:
# Using attributable warming plus GCP emissions
# Fit r0 and rT / rC (fixed ratio) to present day concentrations
fit_time_period = list(set(Best_emission_estimates['default','CO2'].dropna().index).intersection(set(Attributable_warming.index)))
CO2_fit_warming = Attributable_warming.loc[fit_time_period].values.flatten()
CO2_fit_emissions = convert_forc_to_model_input(Best_emission_estimates['default','CO2'],'fit_CO2','CO2')
CO2_original_parameters = convert_forc_to_model_input(default_gas_forcing_params()['default','CO2'],'tune_CO2','CO2')
def fit_CO2_params(x,fit_time):
fit_params = CO2_original_parameters.copy()
rT_rC_scaling = 0.019/4.165
fit_params.loc[['r0','rT','rC'],('tune_CO2','CO2')] = [ x[0] , x[1] , x[1] * rT_rC_scaling ]
fit_model_run = prescribed_temps_gas_cycle(T=CO2_fit_warming,emissions_in=CO2_fit_emissions,gas_parameters=fit_params)['C']
return np.sum((CMIP6_concs_extended.loc[2017,'CO2'] - fit_model_run.loc[2017,('fit_CO2','tune_CO2','CO2')])**2)
fit_result = sp.optimize.minimize(fit_CO2_params,x0=[32,4.165],args=2017,method='Nelder-mead')
fig,ax = plt.subplots(figsize=(10,6))
ax.plot(CMIP6_concs_extended.loc[fit_time_period,'CO2'],'k',label='CMIP6 historical')
tuned_params = CO2_original_parameters.copy()
rT_rC_scaling = 0.019/4.165
tuned_params.loc[['r0','rT','rC'],('tune_CO2','CO2')] = [ fit_result.x[0] , fit_result.x[1] , fit_result.x[1] * rT_rC_scaling ]
tuned_model_run = prescribed_temps_gas_cycle(T=CO2_fit_warming,emissions_in=CO2_fit_emissions,gas_parameters=tuned_params)['C'].loc[fit_time_period]
ax.plot(tuned_model_run,'r',label='FaIR v2')
plt.xlim(1850,2017)
plt.title('Observed vs best-fit modelled CO$_2$ concentrations')
plt.legend()
print('r0:',fit_result.x[0])
print('rT:',fit_result.x[1])
print('rC:',fit_result.x[1] * rT_rC_scaling)
| GIR/legacy_notebooks/fits for 1% per yr CMIP6 runs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: mmp_guide
# language: python
# name: mmp_guide
# ---
# # Using two spatial weights matrices
#
# Some functions use spatial weights for two different purposes, so two matrices have to be passed. We will illustrate this case by measuring building adjacency and mean interbuilding distance.
import momepy
import geopandas as gpd
import matplotlib.pyplot as plt
# We will again use `osmnx` to get the data for our example and, after preprocessing the building layer, generate a tessellation.
# + tags=[]
import osmnx as ox
gdf = ox.geometries.geometries_from_place('Kahla, Germany', tags={'building':True})
gdf_projected = ox.projection.project_gdf(gdf)
buildings = momepy.preprocess(gdf_projected, size=30,
compactness=True, islands=True, verbose=False)
buildings['uID'] = momepy.unique_id(buildings)
limit = momepy.buffered_limit(buildings)
tessellation = momepy.Tessellation(buildings, unique_id='uID', limit=limit,
verbose=False).tessellation
# -
# ## Building adjacency
#
# Building adjacency uses `spatial_weights_higher` to denote the area within which the calculation occurs (required) and `spatial_weights` to denote the adjacency of buildings (optional; the function can generate it for us). We can use a distance band of 200 meters to define `spatial_weights_higher`.
# + tags=["hide_output"]
import libpysal
dist200 = libpysal.weights.DistanceBand.from_dataframe(buildings, 200,
ids='uID')
# + tags=[]
adjac = momepy.BuildingAdjacency(
buildings, spatial_weights_higher=dist200, unique_id='uID')
buildings['adjacency'] = adjac.series
# + tags=["hide_input"]
f, ax = plt.subplots(figsize=(10, 10))
buildings.plot(ax=ax, column='adjacency', legend=True, cmap='viridis', scheme='naturalbreaks', k=10)
ax.set_axis_off()
plt.show()
# -
# If we want to specify or reuse `spatial_weights`, we can generate them as Queen contiguity weights. Using `libpysal` or `momepy` (momepy will use the same libpysal method, but you don't need to import libpysal directly):
queen = libpysal.weights.Queen.from_dataframe(buildings,
silence_warnings=True,
ids='uID')
queen = momepy.sw_high(k=1, gdf=buildings, ids='uID', contiguity='queen')
# + tags=[]
buildings['adj2'] = momepy.BuildingAdjacency(buildings,
spatial_weights_higher=dist200,
unique_id='uID',
spatial_weights=queen).series
# + tags=["hide_input"]
f, ax = plt.subplots(figsize=(10, 10))
buildings.plot(ax=ax, column='adj2', legend=True, cmap='viridis')
ax.set_axis_off()
plt.show()
# -
# ## Mean interbuilding distance
#
# Mean interbuilding distance is similar to `neighbour_distance`, but it is calculated within a vicinity defined by `spatial_weights_higher`, while `spatial_weights` captures the immediate neighbours.
sw1 = momepy.sw_high(k=1, gdf=tessellation, ids='uID')
sw3 = momepy.sw_high(k=3, gdf=tessellation, ids='uID')
# + tags=[]
interblg_distance = momepy.MeanInterbuildingDistance(
buildings, sw1, 'uID', spatial_weights_higher=sw3)
buildings['mean_ib_dist'] = interblg_distance.series
# -
# `spatial_weights_higher` is optional and can be derived from `spatial_weights` as weights of higher order defined in `order`.
# + tags=[]
buildings['mean_ib_dist'] = momepy.MeanInterbuildingDistance(
buildings, sw1, 'uID', order=3).series
# + tags=["hide_input"]
f, ax = plt.subplots(figsize=(10, 10))
buildings.plot(ax=ax, column='mean_ib_dist', scheme='quantiles', k=10, legend=True, cmap='viridis')
ax.set_axis_off()
plt.show()
| docs/user_guide/weights/two.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DLProfile Example using mmaADSP Benchmark
# ## Set imports and necessary environment variables
import pathlib
import os
import sys
import matplotlib.pyplot as plt
import warnings
import pprint
import pandas
VANIDL_DIR="{}".format(pathlib.Path(os.getcwd()).parent.parent.parent.absolute())
sys.path.insert(0, VANIDL_DIR)
warnings.filterwarnings('ignore')
os.environ["DARSHAN_DIR"] = "/soft/perftools/darshan/darshan-3.1.8"
os.environ["VANIDL_DIR"] = VANIDL_DIR
# #### Formatting
pp = pprint.PrettyPrinter(indent=1)
class color:
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
# ## Create an instance of DL Profile and load the Darshan file
from src.vanidl import VaniDL
profile = VaniDL()
# !rm /tmp/temp_analysis/cosmoflow_run3_p8*
# +
DATAPATH_INCLUDES = []
status = profile.Load("/home/dhari/darshan-logs/benchmark/cosmoflow/optimization/tc_32.darshan", data_paths_include=DATAPATH_INCLUDES)
if status:
print("Darshan Trace loaded Successfully!")
else:
print("Darshan Trace load Failed!")
print(profile._error_str())
# -
# ## Use Profile object to analyze the darshan I/O trace.
# ### Verify that the object works
#
# The GetDXTAsDF() function enables users to perform analysis
df = profile.GetDXTAsDF()
pp.pprint("Files used in the application")
pp.pprint(df['Filename'].nunique())
pp.pprint(sorted(df['Filename'].unique().tolist()))
df_normal = profile.GetTraceAsDF()
pp.pprint("Files used in the application")
pp.pprint(df_normal['Filename'].nunique())
pp.pprint(sorted(df_normal['Filename'].unique().tolist()))
# ### Collect the summary of the Application
# +
summary = profile.GetSummary()
print("\n")
print(color.BOLD + "Data Access Summary (from Darshan):"+ color.END)
print("Total Job time\t\t\t:\t{:0.2f} seconds".format(summary['job_time']))
#FIXME: calculate time per rank and then take max across it.
print("Time spent in I/O\t\t:\t{:0.2f} seconds".format(summary['total_io_time']/8))
print("% Time spent in I/O\t\t:\t{:0.2f}%".format(float(summary['total_io_time'])/8*100/summary['job_time']))
print("Total Data Accessed\t\t:\t{:0.2f} GB".format(float(summary['total_io_bytes'])/1024.0/1024.0/1024.0))
print("Data Access Modules used\t:\t{}".format(summary['io_interface_used']))
print("Data Operations\t\t\t:\t{}".format(summary['io_operations_used']))
print("# of files used\t\t\t:\t{}".format(len(summary['files_used'])))
print("# of MPI Ranks\t\t\t:\t{:0.0f} ranks".format(summary['num_ranks']))
print(color.UNDERLINE + "Data Transfer size:"+ color.END)
print("\tMin,Max\t\t\t:\t{:0.0f} bytes and {:0.0f} bytes".format(summary['data_transfer_size']['min'],summary['data_transfer_size']['max']))
print("\tAverage\t\t\t:\t{:0.0f} bytes".format(summary['data_transfer_size']['mean']))
print("\tMedian\t\t\t:\t{:0.0f} bytes".format(summary['data_transfer_size']['median']))
print(color.UNDERLINE + "Data Transfer bandwidth: (per rank)"+ color.END)
print("\tMin,Max\t\t\t:\t{:0.0f} B/s and {:0.0f} MB/s".format(summary['data_transfer_bandwidth']['min'],summary['data_transfer_bandwidth']['max']/1024.0/1024.0))
print("\tAverage\t\t\t:\t{:0.0f} MB/s".format(summary['data_transfer_bandwidth']['mean']/1024.0/1024.0))
print("\tMedian\t\t\t:\t{:0.0f} MB/s".format(summary['data_transfer_bandwidth']['median']/1024.0/1024.0))
print(color.UNDERLINE + "Access Pattern:"+ color.END)
print("\tSequential\t\t:\t{:0.2f}%".format(float(summary['access_pattern']['sequential'])))
print("\tConsecutive\t\t:\t{:0.2f}%".format(float(summary['access_pattern']['consecutive'])))
#Sequential: an I/O op issued at an offset greater than where the previous I/O op ended.
#Consecutive: an I/O op issued at the offset immediately after the end of the previous I/O op.
print("\n")
print(color.BOLD + "Files Summary:"+ color.END)
print("File Types\t\t\t:\t{}".format(summary['file_used_summary']['types']))
print(color.UNDERLINE + "Dataset Size:"+ color.END)
print("\tTotal\t\t\t:\t{:0.1f} GB".format(32))
print("\tMin,Max\t\t\t:\t{:0.1f} GB and {:0.1f} GB".format(2,2))
print("\tAverage\t\t\t:\t{:0.1f} GB".format(2))
# -
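One convention for the access-pattern percentages reported in the summary can be sketched directly from a list of (offset, length) operations; the numbers here are toy values, not taken from the trace, and sequential is treated as "at or past the previous end" so that consecutive is a subset of sequential:

```python
def access_pattern(ops):
    """ops: list of (offset, length) I/O operations in issue order.
    Returns (% sequential, % consecutive) over all ops after the first.
    Sequential: offset at or past the end of the previous op.
    Consecutive: offset exactly at the end of the previous op."""
    if len(ops) < 2:
        return 0.0, 0.0
    seq = con = 0
    for (prev_off, prev_len), (off, _) in zip(ops, ops[1:]):
        prev_end = prev_off + prev_len
        if off >= prev_end:
            seq += 1
        if off == prev_end:
            con += 1
    total = len(ops) - 1
    return 100.0 * seq / total, 100.0 * con / total

# toy trace: one consecutive op, one forward seek, one backward seek
ops = [(0, 4), (4, 4), (16, 4), (8, 4)]
print(access_pattern(ops))
```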
pp.pprint("Job time : {} seconds".format(profile.GetJobTime()))
pp.pprint("Time spent by application on I/O: {} seconds".format(profile.GetIOTime()/8))
# ### I/O time spent on each file
for file in df['Filename'].unique()[:16]:
print("I/O time for file {}: {:0.2f} seconds".format(file,profile.GetIOTime(filepath=file)))
# ### I/O Time spent per rank
for rank in df['Rank'].unique()[:16]:
print("I/O time for rank {}: {:0.2f} seconds".format(rank,profile.GetIOTime(rank=rank)))
"Total I/O performed by application: {:0.2f} GB".format(float(profile.GetIOSize())/1024.0/1024.0/1024.0)
# ### I/O performed on each file
for file in df['Filename'].unique()[:16]:
print("I/O performed on file {}: {:0.2f} MB".format(file,float(profile.GetIOSize(filepath=file))/1024.0/1024.0))
for rank in df['Rank'].unique()[:16]:
print("I/O performed by rank {}: {:0.2f} MB".format(rank, float(profile.GetIOSize(rank=rank))/1024.0/1024.0))
print("Size of dataset (bytes)")
pp.pprint(profile.GetFileSizes())
file="/lus/theta-fs0/projects/MLPerfHPC/cosmoflow/dataset/cosmoUniverse_2019_02_4parE/dim128_cube_nT4/cosmoUniverse_2019_02_4parE-dim128_cube_nT4-rec1003.tfrecords"
# !ls -l $file
# ### How the application accesses data over time.
tl = profile.CreateIOTimeline(time_step=0.001)
plt.figure(figsize=(8,4))
plt.xlabel("Timeline (ms)")
plt.ylabel("# of I/O Operations")
plt.grid()
plt.plot(tl['time_step'], tl['operation_count']);
plt.figure(figsize=(8,4))
plt.xlabel("Timeline (ms)")
plt.ylabel("I/O Performed (bytes)")
plt.grid()
plt.plot(tl['time_step'], tl['io_bytes']);
# ### How files are accessed over the duration of the Job.
for file in df['Filename'].unique()[:16]:
tl = profile.CreateIOTimeline(filepath=file,time_step=0.001)
tl.plot(x='time_step',y='operation_count', title=file)
plt.show()
# ### Show how data is accessed by each rank.
for rank in df['Rank'].unique()[:16]:
tl = profile.CreateIOTimeline(rank=rank, time_step = 0.001)
tl.plot(x='time_step',y='operation_count', title=rank)
plt.show()
# ### Data Transfer Size distribution within the application
request_df = profile.GetIORequestDistribution()
df['Length'].plot(kind='hist', figsize=(5, 3));
plt.xlabel("Transfer Size (bytes)")
# ### Data Transfer Size distribution for each file.
for file in df['Filename'].unique()[:16]:
tl = profile.GetIORequestDistribution(filepath=file)
tl.plot(kind='bar', figsize=(10, 4), title=file)
# ### Data Transfer Sizes per Rank
for rank in df['Rank'].unique()[:16]:
tl = profile.GetIORequestDistribution(rank=rank)
tl.plot(kind='bar', figsize=(10, 4), title=rank)
plt.show()
# ### File summary of each file accessed by the Application
pp = pprint.PrettyPrinter(indent=1)
for file in df['Filename'].unique()[:16]:
if os.path.exists(file):
pp.pprint(profile.GetFileSummary(file))
| examples/benchmark/8/Cosmoflow-Optimizer-Copy3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Environment Setup
# +
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
import sklearn
import imblearn
# Ignore warnings
import warnings
warnings.filterwarnings('ignore')
# Settings
pd.set_option('display.max_columns', None)
#np.set_printoptions(threshold=np.nan)
np.set_printoptions(precision=3)
sns.set(style="darkgrid")
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
print("pandas : {0}".format(pd.__version__))
print("numpy : {0}".format(np.__version__))
print("matplotlib : {0}".format(matplotlib.__version__))
print("seaborn : {0}".format(sns.__version__))
print("sklearn : {0}".format(sklearn.__version__))
print("imblearn : {0}".format(imblearn.__version__))
# +
# Dataset field names
datacols = ["duration","protocol_type","service","flag","src_bytes",
"dst_bytes","land","wrong_fragment","urgent","hot","num_failed_logins",
"logged_in","num_compromised","root_shell","su_attempted","num_root",
"num_file_creations","num_shells","num_access_files","num_outbound_cmds",
"is_host_login","is_guest_login","count","srv_count","serror_rate",
"srv_serror_rate","rerror_rate","srv_rerror_rate","same_srv_rate",
"diff_srv_rate","srv_diff_host_rate","dst_host_count","dst_host_srv_count",
"dst_host_same_srv_rate","dst_host_diff_srv_rate","dst_host_same_src_port_rate",
"dst_host_srv_diff_host_rate","dst_host_serror_rate","dst_host_srv_serror_rate",
"dst_host_rerror_rate","dst_host_srv_rerror_rate","attack", "last_flag"]
# Load NSL_KDD train dataset
df_train = pd.read_csv("NSL_KDD_dataset/KDDTrain.txt", sep=",", names=datacols)
df_train = df_train.iloc[:,:-1] # removes an unwanted extra field
# Load NSL_KDD test dataset
df_test = pd.read_csv("NSL_KDD_dataset/KDDTest.txt", sep=",", names=datacols)
df_test = df_test.iloc[:,:-1]
# -
# ### Train Dataset
# View train data
df_train.head(10)
# train set dimension
print('Train set dimension: {} rows, {} columns'.format(df_train.shape[0], df_train.shape[1]))
# ### Test Dataset
# View train data
df_test.head(10)
# test set dimension
print('Test set dimension: {} rows, {} columns'.format(df_test.shape[0], df_test.shape[1]))
df_train.attack.unique()
df_train.attack.nunique()
# ### Data Preprocessing
mapping = {'ipsweep': 'Probe','satan': 'Probe','nmap': 'Probe','portsweep': 'Probe','saint': 'Probe','mscan': 'Probe',
'teardrop': 'DoS','pod': 'DoS','land': 'DoS','back': 'DoS','neptune': 'DoS','smurf': 'DoS','mailbomb': 'DoS',
'udpstorm': 'DoS','apache2': 'DoS','processtable': 'DoS',
'perl': 'U2R','loadmodule': 'U2R','rootkit': 'U2R','buffer_overflow': 'U2R','xterm': 'U2R','ps': 'U2R',
'sqlattack': 'U2R','httptunnel': 'U2R',
           'ftp_write': 'R2L','phf': 'R2L','guess_passwd': 'R2L','warezmaster': 'R2L','warezclient': 'R2L','imap': 'R2L',
'spy': 'R2L','multihop': 'R2L','named': 'R2L','snmpguess': 'R2L','worm': 'R2L','snmpgetattack': 'R2L',
'xsnoop': 'R2L','xlock': 'R2L','sendmail': 'R2L',
'normal': 'Normal'
}
# Apply attack class mappings to the dataset
df_train['attack_class'] = df_train['attack'].apply(lambda v: mapping[v])
df_test['attack_class'] = df_test['attack'].apply(lambda v: mapping[v])
df_train
# Drop attack field from both train and test data
df_train.drop(['attack'], axis=1, inplace=True)
df_test.drop(['attack'], axis=1, inplace=True)
# ### Exploratory Data Analysis
df_train
df_train['num_outbound_cmds'].value_counts()
df_test['num_outbound_cmds'].value_counts()
# +
# The 'num_outbound_cmds' field has all 0 values. Hence, it will be removed from both train and test datasets since it is a redundant field.
df_train.drop(['num_outbound_cmds'], axis=1, inplace=True)
df_test.drop(['num_outbound_cmds'], axis=1, inplace=True)
# -
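Dropping `num_outbound_cmds` by name works here, but constant (zero-variance) columns can also be detected programmatically; a small sketch on a toy frame:

```python
import pandas as pd

toy = pd.DataFrame({
    'duration': [0, 3, 7, 1],
    'num_outbound_cmds': [0, 0, 0, 0],   # constant everywhere
    'src_bytes': [181, 239, 235, 219],
})

# columns with a single unique value carry no information for a classifier
constant_cols = [c for c in toy.columns if toy[c].nunique() == 1]
toy = toy.drop(columns=constant_cols)
print(constant_cols)  # → ['num_outbound_cmds']
```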
df_train.head(5)
# +
# Attack Class Distribution
attack_class_freq_train= df_train[['attack_class']].apply(lambda x: x.value_counts())
attack_class_freq_test= df_test[['attack_class']].apply(lambda x: x.value_counts())
attack_class_freq_train['frequency_percent_train'] = round((100 * attack_class_freq_train/ attack_class_freq_train.sum()),2)
attack_class_freq_test['frequency_percent_test'] = round((100 * attack_class_freq_test / attack_class_freq_test.sum()),2)
attack_class_dist = pd.concat([attack_class_freq_train,attack_class_freq_test], axis=1)
attack_class_dist
# -
# Attack class bar plot
plot = attack_class_dist[['frequency_percent_train', 'frequency_percent_test']].plot(kind="bar",figsize=(10,6));
plot.set_title("Attack Class Distribution", fontsize=20);
plot.grid(color='lightgray', alpha=0.5);
# ### Scaling Numeric Attributes
# +
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# extract numerical attributes and scale them to zero mean and unit variance
cols = df_train.select_dtypes(include=['float64','int64']).columns
sc_train = scaler.fit_transform(df_train.select_dtypes(include=['float64','int64']))
# reuse the scaler fitted on the train set for the test set to avoid data leakage
sc_test = scaler.transform(df_test.select_dtypes(include=['float64','int64']))
# turn the result back to a dataframe
sc_traindf = pd.DataFrame(sc_train, columns = cols)
sc_testdf = pd.DataFrame(sc_test, columns = cols)
# -
sc_traindf.head(10)
sc_traindf.shape
# ### Encoding of Categorical Attributes
# +
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
# extract categorical attributes from both training and test sets
cattrain = df_train.select_dtypes(include=['object']).copy()
cattest = df_test.select_dtypes(include=['object']).copy()
# -
cattrain
# encode the categorical attributes
traincat = cattrain.apply(encoder.fit_transform)
testcat = cattest.apply(encoder.fit_transform)
traincat
# separate target column from encoded data
enctrain = traincat.drop(['attack_class'], axis=1)
enctest = testcat.drop(['attack_class'], axis=1)
enctrain
cat_Ytrain = traincat[['attack_class']].copy()
cat_Ytest = testcat[['attack_class']].copy()
cat_Ytrain
# ### Data Sampling
# +
from imblearn.over_sampling import RandomOverSampler
from collections import Counter
# define columns and extract encoded train set for sampling
sc_traindf = df_train.select_dtypes(include=['float64','int64'])
refclasscol = pd.concat([sc_traindf, enctrain], axis=1).columns
refclass = np.concatenate((sc_train, enctrain.values),axis=1)
X = refclass
c= cat_Ytrain.values.shape[0]
y = cat_Ytrain.values.reshape(c,)
c= cat_Ytest.values.shape[0]
y_test = cat_Ytest.values.reshape(c,)
# -
# apply the random over-sampling
ros = RandomOverSampler(random_state=42)
X_res, y_res = ros.fit_sample(X, y)
print('Original dataset shape {}'.format(Counter(y)))
print('Resampled dataset shape {}'.format(Counter(y_res)))
# RandomOverSampler balances the class distribution by duplicating minority-class samples
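`RandomOverSampler` balances the class distribution by sampling minority-class rows with replacement until every class matches the majority count; a dependency-free sketch of the same idea (toy data, hypothetical helper):

```python
import random
from collections import Counter

def random_oversample(X, y, seed=42):
    """Duplicate minority-class samples until all classes match the majority size."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_res, y_res = list(X), list(y)
    for cls, n in counts.items():
        idx = [i for i, label in enumerate(y) if label == cls]
        for _ in range(target - n):
            i = rng.choice(idx)          # sample with replacement
            X_res.append(X[i])
            y_res.append(cls)
    return X_res, y_res

X = [[0], [1], [2], [3], [4]]
y = ['DoS', 'DoS', 'DoS', 'U2R', 'U2R']
X_res, y_res = random_oversample(X, y)
print(Counter(y_res))  # → Counter({'DoS': 3, 'U2R': 3})
```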
# ### Feature Selection
# +
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier();
# fit random forest classifier on the training set
rfc.fit(X_res, y_res);
# extract important features
score = np.round(rfc.feature_importances_,3)
importances = pd.DataFrame({'feature':refclasscol,'importance':score})
importances = importances.sort_values('importance',ascending=False).set_index('feature')
# plot importances
plt.rcParams['figure.figsize'] = (11, 4)
importances.plot.bar();
# -
importances
# +
from sklearn.feature_selection import RFE  # recursive feature elimination
import itertools
rfc = RandomForestClassifier()
# create the RFE model and select 10 attributes
rfe = RFE(rfc, n_features_to_select=10)
rfe = rfe.fit(X_res, y_res)
# summarize the selection of the attributes
feature_map = [(i, v) for i, v in itertools.zip_longest(rfe.get_support(), refclasscol)]
selected_features = [v for i, v in feature_map if i==True]
# -
feature_map
selected_features
# ### Dataset Partition
# define columns to new dataframe
newcol = list(refclasscol)
newcol.append('attack_class')
newcol
# add a dimension to target
new_y_res = y_res[:, np.newaxis]
new_y_res
# create a dataframe from sampled data
res_arr = np.concatenate((X_res, new_y_res), axis=1)
res_df = pd.DataFrame(res_arr, columns = newcol)
res_df
# create test dataframe
reftest = pd.concat([sc_testdf, testcat], axis=1)
reftest['attack_class'] = reftest['attack_class'].astype(np.float64)
reftest['protocol_type'] = reftest['protocol_type'].astype(np.float64)
reftest['flag'] = reftest['flag'].astype(np.float64)
reftest['service'] = reftest['service'].astype(np.float64)
reftest
from collections import defaultdict
classdict = defaultdict(list)
# +
# create two-target classes (normal class and an attack class)
attacklist = [('DoS', 0.0), ('Probe', 2.0), ('R2L', 3.0), ('U2R', 4.0)]
normalclass = [('Normal', 1.0)]
def create_classdict():
'''This function subdivides train and test dataset into two-class attack labels'''
for j, k in normalclass:
for i, v in attacklist:
restrain_set = res_df.loc[(res_df['attack_class'] == k) | (res_df['attack_class'] == v)]
classdict[j +'_' + i].append(restrain_set)
# test labels
reftest_set = reftest.loc[(reftest['attack_class'] == k) | (reftest['attack_class'] == v)]
classdict[j +'_' + i].append(reftest_set)
create_classdict()
# -
for k, v in classdict.items():
print(k)
pretrain = classdict['Normal_DoS'][0]
pretest = classdict['Normal_DoS'][1]
grpclass = 'Normal_DoS'
pretrain
pretest
# ### Finalize data preprocessing for training
# +
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder()
Xresdf = pretrain
newtest = pretest
# -
Xresdfnew = Xresdf[selected_features]
Xresdfnum = Xresdfnew.drop(['service'], axis=1)#without service column
Xresdfcat = Xresdfnew[['service']].copy()#only service column
Xtest_features = newtest[selected_features]
Xtestdfnum = Xtest_features.drop(['service'], axis=1)
Xtestcat = Xtest_features[['service']].copy()
# +
# Fit train data
enc.fit(Xresdfcat)
# Transform train data
X_train_1hotenc = enc.transform(Xresdfcat).toarray()
# Fit test data
enc.fit(Xtestcat)
# Transform test data
X_test_1hotenc = enc.transform(Xtestcat).toarray()
# -
X_test_1hotenc
X_train = np.concatenate((Xresdfnum.values, X_train_1hotenc), axis=1)
X_test = np.concatenate((Xtestdfnum.values, X_test_1hotenc), axis=1)
# pad X_test with zero columns so its width matches X_train: the test set
# contains fewer 'service' categories, so its one-hot block is narrower
extra = np.zeros(dtype=int, shape=(17169, 2))
X_test = np.concatenate((X_test, extra), axis=1)
y_train = Xresdf[['attack_class']].copy()
c= y_train.values.shape[0]
Y_train = y_train.values.reshape(c,)
y_test = newtest[['attack_class']].copy()
c, r = y_test.values.shape
Y_test = y_test.values.reshape(c,)
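Note that `enc` above is refit separately on the test `service` column, so when the test set contains fewer categories the encoded matrices come out with different widths, which is what the zero-column padding above compensates for. A leak-free sketch of fixed-category one-hot encoding (toy service names, not the NSL-KDD ones), mirroring `OneHotEncoder(handle_unknown='ignore')` fitted on the train set only:

```python
def one_hot(values, categories):
    """Encode values against a fixed category list learned from the train set."""
    index = {c: i for i, c in enumerate(categories)}
    rows = []
    for v in values:
        row = [0] * len(categories)
        if v in index:              # unseen categories encode as all zeros
            row[index[v]] = 1
        rows.append(row)
    return rows

train_service = ['http', 'ftp', 'smtp', 'http']
test_service = ['http', 'telnet']         # 'telnet' never appears in train

categories = sorted(set(train_service))   # fit once, on the train set only
X_tr = one_hot(train_service, categories)
X_te = one_hot(test_service, categories)
print(len(X_tr[0]), len(X_te[0]))  # → 3 3  (widths match by construction)
```

Because both sets are encoded against the same fitted category list, no zero-column padding is needed afterwards.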
# ## Train Models
# +
from sklearn.svm import SVC
from sklearn import tree
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
# Train KNeighborsClassifier Model
KNN_Classifier = KNeighborsClassifier(n_jobs=-1)
KNN_Classifier.fit(X_train, Y_train);
# Train Decision Tree Model
DTC_Classifier = tree.DecisionTreeClassifier(criterion='entropy', random_state=0)
DTC_Classifier.fit(X_train, Y_train);
# Train SVM Model
SVC_Classifier = SVC(random_state=0)
SVC_Classifier.fit(X_train, Y_train)
# -
# ## Evaluate Models
# +
from sklearn import metrics
models = []
models.append(('SVM Classifier', SVC_Classifier))
models.append(('Decision Tree Classifier', DTC_Classifier))
models.append(('KNeighborsClassifier', KNN_Classifier))
for i, v in models:
scores = cross_val_score(v, X_train, Y_train, cv=10)
predicted=v.predict(X_train)
accuracy = metrics.accuracy_score(Y_train, predicted)
confusion_matrix = metrics.confusion_matrix(Y_train, predicted)
classification = metrics.classification_report(Y_train, predicted)
print()
print('============================== {} {} Model Evaluation =============================='.format(grpclass, i))
print()
print ("Cross Validation Mean Score:" "\n", scores.mean())
print()
print ("Model Accuracy:" "\n", accuracy)
print()
print("Confusion matrix:" "\n", confusion_matrix)
print()
print("Classification report:" "\n", classification)
print()
# -
# ## Test Models
for i, v in models:
predicted=v.predict(X_test)
accuracy = metrics.accuracy_score(Y_test, predicted)
confusion_matrix = metrics.confusion_matrix(Y_test, predicted)
classification = metrics.classification_report(Y_test, predicted)
print()
print('============================== {} {} Model Test Results =============================='.format(grpclass, i))
print()
print ("Model Accuracy:" "\n", accuracy)
print()
print("Confusion matrix:" "\n", confusion_matrix)
print()
print("Classification report:" "\n", classification)
print()
| .ipynb_checkpoints/Understand_Project-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Course Goals
#
# Keras:
#
# 1. Keras architecture
# 2. How to install it
# 3. Backend configuration
#
import keras
from keras import backend as K
from keras.layers import Layer
# # Example Highlights
#
# 1. How to configure the Keras backend
#
# 2. How to configure CPU/GPU
#
# 3. Handling common errors
#
#
print(keras.__version__)
# Acceleration test (True means numpy.dot is the optimized routine). Windows users getting True is fine too, because Anaconda already bundles the MKL acceleration library
import numpy
id(numpy.dot) == id(numpy.core.multiarray.dot)
# Check the Keras default float type
K.floatx()
# Set the floating-point precision
K.set_floatx('float16')
K.floatx()
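Note that 'float16' halves memory use but also reduces precision. A small numpy illustration of what gets lost near 1.0:

```python
import numpy as np

# float32 can still resolve a 1e-4 increment near 1.0 ...
x = np.float32(1.0) + np.float32(1e-4)

# ... but float16 cannot: its spacing near 1.0 is about 0.001,
# so the small increment is rounded away entirely.
y = np.float16(1.0) + np.float16(1e-4)

print(x)  # slightly above 1.0
print(y)  # exactly 1.0
```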
# # Handling Common Errors
#
# **Common error:** FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
# from ._conv import register_converters as _register_converters
# **Solution:** pip install h5py==2.8.0rc1 (installs h5py, which is used for saving and loading models)
#
# **Switching backends:** "Using TensorFlow backend."
# The Keras backend supports both TensorFlow and Theano,
# and TensorFlow is the default.
#
# **Common error:** TypeError: softmax() got an unexpected keyword argument 'axis'
# **Solution:** the Keras and TensorFlow versions do not match; upgrade to recent versions: pip install keras==2.2
#
#
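One common way to switch the backend is to set the `KERAS_BACKEND` environment variable before importing keras (alternatively, edit the "backend" key in `~/.keras/keras.json`):

```python
import os

# Keras reads the backend choice at import time, so this must be set
# BEFORE `import keras` is executed anywhere in the process.
os.environ["KERAS_BACKEND"] = "theano"  # or "tensorflow" (the default)

print(os.environ["KERAS_BACKEND"])
```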
# +
from keras.utils import multi_gpu_model
from keras.models import Model
from keras.layers import Input, Dense
a = Input(shape=(32,))
b = Dense(32)(a)
model = Model(inputs=a, outputs=b)
config = model.get_config()
print(config)
| 2nd-ML100Days/homework/D-066/Day66-Keras_Introduction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Iramuk-ganh/rl/blob/main/practice_vi.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="sCZlI6EGXwVM"
# ### Algorithm
# - initialize state_values as zeros
#
# - find the q-value function q(s,a), i.e. the reward for taking action a in the first step and then following the greedy policy w.r.t. the value function
#
# - find new_state_values as the max of q(s,a) over all possible actions
#
# - max_diff = max(new_state_values - state_values)
#
# - state_values = new_state_values
#
# - repeat until max_diff < 0.01
# + [markdown] id="MghVR9S5Kobl"
# ### Code for display
# + colab={"base_uri": "https://localhost:8080/"} id="xclTJQJgKJgy" outputId="209ccb1a-2479-4bda-d324-17b0f18639b3"
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
# !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
# !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/grading.py -O ../grading.py
# !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/week2_model_based/submit.py
# !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/week2_model_based/mdp.py
# !touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
# !bash ../xvfb start
os.environ['DISPLAY'] = ':1'
# + [markdown] id="15lmvM2ZKy_S"
# ### Three levels of nested dictionaries are used in transition_probs
# + id="bfcVLvGtK0nI"
transition_probs = {
's0':{
'a0':{'s0':0.5, 's2':0.5},
'a1':{'s2':1}
},
's1': {
'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
'a1': {'s1': 0.95, 's2': 0.05}
},
's2': {
'a0': {'s0': 0.4, 's2': 0.6},
'a1': {'s0': 0.3, 's1': 0.3, 's2': 0.4}
}
}
rewards = {
's1':{'a0':{'s0': +5}},
's2':{'a1':{'s0':-1}}
}
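As a compact illustration of the algorithm, value iteration over these dictionaries can be written in a few lines of plain Python (the dicts are repeated so the sketch is self-contained; the converged values match the asserts further down in the notebook):

```python
transition_probs = {
    's0': {'a0': {'s0': 0.5, 's2': 0.5}, 'a1': {'s2': 1}},
    's1': {'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2}, 'a1': {'s1': 0.95, 's2': 0.05}},
    's2': {'a0': {'s0': 0.4, 's2': 0.6}, 'a1': {'s0': 0.3, 's1': 0.3, 's2': 0.4}},
}
rewards = {'s1': {'a0': {'s0': +5}}, 's2': {'a1': {'s0': -1}}}

gamma = 0.9
V = {s: 0.0 for s in transition_probs}

for _ in range(1000):
    newV = {}
    for s, actions in transition_probs.items():
        # Q(s,a) = sum over s' of P(s'|s,a) * [r(s,a,s') + gamma * V(s')]
        newV[s] = max(
            sum(p * (rewards.get(s, {}).get(a, {}).get(s2, 0) + gamma * V[s2])
                for s2, p in dist.items())
            for a, dist in actions.items())
    done = max(abs(newV[s] - V[s]) for s in V) < 1e-6
    V = newV
    if done:
        break

print(V)  # roughly {'s0': 3.78, 's1': 7.29, 's2': 4.20}
```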
# + id="l6wYQPxXKK9V"
from mdp import MDP
# + id="fCXnHs0lKNgD"
mdp = MDP(transition_probs, rewards, initial_state='s0')
# + colab={"base_uri": "https://localhost:8080/"} id="9zgdeO9NLe0G" outputId="e9196272-80d7-4fa5-eb0a-fd45c60ce211"
print('initial state =', mdp.reset())
next_state, reward, done, info = mdp.step('a1')
print('next_state = %s, reward = %s, done = %s' % (next_state, reward, done))
# + colab={"base_uri": "https://localhost:8080/"} id="qSDoS6VJLh8J" outputId="ee7421b9-0af8-4636-b575-62fb8100650f"
print("mdp.get_all_states =", mdp.get_all_states())
print("mdp.get_possible_actions('s1') = ", mdp.get_possible_actions('s1'))
print("mdp.get_next_states('s1', 'a0') = ", mdp.get_next_states('s1', 'a0'))
print("mdp.get_reward('s1', 'a0', 's0') = ", mdp.get_reward('s1', 'a0', 's0'))
print("mdp.get_transition_prob('s1', 'a0', 's0') = ", mdp.get_transition_prob('s1', 'a0', 's0'))
# + [markdown] id="ISEBA4OzLwAE"
# ### Value Iteration
#
# Now let's build something to solve this MDP. The simplest algorithm so far is __V__alue __I__teration
#
# Here's the pseudo-code for VI:
#
# ---
#
# `1.` Initialize $V^{(0)}(s)=0$, for all $s$
#
# `2.` For $i=0, 1, 2, \dots$
#
# `3.` $ \quad V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$, for all $s$
#
# ---
# + [markdown] id="acqhCXxIL2un"
# First, let's write a function to compute the state-action value function $Q^{\pi}$, defined as follows
#
# $$Q_i(s, a) = \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$$
#
# + id="Fiul7MWIOSK6"
import numpy as np
# + id="Cu6jAka5LnBc"
def get_action_value(mdp, state_values, state, action, gamma):
""" Computes Q(s,a) as in formula above """
possible_states = mdp.get_next_states(state, action)
l = [mdp.get_transition_prob(state, action, s)*(mdp.get_reward(state, action, s) + gamma * state_values[s]) for s in possible_states]
q_value = np.sum(l)
return q_value
# + id="QWb5elJjOjMc"
test_Vs = {s: i for i, s in enumerate(sorted(mdp.get_all_states()))}
assert np.isclose(get_action_value(mdp, test_Vs, 's2', 'a1', 0.9), 0.69)
assert np.isclose(get_action_value(mdp, test_Vs, 's1', 'a0', 0.9), 3.95)
# + [markdown] id="dmDk986nPqCZ"
# ### np.sum can be applied to a plain Python list too
# + colab={"base_uri": "https://localhost:8080/"} id="qyQF3-kKNqLG" outputId="bed712f0-166c-46e0-8be5-9cac9c2cfc43"
l = [1, 2, 3]
np.sum(l)
# + [markdown] id="A-IR8ZWPQneX"
# ### Making a dictionary using enumerate and an iterable such as an array or list
# + colab={"base_uri": "https://localhost:8080/"} id="YxxgmxC1P1hM" outputId="dfbe1c1e-91c4-48df-e612-551a7c1c5ac6"
a = np.arange(4, 16, 4)
d = {s: i for i, s in enumerate(a)}  # avoid shadowing the builtin `dict`
a, d
# + id="GP4aP1NlO_a_"
def get_new_state_value(mdp, state_values, state, gamma):
""" Computes next V(s) as in formula above. Please do not change state_values in process. """
if mdp.is_terminal(state):
return 0
possible_actions = mdp.get_possible_actions(state)
value = np.max([get_action_value(mdp, state_values, state, a, gamma) for a in possible_actions])
return value
# + id="z2p8-cFhSRTj"
test_Vs_copy = dict(test_Vs)  # make a real copy so the assert below can detect mutation
assert np.isclose(get_new_state_value(mdp, test_Vs, 's0', 0.9), 1.8)
assert np.isclose(get_new_state_value(mdp, test_Vs, 's2', 0.9), 1.08)
assert np.isclose(get_new_state_value(mdp, {'s0': -1e10, 's1': 0, 's2': -2e10}, 's0', 0.9), -13500000000.0), \
"Please ensure that you handle negative Q-values of arbitrary magnitude correctly"
assert test_Vs == test_Vs_copy, "Please do not change state_values in get_new_state_value"
# + [markdown] id="dcd9AmMeSdk1"
# ### Finally, let's combine everything we wrote into a working value iteration algorithm.
# + colab={"base_uri": "https://localhost:8080/"} id="84o22EaWSV-m" outputId="700f0115-60b4-4caa-87ab-338af6a47705"
gamma = 0.9  # discount factor; the closer to 1, the more far-sighted the agent
num_iter = 100
# stopping condition
min_difference = 0.001
# initialize V(s)
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above.
# It must be a dict {state : float V_new(state)}
new_state_values = {s:get_new_state_value(mdp, state_values, s, gamma) for s in mdp.get_all_states()}
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter%4i | diff:%6.5f | " %(i, diff), end="")
print(' '.join("V(%s) = %.3f" % (s, v) for s, v in state_values.items()))
state_values = new_state_values
if diff < min_difference:
print("Terminated")
break
# + colab={"base_uri": "https://localhost:8080/"} id="wa3udgkFVE3Z" outputId="0a6e2956-cded-4199-c23b-74f9bd908e73"
print("Final state values:", state_values)
# + id="q6bwmPd3Ve7k"
assert abs(state_values['s0'] - 3.781) < 0.01
assert abs(state_values['s1'] - 7.294) < 0.01
assert abs(state_values['s2'] - 4.202) < 0.01
# + id="A6GG8nTHViyd"
def get_optimal_action(mdp, state_values, state, gamma=0.9):
""" Finds optimal action using formula above. """
if mdp.is_terminal(state):
return None
best_action_index = np.argmax([get_action_value(mdp, state_values, state, a, gamma) for a in mdp.get_possible_actions(state)])
return mdp.get_possible_actions(state)[best_action_index]
# + id="LZaBiB7xWYgZ"
assert get_optimal_action(mdp, state_values, 's0', gamma) == 'a1'
assert get_optimal_action(mdp, state_values, 's1', gamma) == 'a0'
assert get_optimal_action(mdp, state_values, 's2', gamma) == 'a1'
assert get_optimal_action(mdp, {'s0': -1e10, 's1': 0, 's2': -2e10}, 's0', 0.9) == 'a0', \
"Please ensure that you handle negative Q-values of arbitrary magnitude correctly"
assert get_optimal_action(mdp, {'s0': -2e10, 's1': 0, 's2': -1e10}, 's0', 0.9) == 'a1', \
"Please ensure that you handle negative Q-values of arbitrary magnitude correctly"
# + [markdown] id="esO-RnrtXJC0"
# ### Measure agent's average reward
# Use append(element) to append to a list
# + colab={"base_uri": "https://localhost:8080/"} id="7wOH_6ArWa8p" outputId="949328cb-6c5a-455e-f74d-242d723a80e6"
s = mdp.reset()
rewards = []
for _ in range(10000):
s,r,done,_ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
print("average reward: ", np.mean(rewards))
assert(0.40 < np.mean(rewards) < 0.55)
# + [markdown] id="WqOGm9EtZn3u"
# ## Frozen lake
# + id="dkqnDlvbZbph"
from mdp import FrozenLakeEnv
# + colab={"base_uri": "https://localhost:8080/"} id="icGRKHK9Z0xw" outputId="cba8c7bd-5bfc-4d3f-b255-b93c09055ae6"
mdp = FrozenLakeEnv(slip_chance=0)
mdp.render()
# + id="XiPtAFkuZ7S5"
def value_iteration(mdp, state_values=None, gamma=0.9, num_iter=1000, min_difference=1e-5):
""" performs num_iter value iteration steps starting from state_values. Same as before but in a function """
state_values = state_values or {s: 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above. It must be a dict {state : new_V(state)}
new_state_values = {s:get_new_state_value(mdp, state_values, s, gamma) for s in mdp.get_all_states()}
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s])
for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | V(start): %.3f " %
(i, diff, new_state_values[mdp._initial_state]))
state_values = new_state_values
if diff < min_difference:
break
return state_values
# + colab={"base_uri": "https://localhost:8080/"} id="ia9uvsJJacyr" outputId="b985f5aa-1726-45af-a245-90a1d19c5e76"
state_values = value_iteration(mdp)
# + colab={"base_uri": "https://localhost:8080/"} id="8SJ-08vragsg" outputId="9ab8bdc3-5ab4-41e6-8b1c-a3a6157f99e3"
s = mdp.reset()
mdp.render()
for t in range(100):
a = get_optimal_action(mdp, state_values, s, gamma)
print(a, end='\n\n')
s, r, done, _ = mdp.step(a)
mdp.render()
if done:
break
| practice_vi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pytorch Autograd
# +
import torch
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
from torch import nn
# -
# ## 1. PyTorch Basics for understanding Autograd
#
# **Tensors**: A tensor is just an n-dimensional array in PyTorch. Tensors support some additional enhancements which make them unique:
# * Apart from the CPU, they can be loaded onto the GPU for faster computations.
# * On setting `.requires_grad = True` they start forming a backward graph that tracks every operation applied to them to calculate the gradients using something called a dynamic computation graph (DCG).
#
# > In earlier versions of PyTorch, the `torch.autograd.Variable` class was used to create tensors that support gradient calculations and operation tracking, but as of `PyTorch v0.4.0` the Variable class has been deprecated. `torch.Tensor` and `torch.autograd.Variable` are now the same class. More precisely, `torch.Tensor` is capable of tracking history and behaves like the old Variable.
# +
x = torch.randn(2, 2, requires_grad = True)
# From numpy
x = np.array([1., 2., 3.]) #Only Tensors of floating point dtype can require gradients
x = torch.from_numpy(x)
# Now enable gradient
x.requires_grad_(True)
# _ above makes the change in-place (its a common pytorch thing)
# -
# > Note: By PyTorch’s design, gradients can only be calculated for **floating** point tensors.
# **Autograd**: This class is an engine to calculate derivatives (Jacobian-vector product to be more precise).
# * It records a graph of all the operations performed on a `gradient enabled` tensor and creates an acyclic graph called the dynamic computational graph.
# * The leaves of this graph are input tensors and the roots are output tensors. Gradients are calculated by tracing the graph from the root to the leaf and multiplying every gradient in the way using the chain rule.
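The root-to-leaf chain-rule traversal can be illustrated without PyTorch. Below is a minimal hand-rolled "tape" for c = a * b; this is an illustrative sketch, not how autograd is actually implemented:

```python
# Forward pass: compute the result and record the operation and its inputs.
a, b = 2.0, 3.0
c = a * b
tape = [("mul", (a, b))]

# Backward pass: start from the root with dc/dc = 1 and apply the chain rule
# toward the leaves.
grad_c = 1.0
op, (x, y) = tape[-1]
grad_a = grad_c * y   # dc/da = b
grad_b = grad_c * x   # dc/db = a

print(grad_a, grad_b)  # 3.0 2.0
```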
# ## 2. Neural networks and Backpropagation
#
# Neural networks are nothing more than composite mathematical functions that are delicately tweaked (trained) to output the required result. The tweaking or the training is done through a remarkable algorithm called backpropagation. Backpropagation is used to calculate the gradients of the loss with respect to the input weights to later update the weights and eventually reduce the loss.
#
# > In a way, back propagation is just fancy name for the chain rule — <NAME>
#
# Creating and training a neural network involves the following essential steps:
# 1. Define the architecture
# 2. Forward propagate on the architecture using input data
# 3. Calculate the loss
# 4. **Backpropagate to calculate the gradient for each weight**
# 5. Update the weights using a learning rate
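The five steps condense into a short numpy sketch for a one-weight linear model (synthetic data; the MSE gradient is derived by hand rather than by autograd):

```python
import numpy as np

# Step 1: architecture, a single weight w with model y_hat = w * x.
w = 0.0
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x            # synthetic targets: the true weight is 2
lr = 0.1               # learning rate

for _ in range(100):
    y_hat = w * x                          # Step 2: forward propagate
    loss = np.mean((y_hat - y) ** 2)       # Step 3: calculate the loss (MSE)
    grad_w = np.mean(2 * (y_hat - y) * x)  # Step 4: backpropagate (chain rule by hand)
    w -= lr * grad_w                       # Step 5: update with the learning rate

print(round(w, 3))  # converges to 2.0
```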
# The change in the loss for a small change in an input weight is called the gradient of that weight and is calculated using backpropagation. The gradient is then used to update the weight using a learning rate to overall reduce the loss and train the neural net.
#
# This is done in an iterative way. For each iteration, several gradients are calculated and something called a computation graph is built for storing these gradient functions. PyTorch does it by building a Dynamic Computational Graph (DCG). This graph is built from scratch in every iteration, providing maximum flexibility for gradient calculation. For example, for a forward operation (function) `Mul`, a backward operation (function) called `MulBackward` is dynamically integrated into the backward graph for computing the gradient.
# ## 3. Dynamic Computational graph
#
# Gradient enabled tensors (variables) along with functions (operations) combine to create the dynamic computational graph.
# * The flow of data and the operations applied to the data are defined at runtime hence constructing the computational graph dynamically.
# * This graph is made dynamically by the autograd class under the hood. [You don’t have to encode all possible paths before you launch the training — what you run is what you differentiate](https://pytorch.org/docs/stable/notes/autograd.html).
# A simple DCG for multiplication of two tensors would look like this:
#
# <img src="./assets/simple_autograd.png" width="430" height="430" />
# Every variable object has several attributes some important of which are:
#
# * **Data**: It is the data a variable is holding. `a` holds a 1x1 tensor with the value equal to 2.0 while `b` holds 3.0. `c` holds the product of two i.e. 6.0
#
# * **requires_grad**: This attribute, if true starts tracking all the operation history and forms a backward graph for gradient calculation. For an arbitrary tensor a It can be manipulated in-place as follows: `a.requires_grad_(True)`.
#
# > If there’s a single input to an operation that requires gradient, its output will also require gradient.
# >
# > Conversely, only if all inputs don’t require gradient, the output also won’t require it.
# >
# > Backward computation is never performed in the subgraphs, where all Tensors didn’t require gradients.
#
# * **grad**: grad holds the value of gradient. If `requires_grad` is False, it will hold a `None` value. Even if `requires_grad` is True, it will hold a `None` value unless `.backward()` function is called from some other node. For example, if you call `out.backward()` for some variable out that involved `x` in its calculations then `x.grad` will hold `∂out/∂x`.
#
# * **grad_fn**: This is the backward function used to calculate the gradient.
#
# * **is_leaf**: A node is leaf if :
# * It was initialized explicitly by some function like `x = torch.tensor(1.0)` or `x = torch.randn(1, 1)` (basically all the tensor initializing methods).
# * It is created after operations on tensors which all have `requires_grad = False`.
# * It is created by calling `.detach()` method on some tensor.
#
# On calling `backward()`, gradients are populated only for the nodes which have both `requires_grad` and `is_leaf` True. Gradients are of the output node from which `.backward()` is called, w.r.t other leaf nodes.
# On turning `requires_grad = True` PyTorch will start tracking the operation and store the gradient functions at each step as follows:
#
# <img src="./assets/autograd_backprop.png" width="550" height="550" />
# In the picture:
#
# * The `Mul` operation has access to a context variable called `ctx`, in which it can store any values it needs for the backward pass.
# * `ctx` would be passed to the `MulBackward` operation in the backward pass.
# * The `MulBackward` function has the attribute `next_functions`, a list of tuples, each associated with
# one of the inputs that were passed to the `Mul` function.
# * AccumulateGrad is associated with tensor `a` and it accumulates the gradient for the tensor `a`.
# * None is associated with tensor `b`. It is none because tensor `b` has `requires_grad` set to `False` so we don't need to pass a gradient to it.
#
# The following code would generate the above graph under the PyTorch’s hood:
# +
# Creating the graph
a = torch.tensor(2.0, requires_grad = True)
b = torch.tensor(3.0)
c = a * b
# Displaying
for i, name in zip([a, b, c], "abc"):
print(f"{name}\ndata: {i.data}\nrequires_grad: {i.requires_grad}\n\
grad: {i.grad}\ngrad_fn: {i.grad_fn}\nis_leaf: {i.is_leaf}\nrequires_grad: {i.requires_grad}")
# -
# Using **torch.no_grad() context** to turn off gradient calculation
#
# To stop PyTorch from tracking the history and forming the backward graph, the code can be wrapped inside the context `with torch.no_grad():`. It will make the code run faster whenever gradient tracking is not needed.
# +
# Creating the graph
x = torch.tensor(1.0, requires_grad = True)
# Check if tracking is enabled
print(x.requires_grad) #True
y = x * 2
print(y.requires_grad) #True
with torch.no_grad():
# Check if tracking is enabled
y = x * 2
print(y.requires_grad) #False
# -
# ## 4. Backward() function
#
# Backward is the function which actually calculates the gradient by passing its argument (a 1x1 unit tensor by default) through the backward graph all the way up to every leaf node traceable from the calling root tensor. The calculated gradients are then stored in `.grad` of every **leaf node**.
#
# > Remember, the backward graph is already made dynamically during the forward pass. Backward function only calculates the gradient using the already made graph and stores them in leaf nodes.
# Creating the graph
x = torch.tensor(1.0, requires_grad = True)
z = x ** 3
z.backward() #Computes the gradient
print(z.grad_fn)
print(x.grad.data) #Prints '3' which is dz/dx
# An important thing to notice is that when `z.backward()` is called, a tensor is automatically passed as `z.backward(torch.tensor(1.0))`. The `torch.tensor(1.0)` is the external gradient provided to terminate the chain rule gradient multiplications. This external gradient is passed as the input to the `PowBackward` function to further calculate the gradient of `x`. The dimension of tensor passed into `.backward()` must be the same as the dimension of the tensor whose gradient is being calculated.
#
# For example, if the gradient enabled tensor `x` and `y` are as follows:
x = torch.tensor([0.0, 2.0, 8.0], requires_grad = True)
y = torch.tensor([5.0 , 1.0 , 7.0], requires_grad = True)
z = x * y
# Then, to calculate gradients of `z` (a `1x3` tensor) with respect to `x` or `y`, an external gradient needs to be passed to the `z.backward()` function as follows: `z.backward(torch.FloatTensor([1.0, 1.0, 1.0]))`
z.backward(torch.FloatTensor([1.0, 1.0, 1.0]))
print(z.grad_fn)
print(x.grad.data)
# > `z.backward()` would give a RuntimeError: grad can be implicitly created only for scalar outputs
# +
# This would give the "grad can be implicitly created only for scalar outputs" error.
# z.backward()
# -
# The tensor passed into the backward function acts like weights for a weighted output of gradient. Mathematically, this is the vector multiplied by the Jacobian matrix of non-scalar tensors. Hence it should almost always be a unit tensor with the same dimension as the tensor backward is called upon, unless weighted outputs need to be calculated.
#
# > Backward graph is created automatically and dynamically by **autograd** class during **forward pass**. `Backward()` simply calculates the gradients by passing its argument to the already made backward graph.
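For the elementwise product used above, the Jacobian view can be made concrete with numpy: the tensor passed to `backward()` plays the role of the vector v in the product vᵀJ. A sketch of the arithmetic, not of PyTorch internals:

```python
import numpy as np

x = np.array([0.0, 2.0, 8.0])
y = np.array([5.0, 1.0, 7.0])

# For elementwise z = x * y, the Jacobian dz/dx is diag(y).
J = np.diag(y)

# The tensor passed to backward() acts as v in v^T J.
v = np.array([1.0, 1.0, 1.0])   # unit weights, as in z.backward(...) above
grad_x = v @ J                   # equals v * y elementwise

print(grad_x)  # [5. 1. 7.]
```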
# ## 5. Backward calculation
# <img src="./assets/calculate_backprop.png" width="580" height="580" />
# +
# Creating the graph
a = torch.tensor(2.0, requires_grad = True)
b = torch.tensor(3.0)
c = a * b
print("Forward pass:")
for i, name in zip([a, b, c], "abc"):
print(f"{name}\ndata: {i.data}\nrequires_grad: {i.requires_grad}\n\
grad: {i.grad}\ngrad_fn: {i.grad_fn}\nis_leaf: {i.is_leaf}\nrequires_grad: {i.requires_grad}")
c.backward()
print("\n")
print("After backward on c:")
# Displaying
for i, name in zip([a, b, c], "abc"):
print(f"{name}\ndata: {i.data}\nrequires_grad: {i.requires_grad}\n\
grad: {i.grad}\ngrad_fn: {i.grad_fn}\nis_leaf: {i.is_leaf}\nrequires_grad: {i.requires_grad}")
# -
# ## 6. Freeze specified layers
#
# Setting **.requires_grad = False** on a model's parameters is especially useful when you want to freeze part of your model, or you know in advance that you're not going to use gradients w.r.t. some parameters.
#
# For example if you want to finetune a pretrained CNN, it’s enough to switch the `requires_grad` flags in the frozen base, and no intermediate buffers will be saved, until the computation gets to the last layer, where the affine transform will use weights that require gradient, and the output of the network will also require them.
# First, we create a toy feed-forward net that we use to show how to freeze specified layers.
# toy feed-forward net
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(10, 5)
self.fc2 = nn.Linear(5, 5)
self.fc3 = nn.Linear(5, 1)
def forward(self, x):
x = self.fc1(x)
x = self.fc2(x)
x = self.fc3(x)
return x
# +
# define random data
random_input = torch.randn(10,)
random_target = torch.randn(1,)
# define net
net = Net()
# print fc2 weight
print('fc2 weight before train:')
print(net.fc2.weight)
# train the net
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.1)
for i in range(100):
net.zero_grad()
output = net(random_input)
loss = criterion(output, random_target)
loss.backward()
optimizer.step()
# print the trained fc2 weight
print('fc2 weight after train:')
print(net.fc2.weight)
# save the net
torch.save(net.state_dict(), 'model')
# +
# delete and redefine the net
del net
net = Net()
# load the weight
net.load_state_dict(torch.load('model'))
# print the pre-trained fc2 weight
print('fc2 pretrained weight and bias (same as the one above):')
print(net.fc2.weight)
print(net.fc2.bias)
# -
# Then, let's freeze the fc2 layer and only train the fc1 and fc3 layers
# +
# we want to freeze the fc2 layer this time: only train fc1 and fc3
net.fc2.weight.requires_grad = False
net.fc2.bias.requires_grad = False
# NOTE: older PyTorch optimizers explicitly reject parameters that don't require grad,
# raising "ValueError: optimizing a parameter that doesn't require gradients"
# (see https://github.com/pytorch/pytorch/issues/679); the workaround is to filter them out:
# optimizer = optim.Adam(filter(lambda p: p.requires_grad, net.parameters()), lr=0.1)
optimizer = optim.Adam(net.parameters(), lr=0.1)
# +
# train again
criterion = nn.MSELoss()
# define new random data
random_input = torch.randn(10,)
random_target = torch.randn(1,)
for i in range(100):
net.zero_grad()
output = net(random_input)
loss = criterion(output, random_target)
loss.backward()
optimizer.step()
# print the retrained fc2 weight and bias
print('fc2 weight and bias (frozen) after retrain:')
print(net.fc2.weight)
print(net.fc2.bias)
# -
# We can see that the weight and bias of fc2 are the same as the ones before retraining: only fc1 & fc3 changed.
#
# Let's unfreeze the fc2 layer this time for extra tuning.
# +
net.fc2.weight.requires_grad = True
net.fc2.bias.requires_grad = True
# # add the unfrozen fc2 weight to the current optimizer
# optimizer.add_param_group({'params': net.fc2.parameters()})
# re-retrain
for i in range(100):
net.zero_grad()
output = net(random_input)
loss = criterion(output, random_target)
loss.backward()
optimizer.step()
# print the re-retrained fc2 weight
print('fc2 weight and bias (unfrozen) after re-retrain:')
print(net.fc2.weight)
print(net.fc2.bias)
# -
# We can see that the weight and bias of fc2 are also changed.
# ## References:
#
# * [[Youtube] PyTorch Autograd Explained - In-depth Tutorial](https://www.youtube.com/watch?v=MswxJw-8PvE)
# * [Autograd Mechanics](https://pytorch.org/docs/stable/notes/autograd.html)
# * [PyTorch Autograd - Understanding the heart of PyTorch’s magic](https://towardsdatascience.com/pytorch-autograd-understanding-the-heart-of-pytorchs-magic-2686cd94ec95)
| Part 11 - Pytorch Autograd.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### based on https://github.com/higgsfield/RL-Adventure
# %matplotlib inline
import collections
import gym
import matplotlib.pyplot as plot
import numpy as np
import random
import time
import keras as ks
from IPython.display import clear_output
env = gym.make('CartPole-v1')
#env = env.unwrapped
class Replay(object):
def __init__(self, maxlen=1000):
self.buffer = collections.deque(maxlen=maxlen)
def add(self, state, action, next_state, reward, done):
state = np.expand_dims(state, 0)
next_state = np.expand_dims(next_state, 0)
self.buffer.append((state, action, next_state, reward, done))
def sample(self, n):
state, action, next_state, reward, done = zip(*random.sample(self.buffer, n))
state = np.concatenate(state, axis=0)
next_state = np.concatenate(next_state, axis=0)
reward = np.array(reward)
done = np.array(done)
return state, action, next_state, reward, done
def __len__(self):
return len(self.buffer)
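A quick usage sketch of the replay buffer (the class is repeated here so the snippet runs standalone; the 4-dimensional states are made-up stand-ins for CartPole observations):

```python
import collections
import random
import numpy as np

class Replay:
    def __init__(self, maxlen=1000):
        self.buffer = collections.deque(maxlen=maxlen)
    def add(self, state, action, next_state, reward, done):
        self.buffer.append((np.expand_dims(state, 0), action,
                            np.expand_dims(next_state, 0), reward, done))
    def sample(self, n):
        state, action, next_state, reward, done = zip(*random.sample(self.buffer, n))
        return (np.concatenate(state), action, np.concatenate(next_state),
                np.array(reward), np.array(done))
    def __len__(self):
        return len(self.buffer)

replay = Replay(maxlen=100)
for t in range(50):                      # fill with fake 4-dim transitions
    replay.add(np.random.randn(4), t % 2, np.random.randn(4), 1.0, False)

states, actions, next_states, rs, dones = replay.sample(8)
print(states.shape, next_states.shape, rs.shape)  # (8, 4) (8, 4) (8,)
```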
# +
def build_model():
model = ks.Sequential([
ks.layers.Dense(32, input_shape=env.observation_space.shape, activation='relu'),
# ks.layers.Dense(64, activation='relu'),
# ks.layers.Dense(32, activation='relu'),
ks.layers.Dense(env.action_space.n)
])
opt = ks.optimizers.Nadam()
loss = ks.losses.mean_squared_error
model.compile(opt, loss)
return model
DOUBLE_MODEL = True
def blend_models(dst, src, k):
if src is dst:
return
sw = src.get_weights()
dw = dst.get_weights()
for n in range(len(sw)):
dw[n] = sw[n] * k + dw[n] * (1 - k)
dst.set_weights(dw)
model = build_model()
target_model = build_model() if DOUBLE_MODEL else model
#model.summary()
replay = Replay(1000)
batch_size = 32
epsilon = 1.0
gamma = 0.95
all_rewards = [0]
losses = []
state = env.reset()
for frame in range(100000):
epsilon = max(0.01, epsilon * 0.995)
if random.random() < epsilon:
action = env.action_space.sample()
else:
q = model.predict(np.expand_dims(state, 0))
action = np.argmax(q, 1)[0]
next_state, reward, done, info = env.step(action)
replay.add(state, action, next_state, reward, done)
all_rewards[-1] += reward
state = next_state
if done:
all_rewards.append(0)
state = env.reset()
if len(replay) > batch_size:
q_state, q_action, q_next_state, q_reward, q_done = replay.sample(batch_size)
target = target_model.predict(q_state)
q_expect = q_reward + gamma * (1 - q_done) * np.amax(target_model.predict(q_next_state), 1)
loss = 0
for n, (action, expect) in enumerate(zip(q_action, q_expect)):
p = target[n, action]
target[n, action] = expect
loss += (p - expect) ** 2
losses.append(loss / batch_size)
model.fit(q_state, target, epochs=1, verbose=0)
if (frame + 1) % 4 == 0:
blend_models(target_model, model, 0.5)
quality = 0 if len(all_rewards) < 100 else np.mean(all_rewards[-100:])
solved = quality > 475
if (frame + 1) % 1000 == 0 or solved:
clear_output(False)
plot.figure(figsize=(22,7))
plot.subplot(121)
plot.title('rewards (frame=%dk, quality=%g)' % (np.round(frame/1000), quality))
plot.plot(all_rewards[:-1])
plot.subplot(122)
plot.title('losses')
plot.plot(losses)
plot.show();
if solved:
print('solved.')
break
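`blend_models` above performs a soft (Polyak-style) update of the target network. The same arithmetic on plain numpy weight lists, as a standalone sketch:

```python
import numpy as np

def blend(dst_weights, src_weights, k):
    # dst <- k * src + (1 - k) * dst, layer by layer (soft / Polyak update)
    return [s * k + d * (1 - k) for s, d in zip(src_weights, dst_weights)]

target = [np.zeros(3), np.zeros(2)]        # stand-in target-network weights
online = [np.ones(3), 2 * np.ones(2)]      # stand-in online-network weights

target = blend(target, online, k=0.5)
print(target[0], target[1])  # [0.5 0.5 0.5] [1. 1.]
```

With k = 1 this degenerates to a hard copy of the online weights, which is what the code above effectively approaches by calling it every 4 frames with k = 0.5.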
| jupyter/cartpole/cartpole-v1-dqn-keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Now that we solved a PDE where shocks don't show up, let's look at the case when they do.
#
# When the characteristic curves intersect somewhere in the x, y plane, this would result in multiple values of z at the same (x,y) point. That's not possible, so instead, we allow for a shock, or discontinuity.
#
# We will examine shocks in the context of homogeneous quasi-linear and reducible PDEs.
#
# We will stick to our chromatography example, this time with new initial conditions. We are in adsorption mode (initially filled with inert with material being pumped in). The concentration in the feed increases linearly to $\bar{C}$ as time goes up to $\Delta t$, then remains constant.
#
# $$I_1: C = C^0(\xi_1) = 0, x = \xi_1, t=0, 0 \le \xi_1 \le L $$
# $$I_2: C = \begin{cases}
# C^i(\xi_2) = \frac{\bar{C}t}{\Delta t} & x=0, t=\xi_2, 0<\xi_2<\Delta t \\
# C^i(\xi_3) = \bar{C} & x=0, t= \xi_3, \xi_3 > \Delta t
# \end{cases} $$
# We will introduce the following dimensionless quantities:
#
# $$z = \frac{x}{L}, \tau = \frac{t}{\Delta t}, u = \frac{C}{\bar{C}}, \alpha = \frac{L \epsilon}{v \Delta t}, \beta = \frac{(1 - \epsilon)\Gamma^{\infty}K}{\epsilon}, \sigma = K \bar{C} $$
#
# This reduces the chromatography equation to:
#
# $$\frac{\partial u}{\partial z} + \Psi(u)\frac{\partial u}{\partial \tau} = 0, 0<z<1, \tau>0 $$
#
# where
#
# $$\Psi(u) = \frac{d \tau}{d z} = \frac{L}{\Delta t V(C)} = \alpha \bigg[ 1+ \frac{\beta}{(1+\sigma u)^2} \bigg] $$
#
# with the ICs
#
# $$I_1: u=0, z=\xi_1, \tau=0, 0 \le \xi_1 \le 1 $$
# $$I_2: u = \begin{cases}
# \xi_2 & z=0, \tau=\xi_2, 0<\xi_2<1 \\
# 1 & z=0, \tau= \xi_3, \xi_3 > 1
# \end{cases} $$
# The characteristic straight lines are defined by the slope:
#
# $$\frac{d \tau}{d z} = \Psi(u) $$
#
# Integrating with the first IC gives:
#
# $$\tau = \Psi(0)(z - \xi_1), 0 \le \xi_1 \le 1 $$
#
# Two different expressions are obtained when integrating with the second condition:
#
# $$\tau = \xi_2 + \Psi(\xi_2)z, 0 < \xi_2 < 1 $$
#
# and
#
# $$\tau = \xi_3 + \Psi(1)z, \xi_3 > 1 $$
# Let's eliminate $\xi_1$ and $\xi_3$ from the outermost equations. We get:
#
# $$u = 0, 0 < \tau < \Psi(0)z $$
# $$u = 1, \tau > 1 + \Psi(1)z $$
#
# In the intermediate region, the solution follows the middle curve (the one with $\xi_2$). This can be expressed:
#
# $$\tau = u + \Psi(u)z, 0 < u < 1 $$
#
# We can make this explicit with our expression for $\Psi$:
#
# $$z = \frac{(\tau - u)(1 + \sigma u)^2}{\alpha [\beta + (1 + \sigma u)^2]}, 0 < u < 1$$
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
# +
x = np.linspace(0, 1, 300)
t = np.linspace(0, 1, 600)
G = 0.5
K = 2
v = 2
Cbar = 1
eps = 0.5
dt = 0.25
L = 1
z = x/L
tau = t/dt
alpha = L*eps/(v*dt)
beta = (1 - eps)*G*K/eps
sig = K*Cbar
# -
beta
def psi(u):
V = alpha*(1+(beta/((1+sig*u)**2)))
return V
# +
fig, ax = plt.subplots()
utest = np.linspace(0, 1, 100)
psitest = psi(utest)
ax.plot(utest, psitest)
ax.set_xlabel('u')
ax.set_ylabel(r'$\Psi$')
# +
zv, tauv = np.meshgrid(z, tau)
u = 0*zv
u[tauv > 1 + psi(1)*zv] = 1
# +
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(zv, tauv, u, cmap=cm.coolwarm)
ax.set_xlabel('z')
ax.set_ylabel(r'$\tau$')
ax.view_init(elev=75, azim=-90)
# -
ttest = tauv[(tauv < 1 + psi(1)*zv) & (tauv > psi(0)*zv)]
ztest = zv[(tauv < 1 + psi(1)*zv) & (tauv > psi(0)*zv)]
Cinit = ztest*0 + 0.5
Cinit.shape
import scipy.optimize
def psolver(u):
return psi(u)*ztest + u - ttest
umiddle = scipy.optimize.newton_krylov(psolver, Cinit, f_tol=1e-14, maxiter=1000, verbose=True)
u[(tauv < 1 + psi(1)*zv) & (tauv > psi(0)*zv)] = umiddle
# +
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(zv, tauv, u, cmap=cm.coolwarm)
ax.set_xlabel('z')
ax.set_ylabel(r'$\tau$')
ax.view_init(elev=75, azim=-90)
# -
from matplotlib import animation, rc
from IPython.display import HTML
# +
fig, ax = plt.subplots(figsize = (6, 6))
ax.set_xlim((0, 1))
ax.set_ylim((-0.5, 1.5))
line, = ax.plot([], [], lw=2)
def init():
line.set_data([], [])
return (line,)
def animate(i):
ui = u[6*i, :]  # concentration profile along the column at this frame (600 tau rows, so 6*i stays in bounds for 100 frames)
line.set_data(z, ui)
return (line,)
# -
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=100, interval=15, blit=True)
HTML(anim.to_html5_video())
# This solution looks nice, but it is wrong. Why?
#
# **Hint**: look at the functional form of z(u).
# The profiles become steeper while moving along the column.
#
# Looking at the function for the wave velocity:
#
# $$c = \frac{1}{\Psi(u)} = \frac{(1 + \sigma u)^2}{\alpha[\beta + (1 + \sigma u)^2]} $$
#
# which is a monotonically increasing function of u. This means that larger concentrations move faster (have higher wave velocities) through the column than smaller ones.
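# A quick numerical check of this claim (a sketch; it just re-evaluates $c(u) = 1/\Psi(u)$ with the parameter values $\alpha = 1$, $\beta = 1$, $\sigma = 2$ that the code cells above produce):

```python
import numpy as np

# Parameter values matching the ones computed earlier in this notebook.
alpha, beta, sig = 1.0, 1.0, 2.0

def c(u):
    """Wave velocity c(u) = 1 / Psi(u)."""
    return (1 + sig * u) ** 2 / (alpha * (beta + (1 + sig * u) ** 2))

u = np.linspace(0.0, 1.0, 200)
cu = c(u)

# c(u) should increase monotonically: every finite difference is positive.
assert np.all(np.diff(cu) > 0)
print(c(0.0), c(1.0))  # 0.5 and 0.9 with these parameters
```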
# +
uexp = np.linspace(0, 1, 1000)
tauexp, uexp = np.meshgrid(tau, uexp)
#zexp = 0*zv
zexp = (tauexp - uexp)*(1 + sig*uexp)**2/(alpha*(beta + (1 + sig*uexp)**2))
uexp[zexp < 0] = 0
uexp[zexp > 1] = 1
tauexp[zexp < 0] = 0
tauexp[zexp > 1] = 3
zexp[zexp < 0] = 0
zexp[zexp > 1] = 1
# +
uexp = np.linspace(0, 1, 1000)
tauexp = 0.4
fig, ax = plt.subplots(figsize=(7, 7))
zexp = (tauexp - uexp)*(1 + sig*uexp)**2/(alpha*(beta + (1 + sig*uexp)**2))
uexp[zexp < 0] = 0
uexp[zexp > 1] = 1
#zexp[zexp < 0] = 0
ax.plot(zexp, uexp, c='k', label=r'$\tau$ = {}'.format(tauexp))
ax.set_xlabel('z')
ax.set_ylabel('u')
ax.set_xlim([-1,3])
# -
#uexp[(tauexp > 1 + psi(1)*zexp)] = 0
#uexp[(tauexp < psi(0)*zexp)] = 1
# +
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
#ax.plot_surface(zv, tauv, u, cmap=cm.coolwarm)
ax.plot_surface(zexp, tauexp, uexp, cmap=cm.coolwarm)
ax.set_xlabel('z')
ax.set_ylabel(r'$\tau$')
ax.set_xlim([0,1])
ax.view_init(elev=30, azim=-135)
# +
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
#ax.plot_surface(zv, tauv, u, cmap=cm.coolwarm)
ax.plot_surface(zexp, tauexp, uexp, cmap=cm.coolwarm)
ax.set_xlabel('z')
ax.set_ylabel(r'$\tau$')
ax.set_xlim([0,1])
ax.view_init(elev=30, azim=-45)
# +
fig, ax = plt.subplots(figsize = (6, 6))
ax.set_xlim((0, 1))
ax.set_ylim((-0.5, 1.5))
line, = ax.plot([], [], lw=2, c='k')
line2, = ax.plot([], [], lw=2, c='k')
line3, = ax.plot([], [], lw=2, c='k')
def init():
line.set_data([], [])
line2.set_data([], [])
line3.set_data([], [])
return (line, line2, line3,)
def animate(i):
#tauexp1 = tauexp[0, 10*i]
tauv1 = tauv[10*i, 0]
ui = uexp[:, 10*i]
u2 = u[10*i, :]
zi = zexp[:, 10*i]
z2 = zv[10*i, :]
line.set_data(zi[(zi > 0) & (zi < 1)], ui[(zi > 0) & (zi < 1)])
line2.set_data(z2[tauv1 > 1 + psi(1)*z2], u2[tauv1 > 1 + psi(1)*z2])
line3.set_data(z2[tauv1 < psi(0)*z2], u2[tauv1 < psi(0)*z2])
return (line, line2, line3,)
# +
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=100, interval=15, blit=True)
HTML(anim.to_html5_video())
# -
# Interesting. While mathematically consistent, this solution is physically impossible: how can the concentration take on two values simultaneously at the same point in space?
#
# Our first order model fails and loses reliability in this region. The concentration profiles are steep and exhibit large second order derivatives. Thus, axial dispersion is no longer negligible, and we should revise our model to take this into account.
#
# Since the model is invalid in this region, we replace it with an alternate one. The larger concentration values are **superimposed** on top of the smaller ones. This creates a discontinuity or shock.
#
# The steady-state mass balance at the shock interface requires that no mass accumulate: the flow entering and leaving the interface must be identical. Both flows are the sum of two contributions: (1) the convective flow of the mobile phase, whose velocity relative to the shock is $v/\epsilon - V_s$, and (2) the flow of the adsorbable component released by the stationary phase. Thus, we have:
#
# $$\epsilon C_{+} \bigg( \frac{v}{\epsilon} - V_s \bigg) + (1 - \epsilon) \Gamma_{-}V_s = \epsilon C_{-} \bigg( \frac{v}{\epsilon} - V_s \bigg) + (1 - \epsilon) \Gamma_{+}V_s$$
#
# which becomes:
#
# $$\frac{v}{V_s} = \epsilon + (1 - \epsilon)\frac{\Gamma_{+} - \Gamma_{-}}{C_{+} - C_{-}} $$
#
# Substitution gives:
#
# $$\frac{v}{ \epsilon V_s} = 1 + \frac{\beta}{u_{+} - u_{-}} \bigg[ \frac{u_{+}}{1 + \sigma u_{+}} - \frac{u_{-}}{1 + \sigma u_{-}} \bigg] $$
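# The jump condition can be evaluated directly — a sketch using the parameter values $\alpha = 1$, $\beta = 1$, $\sigma = 2$ computed in the code cells above; `shock_slope` is an illustrative helper name. For the fully developed shock ($u_- = 0$, $u_+ = 1$) the resulting slope in the $(z, \tau)$ plane must agree with the coefficient $\alpha[1 + \beta/(1+\sigma)]$ that multiplies $(z - \bar{z})$ in the large-$z$ trajectory:

```python
alpha, beta, sig = 1.0, 1.0, 2.0

def shock_slope(um, up):
    """d(tau)/dz of the shock path between states u_minus and u_plus,
    i.e. the dimensionless jump condition alpha * v / (eps * V_s) / alpha form."""
    return alpha * (1 + (beta / (up - um)) * (up / (1 + sig * up) - um / (1 + sig * um)))

full = shock_slope(0.0, 1.0)
# Consistency with the fully developed shock trajectory used for z > z_bar.
assert abs(full - alpha * (1 + beta / (1 + sig))) < 1e-12
print(full)  # 4/3 with these parameters
```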
# Final solution:
#
# After the shock has been formed:
#
# $$u = 0, \tau < \alpha z - \frac{1}{2 \sigma} + \sqrt{\frac{2 \alpha \beta z}{\sigma}}, z_s < z < \bar{z} $$
#
# $$u = 0, \tau < \bar{\tau} + \alpha \bigg[ 1 + \frac{\beta}{1 + \sigma} \bigg](z - \bar{z}), z > \bar{z} $$
#
# where $z_s$ is where the shock trajectory begins:
#
# $$z_s = \frac{1}{2 \alpha \beta \sigma}, \tau_s = \frac{(1 + \beta)}{2 \beta \sigma} $$
#
# and $\bar{z}$ is where it ends:
#
# $$\bar{z} = \frac{(1 + \sigma)^2}{2 \alpha \beta \sigma}, \bar{\tau} = \frac{(1 + \sigma)^2 + \beta(1 + 2 \sigma)}{2 \beta \sigma} $$
#
# Above the shock trajectory,
#
# $$z = \frac{(\tau - u)(1 + \sigma u)^2}{\alpha [\beta + (1 + \sigma u)^2]}, \alpha z - \frac{1}{2 \sigma} + \sqrt{\frac{2 \alpha \beta z}{\sigma}} < \tau < 1 + \Psi(1) z, z_s < z < \bar{z} $$
#
# For large times, it is just 1:
#
# $$u = 1, \tau > 1 + \Psi(1)z, z_s < z < \bar{z} $$
#
# and
#
# $$u = 1, \tau > \bar{\tau} + \alpha \bigg( 1 + \frac{\beta}{1 + \sigma} \bigg) (z - \bar{z}), z > \bar{z} $$
# +
zv, tauv = np.meshgrid(z, tau)
u = 0*zv
zs = 1/(2 * alpha * beta * sig)
taus = (1 + beta)/(2 * beta * sig)
zb = (1 + sig)**2/(2 * alpha * beta * sig)
taub = ((1 + sig)**2 + beta*(1 + 2*sig))/(2*beta*sig)
taulow = alpha*zv - 1/(2*sig) + np.sqrt(2 * alpha * beta * zv/sig)
tauhi = 1 + psi(1)*zv
taularge = taub + alpha*(1 + beta/(1 + sig))*(zv - zb)
# +
u[(tauv > 1 + psi(1)*zv) & (zv > zs) & (zv < zb)] = 1
u[(tauv > taularge) & (zv > zb)] = 1
u[(tauv > 1 + psi(1)*zv) & (zv <= zs)] = 1
# -
ttest = tauv[(tauv < 1 + psi(1)*zv) & (tauv > psi(0)*zv) & (zv < zs)]
ztest = zv[(tauv < 1 + psi(1)*zv) & (tauv > psi(0)*zv) & (zv < zs)]
Cinit = ztest*0 + 0.5
Cinit.shape
import scipy.optimize
def psolver(u):
return psi(u)*ztest + u - ttest
umiddle = scipy.optimize.newton_krylov(psolver, Cinit, f_tol=1e-14, maxiter=100, verbose=True)
u[(tauv < 1 + psi(1)*zv) & (tauv > psi(0)*zv) & (zv < zs)] = umiddle
# +
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(zv, tauv, u, cmap=cm.coolwarm)
ax.set_xlabel('z')
ax.set_ylabel(r'$\tau$')
ax.view_init(elev=75, azim=-90)
# -
ttest = tauv[(tauv > taulow) & (tauv < 1 + psi(1)*zv) & (zv > zs) & (zv < zb)]
ztest = zv[(tauv > taulow) & (tauv < 1 + psi(1)*zv) & (zv > zs) & (zv < zb)]
Cinit = ztest*0 + 0.55
Cinit.shape
import scipy.optimize
def psolver(u):
return psi(u)*ztest + u - ttest
umiddle = scipy.optimize.newton_krylov(psolver, Cinit, f_tol=1e-14, maxiter=100, verbose=True)
u[(tauv > taulow) & (tauv < 1 + psi(1)*zv) & (zv > zs) & (zv < zb)] = umiddle
# +
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(zv, tauv, u, cmap=cm.coolwarm)
ax.set_xlabel('z')
ax.set_ylabel(r'$\tau$')
ax.view_init(elev=75, azim=-90)
# +
fig, ax = plt.subplots(figsize = (6, 6))
ax.set_xlim((0, 1))
ax.set_ylim((-0.5, 1.5))
line, = ax.plot([], [], lw=2, c='k')
def init():
line.set_data([], [])
return (line,)
def animate(i):
#tauexp1 = tauexp[0, 10*i]
tauv1 = tauv[6*i, 0]
u2 = u[6*i, :]
z2 = zv[6*i, :]
line.set_data(z2, u2)
return (line,)
# +
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=99, interval=55, blit=True)
HTML(anim.to_html5_video())
# -
# Solutions of 1st order PDEs with discontinuities are known as *weak solutions*, meaning they are not continuously differentiable.
#
# The discontinuity doesn't necessarily originate from the ICs, as demonstrated in the previous example. The general approach to coming up with a solution to such problems is:
#
# 1. Determine the location of the shock in the (x,t) plane.
# 2. Evaluate the propagation of the shock: $\bigg( \frac{dx}{dt} \bigg)_{\text{shock}} = V_s $
#
# Where do shocks occur then? Let's look at the general form:
#
# $$\frac{\partial \gamma}{\partial x} + \Psi(\gamma) \frac{\partial \gamma}{\partial t} = 0 $$
#
# where the wave velocity is positive, and we have the IC:
#
# $$\gamma = \gamma^0(\xi), x = \xi, t=0, 0<\xi<1 $$
#
# And by definition:
#
# $$\frac{dt}{dx} = \Psi(\gamma) $$
#
# Integrating gives:
#
# $$t = \Psi(\gamma^0(\xi))(x - \xi) $$
#
# The shock occurs at the first intersection of these characteristics.
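# This can be checked numerically for the chromatography example above — a sketch: launch the boundary characteristics $\tau = \xi + \Psi(\xi) z$, compute where each pair of neighbouring characteristics intersects, and verify that the earliest intersection reproduces $z_s = 1/(2\alpha\beta\sigma)$:

```python
import numpy as np

alpha, beta, sig = 1.0, 1.0, 2.0  # parameter values from the code cells above

def psi(u):
    return alpha * (1 + beta / (1 + sig * u) ** 2)

# Boundary characteristics tau = xi + psi(xi) * z for xi in (0, 1).
xi = np.linspace(0.0, 1.0, 20001)

# Intersection z of each pair of neighbouring characteristics:
# xi_a + psi(xi_a) z = xi_b + psi(xi_b) z  =>  z = (xi_b - xi_a) / (psi(xi_a) - psi(xi_b)).
z_cross = np.diff(xi) / (psi(xi[:-1]) - psi(xi[1:]))

z_s_numeric = z_cross.min()
z_s_exact = 1 / (2 * alpha * beta * sig)  # shock formation point derived earlier
print(z_s_numeric, z_s_exact)
assert abs(z_s_numeric - z_s_exact) < 1e-3
```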
| notebooks/1st_order_PDEs_shocks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Evaluation for Scaler Class
#
# - Evaluates the effect of the preprocessy `Scaler.execute` function on model accuracy compared to sklearn
# - Evaluates on 1 dataset
#     * Melbourne Housing Snapshot
# - Uses a standard test size of 0.3 for all 3 cases, i.e.
#     * MinMaxScaling
#     * StandardScaling
#     * BinaryScaling
# - Uses a RandomForestRegressor() model
# - Uses r2_score from sklearn.metrics
# - Comparisons between sklearn and preprocessy in terms of accuracy and time are shown at the end
# To access preprocessy module. Required in .ipynb files
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
# +
import pandas as pd
import numpy as np
import matplotlib
from sklearn import preprocessing
import warnings
warnings.filterwarnings('ignore')
from preprocessy.scaling import Scaler
from preprocessy.data_splitting import Split
import time
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error,classification_report, r2_score, mean_squared_error
from sklearn.model_selection import train_test_split
from preprocessy.handlenullvalues import NullValuesHandler
np.random.seed(101)
# +
melb_data = pd.read_csv('../datasets/handling_null_values/melb_data.csv')
melb_data_copy2 = melb_data
dtf_1 = pd.DataFrame(columns = ['Accuracy', 'Time'])
dtf_2 = pd.DataFrame(columns = ['Accuracy', 'Time'])
dtf_3 = pd.DataFrame(columns = ['Accuracy', 'Time'])
melb_data.head()
# +
# Consider Price as Target property and others as Predictors
melb_target = melb_data.Price
melb_predictors = melb_data.drop(['Price'], axis=1)
melb_numeric_predictors = melb_predictors.select_dtypes(exclude=['object'])
col_names = list(melb_numeric_predictors.columns)
col_names
# -
melb_data_copy2.isnull().sum().sort_values(ascending = False)[:4]
len(melb_data_copy2)
# ## Fill Null Values and split into train and test dataframes
imputed_df = melb_data_copy2.select_dtypes(exclude=['object']).fillna(melb_data_copy2.select_dtypes(exclude=['object']).mean())
train, test = train_test_split(imputed_df, test_size = 0.3, random_state = 69)
imputed_df.isnull().sum().sort_values(ascending = False)[:4]
imputed_df[:2]
train[:2]
test[:2]
# ## MinMaxScaler
# - min-max scaling typically yields smaller standard deviations than the raw data, since values are compressed into [0, 1]
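# The effect is easy to see on a toy array — a sketch using the standard min-max transform, independent of either library:

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 10.0])
# Standard min-max transform: rescale to the [0, 1] interval.
x_mm = (x - x.min()) / (x.max() - x.min())

print(x.std(), x_mm.std())
assert x_mm.min() == 0.0 and x_mm.max() == 1.0
# Spread shrinks whenever the original range exceeds 1.
assert x_mm.std() < x.std()
```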
# ### sklearn
# +
mm_scaler = preprocessing.MinMaxScaler()
start = time.time()
#Scale the data
df_mm = mm_scaler.fit_transform(imputed_df.drop(['Price'], axis =1))
df_mm = pd.DataFrame(df_mm, columns=col_names)
#Split the data
X_train, X_test, y_train, y_test = train_test_split(df_mm, imputed_df['Price'], test_size=0.3, random_state=69)
#Fit RandomForestRegressor model
model = RandomForestRegressor(random_state = 42)
model.fit(X_train, y_train)
end=time.time()
sklearn_preds = model.predict(X_test)
# Get time and accuracy
sklearn_time = np.round(end - start,4)
sklearn_accuracy = np.round(r2_score(y_test, sklearn_preds),4)
# Print Diff Error Values
print('Mean Absolute Error:', mean_absolute_error(y_test, sklearn_preds))
print('Mean Squared Error:', mean_squared_error(y_test, sklearn_preds))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, sklearn_preds)))
#Append Dataframe
dtf_1.loc['sklearn'] = [sklearn_accuracy, sklearn_time]
# -
# ### preprocessy
# +
def preprocessy_score_dataset(params):
target_col = params["target_label"]
start=time.time()
Scaler().execute(params)
# Train dataset
X_train = params["train_df"].drop(target_col,axis =1)
y_train = params["train_df"][[target_col]]
# Test dataset
X_test = params["test_df"].drop(target_col,axis =1)
y_test = params["test_df"][[target_col]]
# print(X_train[:2])
# print(X_test[:2])
# Fit RandomForestRegressor model
model = RandomForestRegressor(random_state = 42)
model.fit(X_train, y_train)
end=time.time()
preprocessy_preds = model.predict(X_test)
# Get time and accuracy
preprocessy_time = np.round(end - start,4)
preprocessy_accuracy = np.round(r2_score(y_test, preprocessy_preds),4)
# Print Diff Error Values
print('Mean Absolute Error:', mean_absolute_error(y_test, preprocessy_preds))
print('Mean Squared Error:', mean_squared_error(y_test, preprocessy_preds))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, preprocessy_preds)))
return preprocessy_accuracy, preprocessy_time
# -
params = {"train_df": train, "test_df": test, "target_label": "Price", "test_size": 0.3, "type": "MinMaxScaler"}
# +
acc, t = preprocessy_score_dataset(params = params)
dtf_1.loc['Preprocessy'] = [acc, t]
# -
# ## StandardScaler
# ### sklearn
# +
s_scaler = preprocessing.StandardScaler()
start = time.time()
#Scale the data
df_s = s_scaler.fit_transform(imputed_df.drop(['Price'], axis =1))
df_s = pd.DataFrame(df_s, columns=col_names)
#Split the data
X_train, X_test, y_train, y_test = train_test_split(df_s, imputed_df['Price'], test_size=0.3, random_state=69)
#Fit RandomForestRegressor model
model = RandomForestRegressor(random_state = 42)
model.fit(X_train, y_train)
end=time.time()
sklearn_preds = model.predict(X_test)
# Get time and accuracy
sklearn_time = np.round(end - start,4)
sklearn_accuracy = np.round(r2_score(y_test, sklearn_preds),4)
# Print Diff Error Values
print('Mean Absolute Error:', mean_absolute_error(y_test, sklearn_preds))
print('Mean Squared Error:', mean_squared_error(y_test, sklearn_preds))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, sklearn_preds)))
#Append Dataframe
dtf_2.loc['sklearn'] = [sklearn_accuracy, sklearn_time]
# -
# ### preprocessy
# +
params = {"train_df": train, "test_df": test, "target_label": "Price", "test_size": 0.3, "type": "StandardScaler"}
acc, t = preprocessy_score_dataset(params = params)
dtf_2.loc['Preprocessy'] = [acc, t]
# -
# ## BinaryScaler
# ### sklearn
# +
b_scaler = preprocessing.Binarizer()
start = time.time()
#Scale the data
df_b = b_scaler.fit_transform(imputed_df.drop(['Price'], axis =1))
df_b = pd.DataFrame(df_b, columns=col_names)
#Split the data
X_train, X_test, y_train, y_test = train_test_split(df_b, imputed_df['Price'], test_size=0.3, random_state=69)
#Fit RandomForestRegressor model
model = RandomForestRegressor(random_state = 42)
model.fit(X_train, y_train)
end=time.time()
sklearn_preds = model.predict(X_test)
# Get time and accuracy
sklearn_time = np.round(end - start,4)
sklearn_accuracy = np.round(r2_score(y_test, sklearn_preds),4)
# Print Diff Error Values
print('Mean Absolute Error:', mean_absolute_error(y_test, sklearn_preds))
print('Mean Squared Error:', mean_squared_error(y_test, sklearn_preds))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, sklearn_preds)))
#Append Dataframe
dtf_3.loc['sklearn'] = [sklearn_accuracy, sklearn_time]
# -
# ### preprocessy
params = {"train_df": train, "test_df": test, "target_label": "Price", "test_size": 0.3, "type": "BinaryScaler"}
# +
acc, t = preprocessy_score_dataset(params)
dtf_3.loc['Preprocessy'] = [acc, t]
# -
# #### MinMaxScaler
dtf_1
# #### StandardScaler
dtf_2
# #### BinaryScaler
dtf_3
| evaluations/evaluate_scaling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''tf'': conda)'
# language: python
# name: python37664bittfconda946bd5c5de684c5d81ef2ce52df4450d
# ---
# ### ***1. What is Graph Embedding, and what are the main algorithm families?***
# Graph Embedding is an embedding-based dimensionality-reduction technique that can effectively mine node feature representations from a graph network. Its purpose is to represent the nodes of a graph with low-dimensional vectors that preserve the characteristics of the original network.
#
# The main model families are:
# - graph factorization (factorization methods)
# - random walk techniques
# - deep learning
# ### ***2. How can Graph Embedding be used in a recommender system, e.g., Netflix movie recommendation? Briefly outline the approach.***
# In a recommender system, the interactions between users and items (e.g., Netflix movies) can be converted into a graph as follows:
#
# 1. Treat each movie and each user as a node in the graph;
# 2. Treat the interactions between users and movies as edges linking the nodes;
# 3. Compute each edge's weight from a combination of signals such as the user's clicks, favorites, ratings, and watch time for the movie.
#
# Then learn a representation of the graph with methods such as RandomWalk, Node2Vec, or GCN to obtain the graph embedding.
#
# Finally, use the embedded graph nodes as input to train the movie recommendation model.
# ### ***3. How can Graph Embedding be used in traffic flow prediction? Briefly outline the approach.***
# In traffic flow prediction, the elements of the traffic network can form a graph as follows:
#
# 1. Treat road intersections as graph nodes;
# 2. Treat the roads connecting the intersections as graph edges;
# 3. Build the edge weights from information such as the physical length of each road and the in/out flow exchanged between intersections.
#
# Then learn a representation of the graph with methods such as RandomWalk, Node2Vec, or GCN to obtain the graph embedding.
#
# Finally, use the embedded graph nodes as input to train a traffic flow prediction model, such as a fully connected neural network. Since traffic flow varies over time, an RNN, LSTM, or GRU network can also be placed before the fully connected predictor to learn how the flow changes with time and supply that as an input feature for prediction.
# ### ***4. How can Graph Embedding be used in text classification? Briefly outline the approach.***
# In text classification, the elements of the text can form a graph as follows:
#
# 1. Split the text into documents and words, with each document and each word becoming a node of the graph;
# 2. Treat document-word relations and word-word relations as graph edges;
# 3. For document-word edges, use the TF-IDF algorithm to measure how important a word is to a document and take that as the edge weight; for word-word edges, use word co-occurrence statistics, as in the Word2Vec setting, to define the weight between words.
#
# Then learn a representation of the graph with methods such as RandomWalk, Node2Vec, or GCN to obtain the graph embedding.
#
# Finally, use the embedded graph nodes as input to train a text classification model, such as a fully connected neural network.
#
#
# An alternative way to construct the graph for text classification is:
# 1. Build a separate graph for each document;
# 2. Treat every word in the document as a node of its graph;
# 3. Decide whether two nodes (words) are connected by an edge based on their co-occurrence within the document.
#
# Classification is then performed at the graph level.
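# The random-walk step shared by DeepWalk/Node2Vec-style methods can be sketched in a few lines. The graph below is a hypothetical user-movie interaction graph (all names are illustrative); in practice the generated walks would be fed to a skip-gram model (e.g. gensim's Word2Vec) to obtain the node embeddings:

```python
import random

random.seed(0)

# Hypothetical bipartite user-movie interaction graph as an adjacency dict.
graph = {
    "user_a": ["movie_1", "movie_2"],
    "user_b": ["movie_2", "movie_3"],
    "movie_1": ["user_a"],
    "movie_2": ["user_a", "user_b"],
    "movie_3": ["user_b"],
}

def random_walk(graph, start, length):
    """Uniform random walk (DeepWalk-style); Node2Vec would bias the
    next-step choice with its return/in-out parameters p and q."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(graph[walk[-1]]))
    return walk

# One walk per node; these "sentences" are the training corpus for skip-gram.
walks = [random_walk(graph, node, 5) for node in graph]
print(walks[0])
```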
| L13/Thinking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="pdu6HdnKsaZR" outputId="cac0e0c8-f139-466d-b5e7-ee4d8fc46fd9"
# !pip install -q bert-extractive-summarizer
# !pip install -q spacy==2.1.3
# !pip install -q transformers==2.2.2
# !pip install -q neuralcoref
# + id="CF88ZdxOtAIG"
from summarizer import Summarizer
from pprint import pprint
# + id="edfIarrYtxYG"
with open("The Present by <NAME>.txt", 'r') as file:
data = file.read().replace('\n', '')
# + id="uurP1DJUzqpN"
data = data.replace("\ufeff", "")
# + colab={"base_uri": "https://localhost:8080/", "height": 52} id="tBHeKCkYtxap" outputId="43c440d4-b2df-49fc-b3a6-8bd1f75a41e2"
data[0:100]
# + id="_mz1CUQ-tcxw" colab={"base_uri": "https://localhost:8080/", "height": 165, "referenced_widgets": ["1ddea2fadcf54f54af27b06de3f6bfbb", "d3c0f66b77244bef9490b1abb203194d", "2cdf125675094fde8c360a73f8a22496", "<KEY>", "156cc1a1dd494b6fa3872e801619a0d0", "26e8689652c84532ab352e23bfa7b993", "<KEY>", "0fe7ee5433f545e29058ba5d1bb2f901", "<KEY>", "a646da07b54441f9be91881d5d9fad80", "<KEY>", "<KEY>", "<KEY>", "2550e787808642b28c9e694c20b682ad", "<KEY>", "24143e15fe744a6d92c526e19a17b0d4", "9b2a34fac4414e58bd3c7440de663661", "<KEY>", "a068918d9ed642aebedd4f69b1117110", "6e390e78d1a34628b3d1e371cba6cdb8", "061d2049dd2e4b8da89f632ea0ec964d", "fb8c6afe56dd4295b30c6a09d541f921", "e5abd3a72c93427986159fe62d2c8161", "904e25aae6394be0baaa20cdfcc412f0"]} outputId="aaea8e17-b1a2-451a-f3cb-eb8686ac99c8"
model = Summarizer()
# + id="wkqAhTfpuQ25"
result = model(data, num_sentences=5, min_length=60)
# + id="kM0e8Qko0EMs"
full = ''.join(result)
# + colab={"base_uri": "https://localhost:8080/"} id="Avej5YzWuW7x" outputId="65d3178c-ba62-4277-ff03-68c6da9b3c8e"
pprint(full)
# + id="bGmJEChzvS6U"
| bert_summarizer_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import necessary modules
import geopandas as gpd
# read in data consisting minimum travel times to different shopping centers
grid = gpd.read_file('grid/grid.shp')
# read in roads
roads = gpd.read_file('roads.shp')
# check the crs
print(roads.crs)
print(grid.crs)
# +
# reproject roads to same crs as the grid
roads = roads.to_crs(crs=grid.crs)
# check the crs of roads
print(roads.crs)
# +
# import necessary module
import matplotlib.pyplot as plt
# create the figure
fig, ax = plt.subplots(figsize=(12,8))
# plot the grid
grid.plot(ax=ax, column='min_t', cmap='RdYlBu', legend=True, legend_kwds={'label': 'Travel time (min) to the nearest shopping center'})
# plot the roads
roads.plot(ax=ax, color='grey', linewidth=1)
| .ipynb_checkpoints/Exercise 5, problem 1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.0 64-bit
# name: python3
# ---
# # Sequence Reconstruction [PASSED]
# Check whether the original sequence org can be uniquely reconstructed from the sequences in seqs. The org sequence is a permutation of the integers from 1 to n, with 1 ≤ n ≤ 10^4. Reconstruction means building a shortest common supersequence of the sequences in seqs (i.e., a shortest sequence so that all sequences in seqs are subsequences of it). Determine whether there is only one sequence that can be reconstructed from seqs and whether it is the org sequence.
#
# ### Example 1
# - Input: org: [1,2,3], seqs: [[1,2],[1,3]]
# - Output: false
# - Explanation: [1,2,3] is not the only one sequence that can be reconstructed, because [1,3,2] is also a valid sequence that can be reconstructed.
#
# ### Example 2
# - Input: org: [1,2,3], seqs: [[1,2]]
# - Output: false
# - Explanation: The reconstructed sequence can only be [1,2].
#
# ### Example 3
# - Input: org: [1,2,3], seqs: [[1,2],[1,3],[2,3]]
# - Output: true
# - Explanation: The sequences [1,2], [1,3], and [2,3] can uniquely reconstruct the original sequence [1,2,3].
#
# ### Example 4:
# - Input: org: [4,1,5,2,6,3], seqs: [[5,2,6,3],[4,1,5,2]]
# - Output: true
#
# ## Solution
#
# ### Intuition
# Take the first number of each list. Among those first numbers, find the ones that only ever appear in first position (never later in any list). For the reconstruction to be unique there must be exactly one such number at every step; append it to the result and delete it from all lists. Repeat until all input lists are empty.
#
# ### Implementation
# +
def is_unique_first(x: int, seqs: list):
return len([s for s in seqs if x in s[1:]]) == 0
def reconstruct(orgs: list, seqs: list) -> bool:
res = []
while seqs:
first: set = set([lst[0] for lst in seqs if is_unique_first(lst[0], seqs)])
if not first or len(first) > 1:  # more than one candidate means the reconstruction is not unique
return False
else:
elem = list(first)[0]
res.append(elem)
seqs = [[x for x in lst if x != elem] for lst in seqs]
seqs = [s for s in seqs if s]
return res == orgs
reconstruct([1,2,3], [[1,2],[1,3]])
reconstruct([1,2,3], [[1,2],[1,3],[2,3]])
reconstruct([4,1,5,2,6,3], [[5,2,6,3],[4,1,5,2]])
# -
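# As an independent sanity check, a brute-force reference works for tiny n: enumerate every permutation of org's elements and keep the ones that contain all of seqs as subsequences; org is a unique reconstruction exactly when it is the only survivor. (This sketch ignores supersequences shorter than n, which is sufficient for the examples above.)

```python
from itertools import permutations

def is_subsequence(sub, seq):
    it = iter(seq)
    return all(x in it for x in sub)  # `in` advances the iterator

def brute_reconstruct(org, seqs):
    candidates = [p for p in permutations(org)
                  if all(is_subsequence(s, p) for s in seqs)]
    return candidates == [tuple(org)]

assert brute_reconstruct([1, 2, 3], [[1, 2], [1, 3]]) is False          # Example 1
assert brute_reconstruct([1, 2, 3], [[1, 2]]) is False                  # Example 2
assert brute_reconstruct([1, 2, 3], [[1, 2], [1, 3], [2, 3]]) is True   # Example 3
assert brute_reconstruct([4, 1, 5, 2, 6, 3],
                         [[5, 2, 6, 3], [4, 1, 5, 2]]) is True          # Example 4
```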
# Analysis:
# - Time Complexity: O(n^2)
# - Space Complexity: O(n)
| python-data-structures/interviews/goog-2021-02-03.ipynb |