# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text"
# # [deplacy](https://koichiyasuoka.github.io/deplacy/) for syntactic analysis
# + [markdown] colab_type="text"
# ## with [Camphr-Udify](https://camphr.readthedocs.io/en/latest/notes/udify.html)
#
# + colab_type="code"
# !pip install deplacy camphr en-udify@https://github.com/PKSHATechnology-Research/camphr_models/releases/download/0.7.0/en_udify-0.7.tar.gz
import pkg_resources, importlib  # imp is deprecated; use importlib instead
importlib.reload(pkg_resources)
import spacy
nlp=spacy.load("en_udify")
doc=nlp("Слобода, сакам да се радувам, а нешто не ме оставаат, разбираш.")
import deplacy
deplacy.render(doc)
deplacy.serve(doc,port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
# + [markdown] colab_type="text"
# ## with [spaCy](https://spacy.io/)
#
# + colab_type="code"
# !pip install deplacy
# !sudo pip install -U spacy
# !sudo python -m spacy download mk_core_news_lg
import pkg_resources, importlib  # imp is deprecated; use importlib instead
importlib.reload(pkg_resources)
import spacy
nlp=spacy.load("mk_core_news_lg")
doc=nlp("Слобода, сакам да се радувам, а нешто не ме оставаат, разбираш.")
import deplacy
deplacy.render(doc)
deplacy.serve(doc,port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
# File: doc/mk.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
# Importing the dataset
dataset = pd.read_csv('Position_Salaries.csv')
X = dataset.iloc[:, 1:2].values
y = dataset.iloc[:, 2].values
# Fitting Decision Tree Regression to the dataset
regressor = DecisionTreeRegressor(random_state = 0)
regressor.fit(X, y)
# Predicting a new result
y_pred = regressor.predict([[6.5]])  # predict expects a 2D array of samples
print(y_pred)
# Visualising the Regression results
plt.scatter(X, y, color = 'red')
plt.plot(X, regressor.predict(X), color = 'blue')
plt.title('Truth or Bluff (Decision Tree Model)')
plt.xlabel('Position level')
plt.ylabel('Salary')
plt.show()
# Visualising the Decision Tree Regression results (higher resolution) showing the splits
X_grid = np.arange(min(X), max(X), 0.01)
X_grid = X_grid.reshape((len(X_grid), 1))
plt.scatter(X, y, color = 'red')
plt.plot(X_grid, regressor.predict(X_grid), color = 'blue')
plt.title('Truth or Bluff (Decision Tree Regression)')
plt.xlabel('Position level')
plt.ylabel('Salary')
plt.show()
# File: Day 6 - Decision Tree Regression/Decision Tree Regression.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from pandas import DataFrame, Series
fuel = pd.read_csv("d.fuel.dat", sep=",", index_col=0)
fuel.shape
fuel.loc[5]
fuel.loc[1:5]
fuel["mpg"].mean()
fuel[7:12]["mpg"].mean()
fuel["t_kg"] = 0.45359 * fuel["weight"]
fuel["t_kml"] = fuel["mpg"]/3.789*1.6093
fuel["t_kml"].mean()
fuel["t_kg"].mean()
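The two unit conversions above can be factored into small helper functions. A minimal sketch using the same constants as the columns above (it assumes, as the `0.45359` factor suggests, that `weight` is in pounds, and it reuses the notebook's `3.789` litres-per-gallon constant):

```python
def lb_to_kg(pounds):
    """Convert a weight in pounds to kilograms (as in the t_kg column)."""
    return 0.45359 * pounds

def mpg_to_km_per_l(mpg):
    """Convert miles per gallon to kilometres per litre (as in the t_kml column)."""
    return mpg / 3.789 * 1.6093

print(lb_to_kg(1000))        # 453.59
print(mpg_to_km_per_l(24))   # ≈ 10.19
```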
# File: Semester/SW01/SW1.2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="images/dask_horizontal.svg" align="right" width="30%">
# # Bag: Parallel Lists for Semi-Structured Data
# Dask Bag excels at processing data that can be represented as a sequence of arbitrary inputs. We refer to this as "messy" data, because it can contain complex nested structures, missing fields, mixtures of data types, and so on. This *functional* programming style fits well with standard Python iteration, such as that found in the `itertools` module.
#
# Messy data is often encountered early in data-processing pipelines, when large volumes of raw data are first consumed. The initial set of data might be JSON, CSV, XML, or any other format that does not enforce strict structure and data types.
# For this reason, the initial data massaging and processing is often done with Python `list`s, `dict`s, and `set`s.
#
# These core data structures are optimized for general-purpose storage and processing. Adding streaming computation with iterator/generator expressions, or with libraries like `itertools` or [`toolz`](https://toolz.readthedocs.io/en/latest/), lets us process large volumes of data in a small memory footprint. If we combine this with parallel processing, we can churn through a fair amount of data.
#
# Dask Bag is a high-level Dask collection that automates common workloads of this form. In a nutshell:
#
#     dask.bag = map, filter, toolz + parallel execution
#
# **Related Documentation**:
#
# * [Bag documentation](https://docs.dask.org/en/latest/bag.html)
# * [Bag screencast](https://youtu.be/-qIiJ1XtSv0)
# * [Bag API](https://docs.dask.org/en/latest/bag-api.html)
# * [Bag examples](https://examples.dask.org/bag.html)
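As a point of comparison for the "map, filter + reduction" style described above, the same streaming idea can be expressed in plain Python with generator expressions, no Dask required; a minimal sketch:

```python
# Streaming map/filter with generators: nothing is materialized
# until the final reduction, so memory use stays small even for
# large inputs.
numbers = range(1_000_000)                    # a lazy source
evens = (n for n in numbers if n % 2 == 0)    # filter
squares = (n * n for n in evens)              # map
total = sum(squares)                          # reduction
print(total)
```

Dask Bag keeps this lazy, functional style but additionally partitions the sequence and evaluates the partitions in parallel.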
# ## Create data
# %run prep.py -d accounts
# ## Setup
# Again, we'll use the distributed scheduler. Schedulers will be explained in depth [later](05_distributed.ipynb).
# +
from dask.distributed import Client
client = Client(n_workers=4)
# -
# ## Creation
# You can create a `Bag` from a Python sequence, from files, from data on S3, and so on.
# We demonstrate using `.take()` to show elements of the data. (Executing `.take(1)` results in a tuple with one element.)
#
# Note that the data are partitioned into blocks, with many items per block. In the first example the two partitions contain five elements each; in the following two examples, each file is partitioned into one or more blocks of bytes.
# Each element is an integer
import dask.bag as db
b = db.from_sequence([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], npartitions=2)
b.take(3)
# Each element is a text file, where each line is a JSON object.
# Note that compression is handled automatically
import os
b = db.read_text(os.path.join('data', 'accounts.*.json.gz'))
b.take(1)
# Edit sources.py to configure source locations
import sources
sources.bag_url
# Requires `s3fs` library
# each partition is a remote CSV text file
b = db.read_text(sources.bag_url,
storage_options={'anon': True})
b.take(1)
# ## Manipulation
# `Bag` objects hold the standard functional API found in projects like the Python standard library, `toolz`, or `pyspark`, including `map`, `filter`, `groupby`, and others.
#
# Operations on `Bag` objects create new bags. Call the `.compute()` method to trigger execution, as we saw for `Delayed` objects.
# +
def is_even(n):
return n % 2 == 0
b = db.from_sequence([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
c = b.filter(is_even).map(lambda x: x ** 2)
c
# -
# Blocking form: wait for completion (which is very fast in this case)
c.compute()
# ### Example: Accounts JSON data
# We've created a fake dataset of gzipped JSON data in your data directory. This is like the example used in the `DataFrame` example we will see later, except that it bundles all of the entries for each individual `id` into a single record. This is similar to data that you might collect off of a document-store database or a web API.
#
# Each line is a JSON-encoded dictionary with the following keys:
#
# * id: unique identifier of the customer
# * name: name of the customer
# * transactions: list of `transaction-id`, `amount` pairs, one for each transaction for the customer in that file
filename = os.path.join('data', 'accounts.*.json.gz')
lines = db.read_text(filename)
lines.take(3)
# Our data comes out of the file as lines of text. Notice that file decompression happened automatically. We can make this data look more reasonable by mapping the `json.loads` function onto our bag.
import json
js = lines.map(json.loads)
# take: inspect first few elements
js.take(3)
# ### Basic Queries
# Once we have our JSON data parsed into proper Python objects (`dict`s, `list`s, etc.) we can perform more interesting queries by creating small Python functions to run on our data.
# filter: keep only some elements of the sequence
js.filter(lambda record: record['name'] == 'Alice').take(5)
# +
def count_transactions(d):
return {'name': d['name'], 'count': len(d['transactions'])}
# map: apply a function to each element
(js.filter(lambda record: record['name'] == 'Alice')
.map(count_transactions)
.take(5))
# -
# pluck: select a field, as from a dictionary, element[field]
(js.filter(lambda record: record['name'] == 'Alice')
.map(count_transactions)
.pluck('count')
.take(5))
# Average number of transactions for all of the Alice entries
(js.filter(lambda record: record['name'] == 'Alice')
.map(count_transactions)
.pluck('count')
.mean()
.compute())
# ### Use `flatten` to de-nest
# In the example below we see the use of `.flatten()` to flatten results. We compute the average amount for all transactions for all Alices.
(js.filter(lambda record: record['name'] == 'Alice')
.pluck('transactions')
.take(3))
(js.filter(lambda record: record['name'] == 'Alice')
.pluck('transactions')
.flatten()
.take(3))
(js.filter(lambda record: record['name'] == 'Alice')
.pluck('transactions')
.flatten()
.pluck('amount')
.take(3))
(js.filter(lambda record: record['name'] == 'Alice')
.pluck('transactions')
.flatten()
.pluck('amount')
.mean()
.compute())
# ### Groupby and Foldby
# Often we want to group data by some function or key. We can do this either with the `.groupby` method, which is straightforward but forces a full shuffle of the data (and so is quite expensive), or with the harder-to-use but faster `.foldby` method, which does a streaming combined groupby and reduction.
#
# * `groupby`: shuffles your data so that all items with the same key are in the same key-value pair
# * `foldby`: walks through the data, accumulating a result per key
#
# *Note: the full groupby is particularly bad. In real workloads you would do well to use `foldby` or to switch to `DataFrame`s if possible.*
# ### `groupby`
# Groupby collects items in your collection so that all items with the same value under some function are collected together into a key-value pair.
b = db.from_sequence(['Alice', 'Bob', 'Charlie', 'Dan', 'Edith', 'Frank'])
b.groupby(len).compute()  # group names by length
b = db.from_sequence(list(range(10)))
b.groupby(lambda x: x % 2).compute()
b.groupby(lambda x: x % 2).starmap(lambda k, v: (k, max(v))).compute()
# ### `foldby`
# Foldby can be quite odd at first. It is similar to the following functions from other libraries:
#
# * [`toolz.reduceby`](http://toolz.readthedocs.io/en/latest/streaming-analytics.html#streaming-split-apply-combine)
# * [`pyspark.RDD.combineByKey`](http://abshinn.github.io/python/apache-spark/2014/10/11/using-combinebykey-in-apache-spark/)
#
# When using `foldby` you provide
#
# 1. A key function on which to group elements
# 2. A binary operator, such as you would pass to `reduce`, that you use to perform reduction within each group
# 3. A combine binary operator that can combine the results of two `reduce` calls on different parts of your dataset
#
# Your reduction must be associative. It will happen in parallel in each of the partitions of your dataset. Then all of these intermediate results will be combined by the `combine` binary operator.
b.foldby(lambda x: x % 2, binop=max, combine=max).compute()
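To see what the per-key fold is doing, the same computation (maximum per parity class) can be written in plain Python with a dictionary of running results; a minimal sketch of the idea, not of Dask's actual implementation:

```python
# Per-key streaming reduction: keep one running value per key,
# updated with a binary operator (here: max).
def foldby_plain(seq, key, binop):
    acc = {}
    for item in seq:
        k = key(item)
        acc[k] = item if k not in acc else binop(acc[k], item)
    return acc

result = foldby_plain(range(10), key=lambda x: x % 2, binop=max)
print(result)  # {0: 8, 1: 9}
```

Dask additionally runs this fold per partition and then merges the per-partition dictionaries with the `combine` operator.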
# ### Example with account data
# We find the number of people with the same name.
# %%time
# Warning, this one takes a while...
result = js.groupby(lambda item: item['name']).starmap(lambda k, v: (k, len(v))).compute()
print(sorted(result))
# +
# %%time
# This one is comparatively fast and produces the same result.
from operator import add
def incr(tot, _):
return tot + 1
result = js.foldby(key='name',
binop=incr,
initial=0,
combine=add,
combine_initial=0).compute()
print(sorted(result))
# -
# ### Exercise: compute total amount per name
# We want to group by the `name` key, then add up all of the amounts for each name.
#
# Steps
#
# 1. Create a small function that, given a dictionary like {'name': 'Alice', 'transactions': [{'amount': 1, 'id': 123}, {'amount': 2, 'id': 456}]}, produces the sum of the amounts, e.g. `3`
#
# 2. Slightly change the binary operator of the `foldby` example above so that the binary operator doesn't count the number of entries, but instead accumulates the sum of the amounts.
# +
# Your code here...
# -
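One possible solution sketch for the two steps, shown on a plain Python record so it runs without the accounts data (the record below is made up for illustration); on the bag you would pass `add_amounts` as the `binop` of `foldby`, analogous to `incr` in the counting example above:

```python
# Step 1: sum the transaction amounts of a single record
def sum_amounts(d):
    return sum(t['amount'] for t in d['transactions'])

# Step 2: a binop that accumulates amounts instead of counting entries
def add_amounts(tot, record):
    return tot + sum_amounts(record)

record = {'name': 'Alice',
          'transactions': [{'amount': 1, 'id': 123},
                           {'amount': 2, 'id': 456}]}
print(sum_amounts(record))      # 3
print(add_amounts(10, record))  # 13
```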
# ## DataFrames
# For the same reasons that Pandas is often faster than pure Python, `dask.dataframe` can be faster than `dask.bag`. We will work more with DataFrames later, but from the point of view of a Bag, it is frequently the endpoint of the "messy" part of data ingestion: once the data can be made into a DataFrame, complex split-apply-combine logic becomes much more straightforward and efficient.
#
# You can transform a bag with a simple tuple or flat dictionary structure into a `dask.dataframe` with the `to_dataframe` method.
df1 = js.to_dataframe()
df1.head()
# This now looks like a well-defined DataFrame, and we can apply Pandas-like computations to it efficiently.
# Using a Dask DataFrame, how long does it take to do our prior computation of the number of people with the same name? It turns out that `dask.dataframe.groupby()` beats `dask.bag.groupby()` by more than an order of magnitude; but it still cannot match `dask.bag.foldby()` for this case.
# %time df1.groupby('name').id.count().compute().head()
# ### Denormalization
# This DataFrame format is less than optimal because the `transactions` column is filled with nested data, so Pandas has to revert to `object` dtype, which is quite slow in Pandas. Ideally we want to transform to a DataFrame only after we have flattened our data, so that each record is a single `int`, `string`, `float`, etc.
# +
def denormalize(record):
# returns a list for each person, one item per transaction
return [{'id': record['id'],
'name': record['name'],
'amount': transaction['amount'],
'transaction-id': transaction['transaction-id']}
for transaction in record['transactions']]
transactions = js.map(denormalize).flatten()
transactions.take(3)
# -
df = transactions.to_dataframe()
df.head()
# %%time
# number of transactions per name
# note that the time here includes the data load and ingestion
df.groupby('name')['transaction-id'].count().compute()
# ## Limitations
# Bags provide very general computation (any Python function). This generality
# comes at a cost. Bags have the following known limitations:
#
# 1. Bag operations tend to be slower than array/dataframe computations, in the
#    same way that standard Python tends to be slower than NumPy/Pandas
# 2. ``Bag.groupby`` is slow. You should try to use ``Bag.foldby`` if possible,
#    although ``Bag.foldby`` requires more thought. Better still, consider
#    creating a normalized dataframe.
# ## For more information, see:
#
# * [Bag documentation](https://docs.dask.org/en/latest/bag.html)
# * [Bag screencast](https://youtu.be/-qIiJ1XtSv0)
# * [Bag API](https://docs.dask.org/en/latest/bag-api.html)
# * [Bag examples](https://examples.dask.org/bag.html)
# ## Shutdown
client.shutdown()
# File: 02_bag.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Part 1
input_data=open('input_D10.txt','r')
input_num=input_data.read().strip().split('\n')
input_data.close()
input_list=[num for num in input_num]
#print(input_list)
error=0
for line in input_list:
brackets=[]
#print('line for eval:', line, 'current error:',error)
for i in line:
if i=='[' or i=='(' or i== '{' or i== '<':
brackets.append(i)
else:
#print('state before last bracket close:',brackets)
last_item=brackets.pop()
#print('last open braket to check:', last_item)
if last_item == '(' and i==')':
#print('right brackets pair', last_item, i)
#print('new state of left brackets: ',brackets)
continue
elif last_item == '[' and i==']':
#print('right brackets pair', last_item, i)
#print('new state of left brackets: ',brackets)
continue
elif last_item == '{' and i=='}':
#print('right brackets pair', last_item, i)
#print('new state of left brackets: ',brackets)
continue
elif last_item == '<' and i=='>':
#print('right brackets pair', last_item, i)
#print('new state of left brackets: ',brackets)
continue
else:
if i==')':
error+=3
elif i=="]":
error+=57
elif i=="}":
error+=1197
elif i==">":
error+=25137
#print('wrong bracket:',last_item, i)
break
#print(brackets)
print(error)
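The long if/elif ladder above can be collapsed with a dictionary mapping each closing bracket to its opener and its error score; a minimal sketch that reproduces the Part 1 scoring, checked on a couple of hand-made lines:

```python
PAIRS = {')': '(', ']': '[', '}': '{', '>': '<'}
SCORES = {')': 3, ']': 57, '}': 1197, '>': 25137}

def corruption_score(line):
    """Return the error score of the first illegal closer, or 0 if none."""
    stack = []
    for ch in line:
        if ch in '([{<':
            stack.append(ch)
        elif stack.pop() != PAIRS[ch]:
            return SCORES[ch]
    return 0

print(corruption_score('(]'))                         # 57
print(corruption_score('{([(<{}[<>[]}>{[]{[(<()>'))   # 1197
```

Summing `corruption_score(line)` over all input lines gives the same `error` total as the loop above.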
# +
import math
# Part 2
input_data=open('input_D10.txt','r')
input_num=input_data.read().strip().split('\n')
input_data.close()
input_list=[num for num in input_num]
#print(input_list)
error=0
incomplete=input_list[:]
for line in input_list:
brackets=[]
#print('line for eval:', line, 'current error:',error)
for i in line:
if i=='[' or i=='(' or i== '{' or i== '<':
brackets.append(i)
else:
#print('state before last bracket close:',brackets)
last_item=brackets.pop()
#print('last open braket to check:', last_item)
if last_item == '(' and i==')':
#print('right brackets pair', last_item, i)
#print('new state of left brackets: ',brackets)
continue
elif last_item == '[' and i==']':
#print('right brackets pair', last_item, i)
#print('new state of left brackets: ',brackets)
continue
elif last_item == '{' and i=='}':
#print('right brackets pair', last_item, i)
#print('new state of left brackets: ',brackets)
continue
elif last_item == '<' and i=='>':
#print('right brackets pair', last_item, i)
#print('new state of left brackets: ',brackets)
continue
else:
incomplete.remove(line)
break
#print(incomplete)
final_total=[]
for line in incomplete:
brackets=[]
#print('line for eval:', line, 'current error:',error)
for i in line:
if i=='[' or i=='(' or i== '{' or i== '<':
brackets.append(i)
else:
#print('state before last bracket close:',brackets)
last_item=brackets.pop()
#print('last open braket to check:', last_item)
if last_item == '(' and i==')':
#print('right brackets pair', last_item, i)
#print('new state of left brackets: ',brackets)
continue
elif last_item == '[' and i==']':
#print('right brackets pair', last_item, i)
#print('new state of left brackets: ',brackets)
continue
elif last_item == '{' and i=='}':
#print('right brackets pair', last_item, i)
#print('new state of left brackets: ',brackets)
continue
elif last_item == '<' and i=='>':
#print('right brackets pair', last_item, i)
#print('new state of left brackets: ',brackets)
continue
#print(brackets)
total_score=0
while brackets:
item=brackets.pop()
if item=='(':
value=1
total_score=total_score*5+value
elif item=='[':
value=2
total_score=total_score*5+value
elif item=='{':
value=3
total_score=total_score*5+value
elif item=='<':
value=4
total_score=total_score*5+value
final_total.append(total_score)
final_total.sort()
print(final_total)
print(final_total[math.floor(len(final_total)/2)])
# +
# Small Example
brackets=['<', '{', '(', '[']
final_total=[]
total_score=0
while brackets:
item=brackets.pop()
print(item)
if item=='(':
value=1
total_score=total_score*5+value
print('total_score: ',total_score)
elif item=='[':
value=2
total_score=total_score*5+value
print('total_score: ',total_score)
elif item=='{':
value=3
total_score=total_score*5+value
print('total_score: ',total_score)
elif item=='<':
value=4
total_score=total_score*5+value
print('total_score: ',total_score)
final_total.append(total_score)
print(final_total)
# -
# File: Day 10.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # 02 Data Structures and Libraries
#
# # Review Exercises - Solutions
#
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## Review Exercise 1 : Combining Imported Functions
#
# The dot product of two vectors can be found as:
#
# \begin{align}
# \mathbf{A} \cdot \mathbf{B} = |\mathbf{A}| |\mathbf{B}| \cos(\theta)
# \end{align}
#
# Where:
#
# <br>$\theta$ is the angle between the two vectors
#
# $|\mathbf{A}|$ is the magnitude of vector $\mathbf{A}$.
#
# $|\mathbf{B}|$ is the magnitude of vector $\mathbf{B}$.
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# The magnitude of an $n$-length vector $ \mathbf{A} = [A_1, ..., A_n]$ is:
#
# $|\mathbf{A}| = \sqrt{A_1^2 + ... + A_n^2}$
#
#
#
#
#
#
# + [markdown] slideshow={"slide_type": "slide"}
# <p align="center">
# <img src="../img/dot-product-angle.gif" alt="Drawing" style="width: 300px;"/>
# </p>
#
# Find the angle between the vectors `a` and `b`.
#
# *Hint:*
#
# Use a numpy function from this class to find the dot product.
#
# Search online to find a numpy function that computes *magnitude*.
#
# Search online to find a numpy function for the *inverse cosine*.
# -
# Review Exercise 1 : Find the angle between a and b
a = [9, 2, 7]
b = [4, 8, 10]
# +
# Review Exercise 1 : Find the angle between a and b
# Example Solution
import numpy as np
a = [9, 2, 7]
b = [4, 8, 10]
ab = np.dot(a, b)
maga = np.linalg.norm(a)
magb = np.linalg.norm(b)
theta = np.arccos(ab / (maga * magb))
print(theta)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Review Exercise 2 : Classifier
#
# The dot product also indicates if the angle between two vectors $\mathbf{A}$ and $\mathbf{B}$ is:
#
# - acute ($\mathbf{A} \cdot \mathbf{B}>0$)
# - obtuse ($\mathbf{A} \cdot \mathbf{B}<0$)
# - right angle ($\mathbf{A} \cdot \mathbf{B}==0$)
#
# Using `if`, `elif` and `else`, classify the angle between `a` and `b` as acute, obtuse or right angle.
# +
# Review Exercise 2 : Classifier
a = [-1, 2, 6]
b = [4, 3, 3]
# +
# Review Exercise 2 : Classifier
# Example Solution
a = [-1, 2, 6]
b = [4, 3, 3]
ab = np.dot(a, b)
if ab > 0:
print("theta is acute")
elif ab < 0:
print("theta is obtuse")
else:
    print("theta is a right angle")
# + [markdown] slideshow={"slide_type": "slide"}
# ## Review Exercise 3: Numpy Package Functions.
# Find a function in the Python Numpy documentation that matches the function definition and use it to solve the problems below:
# + [markdown] slideshow={"slide_type": "slide"}
# __(A)__ Definition: *Calculates the exponential function, $y= e^x$ for all elements in the input array.*
#
# Print a list where each element is the exponential function of the corresponding element in list `a = [0.1, 0, 10]`
# +
# Review Exercise 3A
# Print a list where each element is the exponential function of the corresponding element in list a
# -
# Review Exercise 3A
# Example Solution
a = [0.1, 0, 10]
print(np.exp(a))
# + [markdown] slideshow={"slide_type": "slide"}
# __(B)__ Definition: *Converts angles from degrees to radians.*
#
# Convert angle `theta`, expressed in degrees, to radians:
# <br>`theta` = 47
# +
# Review Exercise 3B
# convert angle `theta`, expressed in degrees, to radians
# +
# Review Exercise 3B
# Example Solution
np.radians(47)
# + [markdown] slideshow={"slide_type": "slide"}
# __(C)__ Definition: *Return the positive square-root of an array, element-wise.*
#
# Generate an array where each element is the square root of the corresponding element in array `a = ([4, 16, 81])`
# +
# Review Exercise 3C
# Print a list where each element is the square root of the corresponding element in list a
# +
# Review Exercise 3C
# Example Solution
a = ([4, 16, 81])
print(np.sqrt(a))
# -
# ## Review Exercise 4: Using a single list with a `for` loop.
# In the cell below, use a `for` loop to print the first letter of each month in the list.
#
#
# +
# Review Exercise 4
# Print the first letter of each month in the list
months = ["January",
"February",
"March",
"April",
"May",
"June",
"July",
"August",
"September",
"October",
"November",
"December"]
# +
# Review Exercise 4
# Example Solution
months = ["January",
"February",
"March",
"April",
"May",
"June",
"July",
"August",
"September",
"October",
"November",
"December"]
for m in months:
print(m[0], end="")
# File: ReviewQuestions_ExampleSolutions/02_DataStructures_Libraries__ClassMaterial.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Parsing, continued
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# +
import requests
url = 'http://books.toscrape.com/catalogue/page-1.html'
response = requests.get(url)
response
# -
response.content[:1000]
from bs4 import BeautifulSoup
tree = BeautifulSoup(response.content, 'html.parser')
tree.html.head.title.text.strip()
books = tree.find_all('article', {'class' : 'product_pod'})
books[0]
books[0].find('p', {'class': 'price_color'}).text
books[0].p.get('class')[1]
books[0].a.get('href')
books[0].h3.a.get('title')
def get_page(p):
url = 'http://books.toscrape.com/catalogue/page-{}.html'.format(p)
response = requests.get(url)
tree = BeautifulSoup(response.content, 'html.parser')
books = tree.find_all('article', {'class' : 'product_pod'})
info = []
for book in books:
info.append({'price': book.find('p', {'class': 'price_color'}).text,
'href': book.h3.a.get('href'),
'title': book.h3.a.get('title'),
'rating': book.p.get('class')[1]})
return info
# +
import time
infa = []
for p in range(1,51):
try:
infa.extend(get_page(p))
time.sleep(5)
    except Exception:  # avoid a bare except; report the failing page number
        print(p)
# +
import pandas as pd
df = pd.DataFrame(infa)
print(df.shape)
df.head()
# -
df.to_csv('books_parsed.csv', index=False)
df.to_excel('books_parsed.xlsx', index=False)
df.info()
float(df.loc[0, 'price'][1:])
def get_price(price):
return float(price[1:])
df['price'] = df['price'].apply(get_price)
sns.histplot(data=df, x='price', bins=30);
def get_rating(r):
if r == "One":
return 1
elif r == "Two":
return 2
elif r == 'Three':
return 3
elif r == 'Four':
return 4
else:
return 5
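The if/elif chain above can equally be written as a dictionary lookup, which is shorter and easier to extend; a minimal sketch with a hypothetical name `get_rating_dict` (falling back to 5, like the `else` branch above):

```python
RATING_MAP = {'One': 1, 'Two': 2, 'Three': 3, 'Four': 4, 'Five': 5}

def get_rating_dict(r):
    """Map a rating word to its number, defaulting to 5."""
    return RATING_MAP.get(r, 5)

print(get_rating_dict('Three'))  # 3
print(get_rating_dict('Five'))   # 5
```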
df['rating'] = df['rating'].apply(get_rating)
df.rating.value_counts()
# ## Parsing: exercise
# By analogy with the seminar work, you will collect data from the site https://quotes.toscrape.com/.
#
# You need to build a pandas dataframe with the columns:
# * `quote` – the quote text
# * `author` – the author
# * `<tag_name>` – 1 if the quote has this tag, and 0 if not. The number of such columns equals the number of tags on the site.
#
# Print all quotes that have the tag "truth".
url = 'https://quotes.toscrape.com/page/1/'
response = requests.get(url)
response
tree = BeautifulSoup(response.content, 'html.parser')
quotes = tree.find_all('div', {'class' : 'quote'})
quotes[0]
quotes[0].span.text
quotes[0].find('small', {'class':'author'}).text
quotes[0].find_all('a', {'class': 'tag'})
quotes[0].find_all('a', {'class': 'tag'})[0].text
tags = []
for tag in quotes[0].find_all('a', {'class': 'tag'}):
tags.append(tag.text)
tags
info = []
for q in quotes:
tags = []
for tag in q.find_all('a', {'class': 'tag'}):
tags.append(tag.text)
info.append({'quote': q.span.text,
'author': q.find('small', {'class':'author'}).text,
'tags': tags})
info
response.content[:1000]
def get_page(p):
url = 'https://quotes.toscrape.com/page/{}/'.format(p)
response = requests.get(url)
tree = BeautifulSoup(response.content, 'html.parser')
quotes = tree.find_all('div', {'class' : 'quote'})
info = []
for q in quotes:
tags = []
for tag in q.find_all('a', {'class': 'tag'}):
tags.append(tag.text)
info.append({'quote': q.span.text,
'author': q.find('small', {'class':'author'}).text,
'tags': tags})
return info
# +
info = []
for p in range(1,11):
info.extend(get_page(p))
# -
len(info)
df = pd.DataFrame(info)
df.head()
tags_set = set(df['tags'].explode().values)
tags_set
for tag in tags_set:
df[tag] = [tag in df['tags'].loc[i] for i in df.index]
pd.set_option('display.max_columns', 500)
df.head()
df.columns
for q in df[df['truth']]['quote'].values:
print(q)
# ## Working with JSON files
# Build a pandas dataframe with the following columns:
#
# * `username`
# * `changed_lines` – number of changed lines
# * `commits` – number of commits
# * `new_files` – number of new files created by this developer
#
# Sort it by `username`
# ### pandas
from pandas import json_normalize
import json
with open('commits.json', 'r') as f:
data = json.load(f)
data[0]
data[0]['username']
data = json_normalize(data, 'files', ['username', 'commit_time'])
data
import pandas as pd
data['commit_time'] = pd.to_datetime(data['commit_time'])
data.shape
data.info()
# commits
res = data.groupby('username')[['commit_time']].nunique().reset_index()
res
# changed_lines
data.groupby('username')['changed_lines'].sum().values
res['changed_lines'] = data.groupby('username')['changed_lines'].sum().values
agg = data.groupby(['name', 'username'])[['commit_time']].min().sort_values(['name', 'commit_time'])
agg
d = {}
for file in agg.reset_index()['name'].unique():
d[file] = agg.loc[file].iloc[0].name
d
pd.DataFrame([d]).T.reset_index().groupby(0).count()['index'].values
res['new_files'] = pd.DataFrame([d]).T.reset_index().groupby(0).count()['index'].values
res.sort_values('username', inplace=True)
res
# ### dictionaries
from collections import defaultdict
d = defaultdict()
for k in [1, 2, 3]:
d[k] = 1
with open('commits.json', 'r') as f:
data = json.load(f)
data = sorted(data, key=lambda x: pd.to_datetime(x['commit_time']))
data[0]
somedict = {}
print(somedict[3]) # KeyError
someddict = defaultdict(int)
print(someddict[3]) # print int(), thus 0
someddict
# +
table = defaultdict(lambda: {'commits': 0, 'changed_lines':0, 'new_files':0})
new_files = set()
for commit in data:
user = commit['username']
table[user]['commits'] += 1
for file in commit['files']:
table[user]['changed_lines'] += file['changed_lines']
if file['name'] not in new_files:
new_files.add(file['name'])
table[user]['new_files'] += 1
# -
table
fin = pd.DataFrame(table).T.reset_index().rename(columns={'index': 'username'}).sort_values('username')
fin
# File: lect08_requests_BS/2021_DPO_8_2_intro_to_parsing.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="EpF3vDz9-2rK" colab_type="code" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 140} outputId="56893294-4b49-4a6d-a544-5df29709702b"
# Run this cell and select the kaggle.json file downloaded
# from the Kaggle account settings page.
from google.colab import files
files.upload()
# Let's make sure the kaggle.json file is present.
# !ls -lha kaggle.json
# Next, install the Kaggle API client.
# !pip install -q kaggle
# The Kaggle API client expects this file to be in ~/.kaggle,
# so move it there.
# !mkdir -p ~/.kaggle
# !cp kaggle.json ~/.kaggle/
# This permissions change avoids a warning on Kaggle tool startup.
# !chmod 600 ~/.kaggle/kaggle.json
# ! kaggle datasets download -d sriramr/fruits-fresh-and-rotten-for-classification
# + id="2OHfZx17-6sp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="605fe0b1-c55d-47fd-d228-8312387d9251"
import zipfile
import os
curDir = os.getcwd()
print(curDir)
local_zip = '/content/fruits-fresh-and-rotten-for-classification.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/content')
zip_ref.close()
# + id="MmXfTJlO_PXV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="23ecf236-85e2-42b5-b4c9-5aa50c44f313"
import random
from keras.preprocessing.image import load_img
import matplotlib.pyplot as plt
import os
filenames = os.listdir('/content/dataset/train/freshoranges')
sample = random.choice(filenames)
image = load_img('/content/dataset/train/freshoranges/'+sample)
plt.imshow(image)
print(image.size)
# + id="YrQnkqVn_tKY" colab_type="code" colab={}
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications import ResNet50V2
from tensorflow.keras.layers import GlobalAveragePooling2D, AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import Callback, LearningRateScheduler
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import cv2
import os
# + id="Rs29SYee_yIQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="f46e6141-903c-4d9d-f2ae-40597cb14d53"
train_datagen = ImageDataGenerator(rescale=1/255,
rotation_range=15,
horizontal_flip=True,
fill_mode='nearest')
validation_datagen = ImageDataGenerator(rescale=1/255)
train_generator = train_datagen.flow_from_directory(
'dataset/train',
target_size = (150,150),
batch_size = 15,
class_mode='categorical')
validation_generator = validation_datagen.flow_from_directory(
'dataset/test',
target_size=(150,150),
batch_size= 15,
class_mode='categorical'
)
# + id="9h7Sgrmo_6cG" colab_type="code" colab={}
## Define a Callback function
desired_accuracy = 0.999
class myCallback(Callback):
def on_epoch_end(self, epoch, logs={}):
        acc = logs.get('accuracy')
        if acc is not None and acc > desired_accuracy:
            print("\nReached 99.9% accuracy so cancelling training!")
            self.model.stop_training = True
my_callback = myCallback()
# + id="r7LyXLpn_-go" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="79359a1b-712d-4bb4-d35f-90c1c7d27796"
baseModel = ResNet50V2(weights="imagenet", include_top=False,
input_tensor=Input(shape=(150, 150, 3)))
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(4, 4))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(64, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(6, activation="softmax")(headModel)
model = Model(inputs=baseModel.input, outputs = headModel)
# Freeze the baseModel layers
for layer in baseModel.layers :
layer.trainable = False
# + id="YpqbL6MKAAW4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="3224bcc7-5aed-44ee-fcf8-f0cb8f3b5779"
model.summary()
# + id="YnX0xlwhAGIL" colab_type="code" colab={}
## Hyperparameter
INIT_LR = 1e-3
BS = 15
EPOCHS = 25
# + id="ALBiTXAwAJjA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4facf591-c1f3-4868-aa81-2b249067cb43"
print("[INFO] compiling model...")
opt = Adam(learning_rate=INIT_LR, decay=INIT_LR / EPOCHS)
# Six output classes with softmax: use categorical, not binary, cross-entropy
model.compile(loss="categorical_crossentropy", optimizer=opt,
              metrics=["accuracy"])
# + id="oZVQUfZNARmX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 890} outputId="27499c16-87c7-478b-9311-ef9abfce9b84"
history = model.fit(
train_generator,
validation_data=validation_generator,
steps_per_epoch=len(train_generator) // BS,
validation_steps=len(validation_generator) // BS,
    epochs=EPOCHS,  # use the hyperparameter defined above
verbose=1,
callbacks=[my_callback]
)
# + id="K84t-BcLBLn8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="f475c813-332d-4e51-8d06-9aea25516647"
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.show()
# + id="P2E1sM9VyQqn" colab_type="code" colab={}
model.save('transfer_learningv4.h5')
# + id="c5z5-sO7yiFO" colab_type="code" colab={}
|
example_model/transfer_learningv2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MFS demonstration
# This script makes a fake data set and then deconvolves it. Finally the full and residual visibility are plotted.
# +
# %matplotlib inline
import os
import sys
sys.path.append(os.path.join('..', '..'))
from data_models.parameters import arl_path
results_dir = arl_path('test_results')
from matplotlib import pylab
import numpy
from astropy.coordinates import SkyCoord
from astropy import units as u
from astropy.wcs.utils import pixel_to_skycoord
from matplotlib import pyplot as plt
from data_models.polarisation import PolarisationFrame
from processing_components.visibility.base import create_visibility
from processing_components.skycomponent.operations import create_skycomponent
from processing_components.image.operations import show_image, export_image_to_fits, smooth_image, \
calculate_image_frequency_moments, calculate_image_from_frequency_moments
from processing_components.image.deconvolution import deconvolve_cube, restore_cube
from libs.image.iterators import image_raster_iter
from processing_components.image.solvers import solve_image
from processing_components.visibility.iterators import vis_timeslice_iter
from processing_components.simulation.testing_support import create_named_configuration, \
create_low_test_image_from_gleam
from processing_components.imaging.weighting import weight_visibility
import logging
log = logging.getLogger()
log.setLevel(logging.DEBUG)
log.addHandler(logging.StreamHandler(sys.stdout))
# -
pylab.rcParams['figure.figsize'] = (12.0, 12.0)
pylab.rcParams['image.cmap'] = 'rainbow'
# Construct LOW configuration
# We create the visibility. This just makes the uvw, time, antenna1, antenna2, weight columns in a table
# +
config = 'core'
if config == 'full':
low = create_named_configuration('LOWBD2')
b = 8e4
cellsize = 0.00001
npixel=5 * 2048
padding = 1
context='2d'
else:
low = create_named_configuration('LOWBD2-CORE')
b = 4e3
cellsize = 0.001
npixel=512
padding = 2
context='2d'
oversampling = 32
nchan = 7
frequency = numpy.linspace(0.8e8, 1.2e8, nchan)
centre_frequency = numpy.array([numpy.average(frequency)])
channel_bandwidth=numpy.array(nchan * [frequency[1]-frequency[0]])
total_bandwidth = numpy.array([numpy.sum(channel_bandwidth)])
times = numpy.linspace(-3, +3, 5) * numpy.pi / 12.0
log.info('Observing times %s' % (times))
log.info("Observing frequencies %s Hz" % (frequency))
log.info("Channel bandwidths %s Hz" % (channel_bandwidth))
log.info("Centre frequency %s Hz" % (centre_frequency))
log.info("Cellsize = %.6f radians" % (cellsize))
# -
phasecentre = SkyCoord(ra=+15.0 * u.deg, dec=-35.0 * u.deg, frame='icrs', equinox='J2000')
vt = create_visibility(low, times, frequency, channel_bandwidth=channel_bandwidth,
weight=1.0, phasecentre=phasecentre,
polarisation_frame=PolarisationFrame('stokesI'))
# Plot the synthesized uv coverage
plt.clf()
plt.plot(vt.uvw[:,0], vt.uvw[:,1], '.', color='b')
plt.plot(-vt.uvw[:,0], -vt.uvw[:,1], '.', color='b')
plt.xlabel("U (wavelengths)")
plt.ylabel("V (wavelengths)")
plt.show()
# Make a test image
model_centrechannel = create_low_test_image_from_gleam(npixel=npixel,
frequency=centre_frequency,
channel_bandwidth=total_bandwidth,
cellsize=cellsize,
phasecentre=phasecentre, applybeam=True)
export_image_to_fits(model_centrechannel, '%s/imaging-mfs-model_centre_channel.fits' %
(results_dir))
model_multichannel = create_low_test_image_from_gleam(npixel=npixel, frequency=frequency,
channel_bandwidth=channel_bandwidth,
cellsize=cellsize,
phasecentre=phasecentre, applybeam=True)
cmodel = smooth_image(model_multichannel)
show_image(cmodel)
plt.title("Smoothed model image")
plt.show()
export_image_to_fits(cmodel, '%s/imaging-mfs-cmodel.fits' % (results_dir))
export_image_to_fits(model_multichannel, '%s/imaging-mfs-multi_channel.fits' % (results_dir))
moment_cube = calculate_image_frequency_moments(model_multichannel,nmoments=3)
export_image_to_fits(moment_cube, '%s/imaging-mfs-moment_cube.fits' % (results_dir))
reconstructed_cube = calculate_image_from_frequency_moments(model_multichannel, moment_cube)
export_image_to_fits(reconstructed_cube, '%s/imaging-mfs-reconstructed_cube.fits' %
(results_dir))
# +
vt.data['vis'] *= 0.0
vt = predict_function(vt, model_multichannel, context='2d')
# To check that we got the prediction right, plot the amplitude of the visibility.
uvdist=numpy.sqrt(vt.data['uvw'][:,0]**2+vt.data['uvw'][:,1]**2)
plt.clf()
plt.plot(uvdist, numpy.abs(vt.data['vis']), '.')
plt.xlabel('uvdist')
plt.ylabel('Amp Visibility')
plt.show()
# -
# Weight the data
vt, density, densitygrid = weight_visibility(vt, model_centrechannel)
plt.clf()
plt.semilogy(uvdist, density, '.')
plt.xlabel('uvdist')
plt.ylabel('Sample density')
plt.show()
density = None
densitygrid = None
# Make the dirty image and point spread function
# +
dirty, sumwt = invert_function(vt, model_multichannel, context='2d', padding=1)
show_image(dirty)
psf, sumwt = invert_function(vt, model_multichannel, context='2d', dopsf=True, padding=1)
print("Max, min in dirty image = %.6f, %.6f, sumwt = %s" %
(dirty.data.max(), dirty.data.min(), sumwt))
print("Max, min in PSF = %.6f, %.6f, sumwt = %s" %
(psf.data.max(), psf.data.min(), sumwt))
export_image_to_fits(dirty, '%s/imaging-mfs-dirty.fits' % (results_dir))
export_image_to_fits(psf, '%s/imaging-mfs-psf.fits' % (results_dir))
# -
comp, residual = deconvolve_cube(dirty, psf, niter=1000, gain=0.7, algorithm='msmfsclean',
scales=[0, 3, 10, 30], threshold=0.01, fractional_threshold=0.001, nmoments=3)
export_image_to_fits(comp, '%s/imaging-mfs-comp.fits' % (results_dir))
clean = restore_cube(model=comp, psf=psf, residual=residual)
export_image_to_fits(residual, '%s/imaging-mfs-residual.fits' % (results_dir))
export_image_to_fits(clean, '%s/imaging-mfs-clean.fits' % (results_dir))
show_image(clean)
plt.show()
# Predict the visibility of the model
vtmodel = create_visibility(low, times, frequency, channel_bandwidth=channel_bandwidth,
weight=1.0, phasecentre=phasecentre,
polarisation_frame=PolarisationFrame('stokesI'))
vtmodel=predict_function(vtmodel, comp, context='2d')
# Now we will plot the original visibility and the residual visibility.
# +
uvdist=numpy.sqrt(vt.data['uvw'][:,0]**2+vt.data['uvw'][:,1]**2)
plt.clf()
plt.plot(uvdist, numpy.abs(vt.data['vis']), '.', color='b', label='Original')
plt.plot(uvdist, numpy.abs(vt.data['vis']-vtmodel.data['vis']), '.', color='r',
label='Residual')
plt.xlabel('uvdist')
plt.ylabel('Amp Visibility')
plt.legend()
plt.show()
# -
|
processing_components/notebooks/imaging-mfs.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (Ubuntu Linux)
# language: python
# name: python3-ubuntu
# resource_dir: /usr/local/share/jupyter/kernels/python3-ubuntu
# ---
# %matplotlib inline
from pathlib import Path
import pandas as pd
events_file = Path(r'athlete_events.csv')
events_file.exists()
Path('.').resolve()
db = pd.read_csv(r'athlete_events.csv')
db
db['NOC'].unique()
len(db)
medals = db.dropna(subset=['Medal'])
medals
len(medals)
len(medals) / len(db)
medal_counts = {}
medals[medals['Medal'] == 'Gold']
for country, medals_country in medals.groupby('NOC'):
print(country)
medals_country[medals_country['Medal'] == 'Gold']
country_list = medals['NOC'].unique()
country_list
# +
## Calculating the number of each medal type earned per country and the most recent year of each win
# -
import numpy as np
len(country_list)
medals[medals['NOC'] == 'CHI']
medals[medals['NOC'] == 'CHI'][['Medal']].eq('Gold').sum()
# # Building the per-country medal summary table
# +
column_list = ['Country','Gold','Silver','Bronze','Most Recent Gold Year','Most Recent Silver Year','Most Recent Bronze Year']
zero_data = np.zeros(shape = (len(country_list),len(column_list)))
medal_amount_df = pd.DataFrame(zero_data,columns = column_list)
newmedals = medals.sort_values('NOC', ascending = False).drop_duplicates(subset=['Year','Event','Medal'])
for i in range(len(country_list)):
    country = country_list[i]
    country_medals = newmedals[newmedals['NOC'] == country]
    medal_amount_df.loc[i, 'Country'] = country
    for medal, year_col in [('Gold', 'Most Recent Gold Year'),
                            ('Silver', 'Most Recent Silver Year'),
                            ('Bronze', 'Most Recent Bronze Year')]:
        medal_rows = country_medals[country_medals['Medal'] == medal]
        medal_amount_df.loc[i, medal] = len(medal_rows)
        # Most recent year of a win, or '' if the country never won this medal
        medal_amount_df.loc[i, year_col] = medal_rows['Year'].max() if len(medal_rows) else ''
# -
medal_amount_df.sort_values('Gold', ascending = False)
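# The per-country loop above can also be expressed as a single `groupby`; a sketch on a tiny hypothetical frame with the same `NOC`/`Medal`/`Year` columns:

```python
import pandas as pd

# Hypothetical miniature of the deduplicated medals frame
newmedals = pd.DataFrame({
    'NOC':   ['USA', 'USA', 'USA', 'CHI', 'CHI'],
    'Medal': ['Gold', 'Gold', 'Silver', 'Bronze', 'Gold'],
    'Year':  [2016, 2012, 2016, 2008, 2016],
})

# One row per country, one column per medal type
counts = newmedals.groupby(['NOC', 'Medal']).size().unstack(fill_value=0)
print(counts)

# Most recent year each country won each medal type
latest = newmedals.groupby(['NOC', 'Medal'])['Year'].max().unstack()
print(latest)
```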
testdf = medals.sort_values('NOC', ascending = False).drop_duplicates(subset=['Year','Event','Medal'])
testdf
testdf[testdf['NOC'] == 'USA'][['Medal']].eq('Gold').sum()
testdf[testdf['Year'] == 2016][['Medal']].eq('Gold').sum()
|
medal_counting2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # VacationPy
# ----
#
# #### Note
# * Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.
#
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
# -
# ### Store Part I results into DataFrame
# * Load the csv exported in Part I to a DataFrame
# +
path = '../WeatherPy/output_data/cities.csv'
cities = pd.read_csv(path)
#index added as a column, removing it
cities.drop('Unnamed: 0', axis='columns', inplace=True)
#some countries have nans for some reason, dropping them
cities.dropna(inplace=True)
cities.head()
# -
# ### Humidity Heatmap
# * Configure gmaps.
# * Use the Lat and Lng as locations and Humidity as the weight.
# * Add Heatmap layer to map.
# +
#configure gmaps
gmaps.configure(api_key=g_key)
#set up heat map
locations = cities[['Lat', 'Lng']]
weight = cities.Humidity
# +
#add heat map
fig = gmaps.figure()
#display heatmap
heat_layer = gmaps.heatmap_layer(locations, weights=weight, dissipating=False)
fig.add_layer(heat_layer)
fig
# -
# ### Create new DataFrame fitting weather criteria
# * Narrow down the cities to fit weather conditions.
# * Drop any rows with null values.
# +
#max temperature between 70 and 80
hotel_df = cities[(cities.Max_Temp > 70) & (cities.Max_Temp < 80)].copy()
#wind speed less than 10
hotel_df = hotel_df[hotel_df.Wind_Speed < 10]
#cloudiness less than 10
hotel_df = hotel_df[hotel_df.Cloudiness < 10]
hotel_df.head()
# -
# ### Hotel Map
# * Store into variable named `hotel_df`.
# * Add a "Hotel Name" column to the DataFrame.
# * Set parameters to search for hotels with 5000 meters.
# * Hit the Google Places API for each city's coordinates.
# * Store the first Hotel result into the DataFrame.
# * Plot markers on top of the heatmap.
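# Before making live API calls it can help to sanity-check how the search parameters serialize into a request URL; a standard-library sketch (the coordinates and key are placeholders, not real values):

```python
from urllib.parse import urlencode

base_url = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json'
params = {
    'location': '48.8566, 2.3522',  # placeholder coordinates
    'radius': 5000,
    'keyword': 'hotel',
    'key': 'YOUR_API_KEY',          # placeholder, not a real key
}
request_url = f"{base_url}?{urlencode(params)}"
print(request_url)
```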
# +
#add new column
hotel_df['Hotel_Name'] = ""
#base url for api call
base_url = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json'
#loop cities with ideal conditions for hotel
for index, row in hotel_df.iterrows():
#set parameters
params = {
'location':f"{row['Lat']}, {row['Lng']}",
'radius':5000,
'keyword':'hotel',
'key':g_key
}
#do api calls, try to store hotel information
response = requests.get(base_url, params=params).json()
try:
hotel_df.loc[index, 'Hotel_Name'] = response['results'][0]['name']
except:
print(f"Skipping {row['City']}... No hotel found")
#drop places where no hotel was found (their names were left as empty strings, not NaN)
hotel_df = hotel_df[hotel_df.Hotel_Name != ""]
hotel_df.head()
# +
# NOTE: Do not change any of the code in this cell
#I gave hotel name an underscore
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel_Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
# +
# Add marker layer ontop of heat map
markers = gmaps.marker_layer(locations, info_box_content=hotel_info)
fig.add_layer(markers)
# Display figure
fig
# -
|
VacationPy/VacationPy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## I - ``clean_bat``
#
# For a ``filename.bat`` containing the names of the light curves (yet to be downloaded) of systems with periods larger than the chosen range, this code fetches the LCs from the Kepler database *provided* the SNR of the first reported transit is greater than 7.1. The file ``filename.bat`` comes straight from the Exoplanet Archive Database and contains the light curve IDs (both long cadence and short cadence). See `readme.txt` for more information.
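# The selection below assumes the catalogued SNR accumulates over the `N` observed transits like sqrt(N), so a single transit contributes roughly SNR/sqrt(N); a quick sketch of that cut:

```python
import numpy as np

def first_transit_snr(total_snr, n_transits):
    # Approximate single-transit SNR under sqrt(N) accumulation
    return total_snr / np.sqrt(n_transits)

# Hypothetical targets: (catalogued SNR, number of transits)
print(first_transit_snr(50.0, 25))   # passes the 7.1 cut
print(first_transit_snr(20.0, 16))   # rejected by the 7.1 cut
```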
import numpy as np
import re
import pandas as pd
# +
path_file = '/Users/mbadenas/Documents/Master UAB/Tesis UAB/TFM2018/clean_bat_files/LC_p13point5up/'
filename = 'all_targets_P13point5up.bat'
"""
path_file = '/Users/mbadenas/Documents/Master UAB/Tesis UAB/TFM2018/clean_bat_files/LC_p15to15point5/'
filename = 'targets_15to15point.bat'
"""
pattern_slc = re.compile(r'^wget -O \'kplr(?P<k_id>[0-9]{9})-([0-9]{13})_slc.fits.+',
re.MULTILINE) #we only want short cadence (slc)
# -
# After identifying the name pattern of Kepler short-cadence light curves, we will look for targets for which the first transit in the LC has a SN > 7.1 (TESS constraint).
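# To see what the pattern captures, it can be exercised on a fabricated `wget` line of the expected shape (the Kepler ID, timestamp, and URL below are made up):

```python
import re

pattern_slc = re.compile(r'^wget -O \'kplr(?P<k_id>[0-9]{9})-([0-9]{13})_slc.fits.+',
                         re.MULTILINE)

# Fabricated short-cadence entry mimicking the .bat file format
line = ("wget -O 'kplr001234567-2009131110544_slc.fits' "
        "http://archive.stsci.edu/fake/kplr001234567-2009131110544_slc.fits")
match = pattern_slc.match(line)
print(match.group('k_id'))
```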
# +
props = path_file+'/all_targets_P13point5up.csv'
#props = path_file+'/all_targets_P15to15point5.csv'
dataset = pd.read_csv(props, sep=',', comment='#', na_values = '\\N')
original_IDs = []
for row in dataset.itertuples(index = True, name='Pandas'):
koi = getattr(row, "kepid")
snr = getattr(row, "koi_model_snr")
N = getattr(row, "koi_num_transits")
num_planets = getattr(row, "koi_count")
snr_first_transit = snr/np.sqrt(N)
if (snr_first_transit>=7.1) and (num_planets==1):
original_IDs.append(koi)
goodSN_IDs = np.array(original_IDs,dtype=int)
# -
"""
print('The number of targets with P>[15,15.5] days is **{}** but \
only **{}** have a SN>7.1 AND only 1 planet'.format(len(dataset.index), len(goodSN_IDs)))
"""
print('The number of targets with P>13.5 days is **{}** but \
only **{}** have a SN>7.1 AND only 1 planet'.format(len(dataset.index), len(goodSN_IDs)))
# We will now proceed to download the SHORT-CADENCE photometric data for targets with 1 planet for which their first transit has a SN > 7.1. Note that some systems only have long-cadence light curves, so the final number of targets may be different from the one shown in the above cell.
#
# There are two ways to download the short-cadence light curves: we can either download 1 light curve for each target (in which case, the code will just fetch the first available LC online), or download them all (in which case, the code will be much slower, especially since one target can have many many LCs accumulated over all the cadences of the Kepler Mission). Our choice is regulated by the boolean `not_all_LC`: the first case (only 1 LC) is achieved with `not_all_LC=True`, while the second (all LCs) can be obtained with `not_all_LC=False`.
#
# The outputs of the code below are: a `*.bat`file with the LC(s) IDs of the appropiate systems, and a `kepler_id.txt` file with the IDs of such systems.
# +
#Create the .bat file where we'll store the LC id's
filename_out = path_file+'/good_list_ids.bat'
f_out = open(filename_out,'w')
f_out.write('#!/bin/sh\n\n')
ids_from_bat = set([])
#How many LC do you want? False to download *all* the LCs for a given system or True to only get 1.
not_all_LC = False
with open(path_file+filename,'r') as f_kepler:
for line in f_kepler:
is_slc = pattern_slc.match(line)
if is_slc:
if int(is_slc.group('k_id')) in goodSN_IDs:
if (not_all_LC and (is_slc.group('k_id') not in ids_from_bat)):
f_out.write(line)
elif not not_all_LC:
f_out.write(line)
ids_from_bat.add(is_slc.group('k_id'))
f_out.close()
ids_final = np.array(list(ids_from_bat),dtype=int)
print(len(ids_final))
np.savetxt(path_file+'kepler_id.txt',ids_final,fmt='%d', newline='\n')
print("There are a total of {} systems with *SHORT-CADENCE* LCs and with a SN (for the first transit) \
greater than 7.1 and only 1 planet".format(len(ids_final)))
# -
|
clean_bat.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.5 64-bit
# name: python3
# ---
# reference to https://jackiexiao.github.io/eat_pytorch_in_20_days/5.%E4%B8%AD%E9%98%B6API/5-4%2CTensorBoard%E5%8F%AF%E8%A7%86%E5%8C%96/
# +
import torch
from torch import nn
from torch.utils.tensorboard import SummaryWriter
from torchkeras import Model, summary
import datetime
logs_dir_root = "./logs/"
# +
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(in_channels=3,out_channels=32,kernel_size = 3)
self.pool = nn.MaxPool2d(kernel_size = 2,stride = 2)
self.conv2 = nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5)
self.dropout = nn.Dropout2d(p = 0.1)
self.adaptive_pool = nn.AdaptiveMaxPool2d((1,1))
self.flatten = nn.Flatten()
self.linear1 = nn.Linear(64,32)
self.relu = nn.ReLU()
self.linear2 = nn.Linear(32,1)
self.sigmoid = nn.Sigmoid()
def forward(self,x):
x = self.conv1(x)
x = self.pool(x)
x = self.conv2(x)
x = self.pool(x)
x = self.dropout(x)
x = self.adaptive_pool(x)
x = self.flatten(x)
x = self.linear1(x)
x = self.relu(x)
x = self.linear2(x)
y = self.sigmoid(x)
return y
net = Net()
print(net)
summary(net, input_shape= (3, 32, 32))
timestamp = "{0:%Y%m%d_%H%M%S/}".format(datetime.datetime.now())
logs_dir = f"{logs_dir_root}/tensorboard_sigmods/{timestamp}"
writer = SummaryWriter(logs_dir)
writer.add_graph(net, input_to_model = torch.rand(1, 3, 32, 32))
writer.close()
# +
import numpy as np
import torch
import datetime
from torch.utils.tensorboard import SummaryWriter
# minimum of f(x) = a*x**2 + b*x + c
x = torch.tensor(0.0, requires_grad = True) # x需要被求导
a = torch.tensor(1.0)
b = torch.tensor(-2.0)
c = torch.tensor(0.5)
optimizer = torch.optim.SGD(params=[x],lr = 0.01)
def f(x):
result = a*torch.pow(x,2) + b*x + c
return(result)
timestamp = "{0:%Y%m%d_%H%M%S/}".format(datetime.datetime.now())
logs_dir = f'{logs_dir_root}/funcx/{timestamp}'
writer = SummaryWriter(logs_dir)
for i in range(500):
optimizer.zero_grad()
y = f(x)
y.backward()
optimizer.step()
    writer.add_scalar("x", x.item(), i)  # log the value of x at step i
    writer.add_scalar("y", y.item(), i)  # log the value of y at step i
writer.close()
print("y =", f(x).data, ";" , "x =", x.data)
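# For this quadratic the minimum is known in closed form, x* = -b/(2a) and f(x*) = c - b^2/(4a), so the loop above should converge near those values; a plain-Python sketch of the same descent (no torch or TensorBoard needed):

```python
# Gradient descent on f(x) = a*x**2 + b*x + c, mirroring the SGD loop above
a, b, c = 1.0, -2.0, 0.5
lr = 0.01
x = 0.0
for _ in range(500):
    grad = 2 * a * x + b  # f'(x)
    x -= lr * grad

x_star = -b / (2 * a)          # analytic minimiser
y_star = c - b ** 2 / (4 * a)  # analytic minimum value
print(x, x_star, y_star)
```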
# +
import numpy as np
import torch
from torch.utils.tensorboard import SummaryWriter
import datetime
# create a normalized distribution matrix
def norm(mean, std):
t = std * torch.randn((100, 20)) + mean
return t
timestamp = "{0:%Y%m%d_%H%M%S/}".format(datetime.datetime.now())
logs_dir = f'{logs_dir_root}/distribution/{timestamp}'
writer = SummaryWriter(logs_dir)
for step, mean in enumerate(range(-10, 10, 1)):
w = norm(mean, 1)
writer.add_histogram("w", w, step)
writer.flush()
writer.close()
# +
import datetime
import torch
import torchvision
from torch import nn
from torch.utils.data import Dataset,DataLoader
from torch.utils.tensorboard import SummaryWriter
transform_train = torchvision.transforms.Compose(
[torchvision.transforms.ToTensor()])
transform_valid = torchvision.transforms.Compose(
[torchvision.transforms.ToTensor()])
# -
data_root = "./data/"
# # setting download=True failed with SSLCertVerificationError, so the archive was downloaded manually from http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz and
# # then cd ${data_cifar10} and tar zxf cifar-10-python.tar.gz
data_cifar10 = f'{data_root}/cifar10/'
print(data_cifar10)
ds_train = torchvision.datasets.CIFAR10(root=data_cifar10, train=True, download=True, transform=transform_train)
ds_valid = torchvision.datasets.CIFAR10(root=data_cifar10, train=False, download=False, transform=transform_valid)
#
data_cifar100 = f'{data_root}/cifar100/'
# download url http://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz
# # cd ${data_cifar100} and tar zxf cifar-100-python.tar.gz
ds_train = torchvision.datasets.CIFAR100(root=data_cifar100, train=True, download=True, transform=transform_train)
ds_valid = torchvision.datasets.CIFAR100(root=data_cifar100, train=False, download=False, transform=transform_valid)
# +
print(ds_train.class_to_idx)
dl_train = DataLoader(ds_train,batch_size = 50,shuffle = True,num_workers=3)
dl_valid = DataLoader(ds_valid,batch_size = 50,shuffle = True,num_workers=3)
dl_train_iter = iter(dl_train)
images, labels = next(dl_train_iter)
timestamp = "{0:%Y%m%d_%H%M%S}/".format(datetime.datetime.now())
logs_dir = f"{logs_dir_root}/visualize_raw_img/{timestamp}"
writer = SummaryWriter(logs_dir)
# display a raw image
writer.add_image('images[0]', images[0])
writer.close()
# stitch multiple images into a single image, separated by a black grid
# create grid of images
img_grid = torchvision.utils.make_grid(images)
writer.add_image('image_grid', img_grid)
writer.close()
writer.add_images("images", images, global_step = 0)
writer.close()
# +
# %matplotlib inline
# %config InlineBackend.figure_format = 'svg'
from matplotlib import pyplot as plt
figure = plt.figure(figsize=(8,8))
for i in range(9):
img,label = ds_train[i]
img = img.permute(1,2,0)
ax=plt.subplot(3,3,i+1)
ax.imshow(img.numpy())
ax.set_title("label = %d"%label)
ax.set_xticks([])
ax.set_yticks([])
plt.show()
timestamp = "{0:%Y%m%d_%H%M%S}/".format(datetime.datetime.now())
logs_dir = f'{logs_dir_root}/visualize_manually_plot/{timestamp}'
writer = SummaryWriter(logs_dir)
writer.add_figure('figure', figure, global_step=0)
writer.close()
# -
# reference to https://stackoverflow.com/questions/55970686/tensorboard-not-found-as-magic-function-in-jupyter
# %load_ext tensorboard
# %tensorboard --logdir {logs_dir_root}
from tensorboard import notebook
notebook.list() # read data from ~\AppData\Local\Temp\.tensorboard-info on windows
notebook.start(f"--logdir {logs_dir_root}")
|
python/third-party/tensorboard/pytorch/visualizing_pics.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
# %matplotlib inline
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
# +
df = pd.read_csv(url
, names=['sepal length','sepal width','petal length','petal width','target'])
# -
df.head()
features = ['sepal length', 'sepal width', 'petal length', 'petal width']
x = df.loc[:, features].values
y = df.loc[:,['target']].values
# +
#x = StandardScaler().fit_transform(x)
# -
pd.DataFrame(data = x, columns = features).head()
pca = PCA(n_components=3)
principalComponents = pca.fit_transform(x)
principalDf = pd.DataFrame(data = principalComponents
, columns = ['principal component 1', 'principal component 2','principal component 3'])
principalDf.head()
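# A useful companion check is how much variance the three components retain; `explained_variance_ratio_` reports this, sketched here on synthetic data of the same shape as the iris features:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
synthetic = rng.normal(size=(150, 4))  # stand-in for the 4 iris measurements

pca3 = PCA(n_components=3).fit(synthetic)
ratios = pca3.explained_variance_ratio_
print(ratios, ratios.sum())
```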
df[['target']].head()
finalDf = pd.concat([principalDf, df[['target']]], axis = 1)
finalDf.head(5)
# +
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(111 , projection='3d')
ax.set_xlabel('Principal Component 1', fontsize = 15)
ax.set_ylabel('Principal Component 2', fontsize = 15)
ax.set_zlabel('Principal Component 3', fontsize = 15)
ax.set_title('3 Component PCA', fontsize = 20)
targets = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
colors = ['r', 'g', 'b']
for target, color in zip(targets,colors):
indicesToKeep = finalDf['target'] == target
ax.scatter(finalDf.loc[indicesToKeep, 'principal component 1']
, finalDf.loc[indicesToKeep, 'principal component 2']
,finalDf.loc[indicesToKeep, 'principal component 3']
, c = color
, s = 75)
ax.legend(targets)
ax.grid()
# -
|
Basic/PCA/pca_fig.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# $\int \frac{1 - \sin(x)}{\cos(x)}dx = \int \frac{1}{\cos(x)}dx - \int \frac{\sin(x)}{\cos(x)}dx$
# $\int \frac{1}{\cos(x)}dx = \int \sec(x)dx = \ln(|\tan(x) + \sec(x)|)$
# $\int \frac{\sin(x)}{\cos(x)}dx = \int \tan(x)dx = \ln(|\sec(x)|)$
# $\int \frac{1 - \sin(x)}{\cos(x)}dx = \ln(|\tan(x) + \sec(x)|) - \ln(|\sec(x)|) + C$
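# The result can be checked by differentiation; a SymPy sketch (absolute values dropped, valid where the log arguments are positive):

```python
import sympy as sp

x = sp.symbols('x', real=True)
F = sp.log(sp.tan(x) + sp.sec(x)) - sp.log(sp.sec(x))  # candidate antiderivative
integrand = (1 - sp.sin(x)) / sp.cos(x)

# d/dx F should reproduce the integrand
residual = sp.simplify(sp.diff(F, x) - integrand)
print(residual)
```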
|
Problemas 7.0/19.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
import seaborn as sns
import matplotlib.pyplot as plt
# -
SCORED_PATH = "../output/stack_overflow_bs200_dl02_lr005_lstm128_fc256_glove/scored/scored_output.csv"
scores_df = pd.read_csv(SCORED_PATH)
proba_scores = scores_df.loc[:, scores_df.columns.str.startswith('proba_')].values
kmeans = KMeans(n_clusters=proba_scores.shape[1] + 1, random_state=0)
kmeans.fit(proba_scores)
clusters = kmeans.predict(proba_scores).reshape((-1, 1))
tsne_xy = TSNE(n_components=2).fit_transform(proba_scores)
visualize_df = pd.DataFrame(
np.concatenate((tsne_xy, clusters), axis=1), columns=['tsne_1', 'tsne_2', 'cluster_id'])
plt.figure(figsize=(10, 6))
sns.scatterplot(data=visualize_df,x='tsne_1',y='tsne_2',hue='cluster_id',
legend="full",alpha=0.5, palette='pastel')
plt.title("KMeans Cluster on TSNE 2D Transformation")
plt.show()
scores_df['cluster_id'] = clusters
SAMPLE_SIZE = 20
group_sample_n = SAMPLE_SIZE // scores_df.cluster_id.nunique()
sample_df = scores_df.groupby('cluster_id').apply(lambda x: x.sample(n = group_sample_n)).reset_index(drop=True)
unsampled_count = SAMPLE_SIZE - sample_df.shape[0]
if unsampled_count > 0:
unsampled_ids = scores_df.UID[~np.isin(scores_df.UID, sample_df.UID)]
additional_ids = np.random.choice(unsampled_ids, unsampled_count, replace=False)
additional_df = scores_df.loc[np.isin(scores_df.UID, additional_ids), :]
sample_df = pd.concat([sample_df, additional_df], ignore_index=True)
# +
def kmeans_from_proba(scored_df, tsne_fig_name, score_col_prefix='proba_', random_state=0):
print("Extracting tag scores and training KMeans for clusters")
# extract tag scores into np.array
proba_scores = scored_df.loc[:, scored_df.columns.str.startswith(score_col_prefix)].values
# fit and extract kmeans clusters
kmeans = KMeans(n_clusters=proba_scores.shape[1] + 1, random_state=random_state)
kmeans.fit(proba_scores)
clusters = kmeans.predict(proba_scores).reshape((-1, 1))
print("Visualizing tag score-based KMeans clusters with tSNE")
# visualize the clusters using tsne
tsne_xy = TSNE(n_components=2).fit_transform(proba_scores)
visualize_df = pd.DataFrame(
np.concatenate((tsne_xy, clusters), axis=1), columns=['tsne_1', 'tsne_2', 'cluster_id'])
plt.figure(figsize=(10, 6))
sns.scatterplot(data=visualize_df,x='tsne_1',y='tsne_2',hue='cluster_id',
legend="full",alpha=0.5, palette='pastel')
plt.title("KMeans Cluster on TSNE 2D Transformation")
plt.savefig(tsne_fig_name, bbox_inches='tight')
plt.close()
# save cluster info back to scored_df
print("Saving cluster information back to dataframe")
scored_df['cluster_id'] = clusters
return scored_df
def sample_by_cluster(scored_df, sample_size, cluster_col='cluster_id', row_key='UID'):
print("Sampling records based on cluster information...")
group_sample_n = sample_size // scored_df[cluster_col].nunique()
sample_df = scored_df.groupby(cluster_col).apply(lambda x: x.sample(n=group_sample_n)).reset_index(drop=True)
unsampled_count = sample_size - sample_df.shape[0]
print(f"A total of {sample_df.shape[0]:,} records were sampled based on clusters.")
if unsampled_count > 0:
print(f"{unsampled_count:,} remaining records are to be sampled from total population.")
unsampled_ids = scored_df[row_key][~np.isin(scored_df.UID, sample_df.UID)]
additional_ids = np.random.choice(unsampled_ids, unsampled_count, replace=False)
additional_df = scored_df.loc[np.isin(scored_df[row_key], additional_ids), :]
sample_df = pd.concat([sample_df, additional_df], ignore_index=True)
return sample_df
def sample_by_random(scored_df, sample_size, cluster_col='cluster_id', row_key='UID'):
print("Sampling records based on pure randomness...")
print(f"{sample_size:,} records are to be sampled from total population.")
sample_ids = np.random.choice(scored_df[row_key], sample_size, replace=False)
sample_df = scored_df.loc[np.isin(scored_df[row_key], sample_ids), :].reset_index(drop=True)
return sample_df
# -
scores_df_test = kmeans_from_proba(scores_df, 'test_tsne.png')
clst_sample_df = sample_by_cluster(scores_df_test, 40)
random_sample_df = sample_by_random(scores_df_test, 40)
random_sample_df.cluster_id.value_counts()
clst_sample_df.cluster_id.value_counts()
|
report/dev_scored_record_sampling.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Anomalies in meal prices
#
# In the Chamber of Deputies' CEAP, there is a list of thousands of meal expenses claimed by congresspeople. The law says that a congressperson cannot pay for anyone else's meal, not even an advisor's or a spouse's. We want to work on this analysis to find possibly illegal and immoral expenses. They may have happened when the politician spent more than needed (e.g. the whole menu costs X but the bill was 2X) or spent too much in a specific period of time. In the end, we also want to flag overly expensive reimbursements, even when there is an explanation behind them.
#
# Note: remember to correct prices with an inflation index (e.g. IPCA).
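# The inflation note above amounts to rescaling each nominal price by the ratio of the price index at a reference date to the index at the issue date; a sketch with made-up index levels (NOT real IPCA figures):

```python
import pandas as pd

# Hypothetical monthly price-index levels (not real IPCA values)
ipca = pd.Series({'2015-06': 95.0, '2016-06': 100.0, '2016-11': 103.0})

def to_reference_prices(values, issue_months, reference='2016-11'):
    # Rescale nominal prices to the reference month's price level
    factors = ipca[reference] / ipca[issue_months].values
    return values * factors

prices = pd.Series([50.0, 80.0])
adjusted = to_reference_prices(prices, ['2015-06', '2016-06'])
print(adjusted)
```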
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
plt.rcParams['figure.figsize'] = (20, 10)
# +
from serenata_toolbox.datasets import fetch
fetch('2016-11-19-reimbursements.xz', '../data')
fetch('2016-09-03-companies.xz', '../data')
fetch('2016-11-29-yelp-companies.xz', '../data')
fetch('2016-12-02-foursquare-companies.xz', '../data')
# +
import numpy as np
import pandas as pd
dataset = pd.read_csv('../data/2016-11-19-reimbursements.xz',
dtype={'applicant_id': np.str,
'cnpj_cpf': np.str,
'congressperson_id': np.str,
'subquota_number': np.str},
low_memory=False)
dataset = dataset[dataset['congressperson_id'].notnull()]
dataset['issue_date'] = pd.to_datetime(dataset['issue_date'], errors='coerce')
dataset['issue_date_day'] = dataset['issue_date'].apply(lambda date: date.day)
dataset['issue_date_month'] = dataset['issue_date'].apply(lambda date: date.month)
dataset['issue_date_year'] = dataset['issue_date'].apply(lambda date: date.year)
dataset['issue_date_weekday'] = dataset['issue_date'].apply(lambda date: date.weekday())
dataset['issue_date_week'] = dataset['issue_date'].apply(lambda date: date.week)
# -
# The `companies.xz` dataset includes businesses located outside Brazilian territory. We intentionally disregard them.
is_in_brazil = '(-73.992222 < longitude < -34.7916667) & (-33.742222 < latitude < 5.2722222)'
companies = pd.read_csv('../data/2016-09-03-companies.xz',
dtype={'cnpj': np.str},
low_memory=False)
companies = companies.query(is_in_brazil)
companies['cnpj'] = companies['cnpj'].str.replace(r'\D', '')
dataset = pd.merge(dataset, companies,
how='left',
left_on='cnpj_cpf',
right_on='cnpj',
suffixes=('', '_company'))
dataset = \
dataset.query('subquota_description == "Congressperson meal"')
companies = \
companies[companies['cnpj'].isin(dataset.loc[dataset['cnpj'].notnull(),
'cnpj'])]
dataset['total_net_value'].describe()
dataset['total_net_value'].median()
sns.distplot(dataset['total_net_value'],
bins=30,
kde=False)
bottom_99 = dataset['total_net_value'].quantile(0.99)
bottom_99
dataset[dataset['total_net_value'] < bottom_99].shape
sns.distplot(dataset.loc[dataset['total_net_value'] < bottom_99, 'total_net_value'],
bins=30,
kde=False)
# +
bottom_99_dataset = dataset.query('total_net_value > {}'.format(bottom_99))
ranking = bottom_99_dataset.groupby('state_company')['total_net_value'] \
.median().sort_values(ascending=False)
sns.boxplot(x='state_company',
y='total_net_value',
data=bottom_99_dataset,
order=ranking.index)
# -
bottom_99_dataset.query('state_company == "CE"').shape
dataset.query('state_company == "CE"').shape
bottom_99_dataset['state_company'].isnull().sum()
bottom_99_dataset.query('state_company == "CE"') \
.sort_values('total_net_value', ascending=False)
# ## Using Yelp to improve prices information
yelp = pd.read_csv('../data/2016-11-29-yelp-companies.xz',
low_memory=False)
yelp.head()
# We have data for just 8.6% of the companies that received money under the "Congressperson meal" subquota.
yelp['price'].notnull().sum()
companies.shape
yelp['price'].isnull().sum()
yelp['price.int'] = yelp['price'].str.len()
states_with_records = \
yelp[yelp['price'].notnull()].groupby('location.state')['location.state'].count() > 10
states_with_records = states_with_records[states_with_records].index
yelp_just_significant_states = \
yelp[yelp['price'].notnull() &
yelp['location.state'].isin(states_with_records)]
yelp_just_significant_states['location.state'].value_counts()
# Yelp won't be that useful for now, since we don't have much data. We'll leave it for another analysis.
# ## Predict prices
# The idea here is to try to predict the "right range of prices" for a specific place. If we can achieve good accuracy, everything far from the prediction can be considered an outlier.
bottom_99_dataset.iloc[0, :57]
# **DummyRegressor with mean strategy as a baseline**
# +
from sklearn.dummy import DummyRegressor
from sklearn.model_selection import train_test_split
X = bottom_99_dataset[['year']]
y = bottom_99_dataset['total_net_value']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DummyRegressor(strategy='mean')
model.fit(X_train, y_train)
model.score(X_test, y_test)
# +
from sklearn.preprocessing import LabelEncoder
le_state = LabelEncoder()
le_city = LabelEncoder()
factor_columns = ['state_company', 'city']
model_dataset = bottom_99_dataset.dropna(subset=factor_columns)
model_dataset['state_company'] = le_state.fit_transform(model_dataset['state_company'])
model_dataset['city'] = le_city.fit_transform(model_dataset['city'])
model_columns = ['cnpj',
'issue_date_day',
'issue_date_month',
'issue_date_year']
X = model_dataset[model_columns + factor_columns]
y = model_dataset['total_net_value']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# -
# **LinearRegression**
#
# Not very good accuracy.
# +
from sklearn.linear_model import LinearRegression
model = LinearRegression(n_jobs=-1)
model.fit(X_train, y_train)
model.score(X_test, y_test)
# -
# What if we could improve it using the type of business as a feature? e.g. restaurant, hotel, grill...
# +
import unicodedata
def normalize_string(string):
if isinstance(string, str):
nfkd_form = unicodedata.normalize('NFKD', string.lower())
return nfkd_form.encode('ASCII', 'ignore').decode('utf-8')
# +
import nltk
from sklearn.feature_extraction.text import CountVectorizer
stopwords = nltk.corpus.stopwords.words('portuguese')
count_vect = CountVectorizer(stop_words=stopwords)
trade_names = dataset.loc[dataset['supplier'].notnull(),
'supplier'].unique()
trade_names = np.vectorize(normalize_string)(trade_names)
dataset_counts = count_vect.fit_transform(trade_names)
# -
frequent_words = sorted(list(zip(count_vect.get_feature_names(),
np.asarray(dataset_counts.sum(axis=0)).ravel())), key=lambda x: -x[1])
frequent_words[:20]
# +
frequent_words = dict(frequent_words)
excluded_keywords = ['ltda', 'cia', 'epp']
[frequent_words.pop(keyword) for keyword in excluded_keywords]
# +
def business_type(name):
fun = np.vectorize(lambda x: normalize_string(x))
keywords = set(fun(name.split(' '))) - set(stopwords)
key_freqs = list(map(lambda x: (x, frequent_words.get(x)), list(keywords)))
key_freqs = [key_freq for key_freq in key_freqs if key_freq[1] is not None]
if key_freqs:
key_freq = max(key_freqs, key=lambda x: x[1])
return key_freq[0]
dataset['supplier_keyword'] = dataset['supplier'].apply(business_type)
bottom_99_dataset['supplier_keyword'] = \
bottom_99_dataset['supplier'].apply(business_type)
# +
le_state = LabelEncoder()
le_city = LabelEncoder()
le_supplier_keyword = LabelEncoder()
factor_columns = ['state_company', 'supplier_keyword']
model_dataset = bottom_99_dataset.dropna(subset=factor_columns)
model_dataset['state_company'] = le_state.fit_transform(model_dataset['state_company'])
model_dataset['city'] = le_city.fit_transform(model_dataset['city'])
model_dataset['supplier_keyword'] = le_city.fit_transform(model_dataset['supplier_keyword'])
model_columns = ['cnpj',
'issue_date_day',
'issue_date_month',
'issue_date_year']
X = model_dataset[model_columns + factor_columns]
y = model_dataset['total_net_value']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# -
model = LinearRegression(n_jobs=-1)
model.fit(X_train, y_train)
model.score(X_test, y_test)
# Still not good enough. In fact, there's a serious flaw in using this linear regression for outlier detection: not only are we assuming that prices follow a linear model (IMHO still a reasonable assumption), but outliers should be removed before fitting the regression. In other words, to find outliers we must first remove them, leaving the regression score as just an accuracy estimator. It may still be an interesting approach, but we want to start with something simpler first, to surface the easiest and most anomalous results.
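One robust alternative (not used in this notebook) is to let the estimator itself discard outliers while fitting, e.g. scikit-learn's `RANSACRegressor`, which fits on random inlier subsets and keeps the consensus model. A minimal sketch on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X.ravel() + rng.normal(0, 0.5, size=200)
y[:10] += 50  # inject a few gross outliers

# RANSAC fits on random subsets and flags points far from the consensus model
model = RANSACRegressor(LinearRegression(), random_state=0)
model.fit(X, y)
outlier_mask = ~model.inlier_mask_  # points RANSAC considered outliers
```

The injected points end up in `outlier_mask` while the final estimator recovers a slope close to 3, which is exactly the "remove outliers, then fit" loop described above, done internally.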
# ## Common CNPJs
#
# Expenses in the same restaurant are expected to follow a normal distribution. Can we find outliers in companies with enough expenses to analyze?
# +
from scipy.stats import normaltest
def normaltest_pvalue(values):
if len(values) >= 20:
return normaltest(values).pvalue
else:
return 1
net_values_by_cnpj = dataset.groupby('cnpj_cpf')['total_net_value'] \
.agg([len, np.mean, np.std, normaltest_pvalue]) \
.sort_values('len', ascending=False) \
.reset_index()
net_values_by_cnpj['threshold'] = net_values_by_cnpj['mean'] + \
3 * net_values_by_cnpj['std']
applicants_per_cnpj = dataset.groupby('cnpj_cpf')['applicant_id'] \
.aggregate(lambda x: len(set(x))).reset_index() \
.rename(columns={'applicant_id': 'congresspeople'})
net_values_by_cnpj = pd.merge(net_values_by_cnpj, applicants_per_cnpj)
net_values_by_cnpj.head()
# -
len(net_values_by_cnpj.query('normaltest_pvalue < .05')) / len(net_values_by_cnpj)
data_with_threshold = pd.merge(dataset, net_values_by_cnpj, on='cnpj_cpf') \
.sort_values('total_net_value', ascending=False)
data_with_threshold['main_activity'] = \
data_with_threshold['main_activity'].apply(normalize_string)
# Let's discard hotel reimbursements. There's no method yet to discover for how long the congressperson stayed in the hotel, so we can expect a high standard deviation in their expenses. Even when outliers are detected, they are too hard to investigate manually for the same reason.
#
# Here, we consider only CNPJs with at least 20 reimbursements, from more than 3 congresspeople.
is_hotel_reimbursement = data_with_threshold['main_activity'] \
.str.contains('hoteis').astype(np.bool)
outliers = data_with_threshold[~is_hotel_reimbursement] \
.query('(congresspeople > 3) & (len >= 20) & (total_net_value > threshold)')
print(len(outliers), outliers['total_net_value'].sum())
# ## Foursquare
#
# Before totally discarding it, let's see how significant the Foursquare dataset is.
foursquare = pd.read_csv('../data/2016-12-02-foursquare-companies.xz',
low_memory=False)
foursquare.head()
foursquare.iloc[0]
print(foursquare['price.tier'].notnull().sum(),
foursquare['price.tier'].notnull().sum() / len(companies),
foursquare.query('confirmed_match == True')['price.tier'].notnull().sum() / len(companies))
# ### Clustering to find the best group for a new restaurant
companies.shape
# +
# is_cnpj = (dataset['cnpj_cpf'].str.len() == 14) & \
# dataset['cnpj_cpf'].notnull() & \
# dataset['document_type'] != 2
# cnpjs = dataset.sort_values('issue_date') \
# .loc[is_cnpj, ['cnpj_cpf', 'supplier']] \
# .drop_duplicates('cnpj_cpf', keep='last')
# cnpjs.head()
# -
is_cnpj = dataset['cnpj_cpf'].str.len() == 14
cnpj_list = dataset.loc[is_cnpj].groupby('cnpj')['total_net_value'] \
.agg([np.mean, np.std]).reset_index()
cnpj_list.shape
cnpj_list.head()
cnpj_list = pd.merge(cnpj_list,
dataset[['cnpj_cpf', 'supplier']].drop_duplicates('cnpj_cpf'),
how='left',
left_on='cnpj', right_on='cnpj_cpf')
del cnpj_list['cnpj_cpf']
cnpj_list.head()
# +
counts = dataset.loc[is_cnpj].groupby('cnpj')['applicant_id'] \
.agg({'congresspeople': (lambda x: len(np.unique(x))),
'len': (lambda x: len(x))
}).reset_index()
cnpj_list = pd.merge(cnpj_list, counts)
# -
# **Calculate threshold for companies using their own receipts**
# +
threshold_for_cnpjs = cnpj_list.groupby('cnpj') \
.apply(lambda x: x['mean'].mean() + 3 * x['std'].mean()).reset_index() \
.rename(columns={0: 'threshold'})
threshold_for_cnpjs
cnpj_list = pd.merge(cnpj_list, threshold_for_cnpjs)
cnpj_list.head()
# -
HOTEL_REGEX = r'hote(?:l|is|ls)'  # matches "hotel", "hoteis" and "hotels"; the original character class matched single characters only
is_hotel_reimbursement = (cnpj_list['supplier'].str.lower().str.contains(HOTEL_REGEX))
# Mark companies as having significant data or not.
rows = (~is_hotel_reimbursement) & \
(cnpj_list['congresspeople'] > 3) & \
(cnpj_list['len'] > 20)
cnpj_list['has_significant_data'] = False
cnpj_list.loc[rows, 'has_significant_data'] = True
print(cnpj_list['has_significant_data'].sum(),
cnpj_list['has_significant_data'].sum() / len(cnpj_list['has_significant_data']))
sns.lmplot('mean', 'std',
data=cnpj_list.query('has_significant_data'),
scatter_kws={'marker': 'D', 's': 100},
size=10)
# **Predict threshold classifying companies in clusters by their price ranges**
X = cnpj_list.loc[cnpj_list['has_significant_data'],
['mean', 'std']]
# +
from sklearn.cluster import KMeans
model = KMeans(n_clusters=3, random_state=0)
model.fit(X)
# -
cnpj_list.loc[cnpj_list['has_significant_data'], 'y'] = model.predict(X)
cnpj_list.query('y.notnull()').head()
rows = (~cnpj_list['has_significant_data']) & \
cnpj_list['std'].notnull() & \
(~is_hotel_reimbursement)
X = cnpj_list.loc[rows, ['mean', 'std']]
cnpj_list.loc[rows, 'y'] = model.predict(X)
threshold_for_groups = cnpj_list.groupby('y') \
.apply(lambda x: x['mean'].mean() + 4 * x['std'].mean()).reset_index() \
.rename(columns={0: 'threshold'})
threshold_for_groups
# +
group_thresholds = pd.merge(cnpj_list.query('~has_significant_data'),
threshold_for_groups,
on='y',
suffixes=('', '_group'))
cnpj_list = pd.merge(cnpj_list,
group_thresholds[['cnpj', 'threshold_group']],
how='left')
cnpj_list.loc[~cnpj_list['has_significant_data'], 'threshold'] = \
cnpj_list['threshold_group']
# -
cnpj_list.query('(~has_significant_data) & std.notnull()').head()
cnpj_list.query('has_significant_data').head()
cnpj_list.query('threshold.notnull()').sample(5, random_state=10)
del cnpj_list['threshold_group']
merged = pd.merge(dataset, cnpj_list,
how='left',
left_on='cnpj_cpf',
right_on='cnpj',
suffixes=('', '_company'))
# +
merged['supplier'] = merged['supplier'].astype(np.str)
is_hotel_reimbursement = \
(merged['supplier'].str.lower().str.contains(HOTEL_REGEX))
merged[~is_hotel_reimbursement].query('total_net_value > threshold').shape
# +
keys = ['year',
'congressperson_name',
'document_id',
'total_net_value',
'threshold',
'<KEY>',
'has_significant_data',
'name']
merged['diff'] = merged['threshold'] - merged['total_net_value']
merged[~(is_hotel_reimbursement | merged['has_significant_data'])] \
.query('(total_net_value > threshold)') \
.sort_values('diff', ascending=False).head(10)[keys]
# -
merged[~is_hotel_reimbursement].shape
merged[~is_hotel_reimbursement] \
.query('(total_net_value > threshold)')['total_net_value'].shape
merged[~is_hotel_reimbursement] \
.query('(total_net_value > threshold)')['total_net_value'].sum()
merged[~is_hotel_reimbursement] \
.query('(total_net_value > threshold) & (has_significant_data == False)')['total_net_value'].shape
# ## Conclusions
#
# For companies with significant data (defined by us as a company that received money at least 20 times, from more than 3 distinct congresspeople), we use mean + 3 * std to detect outliers. This does not return every suspect case, but after some sampling investigation, all the flagged cases seem very suspect.
#
# Since there is "significant data" for just 4% of the companies, we need a way to extrapolate the results to less well-known ones. To do so, we classify companies into 3 clusters using K-Means, taking the mean and standard deviation of their prices as features. Once classified, we use as a company's threshold its cluster's mean + 4 * std (one extra std compared to places where we have enough reimbursements to know better).
#
# Reimbursements for expenses in hotels are discarded from this classifier, since they usually cover much more than meals (and we don't know for how long the congressperson was hosted, so they are not yet trustworthy for legal reports).
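The decision rule described above can be condensed into a small helper. This is a sketch of the logic for illustration, not code used elsewhere in the notebook:

```python
def reimbursement_threshold(mean, std, has_significant_data):
    """Suspicion threshold: 3 stds above the mean when the company has enough
    receipts of its own, 4 stds (one extra, to be conservative) when the
    statistics come from the company's K-Means cluster instead."""
    k = 3 if has_significant_data else 4
    return mean + k * std

def is_suspicious(total_net_value, mean, std, has_significant_data):
    """A reimbursement is flagged when it exceeds the applicable threshold."""
    return total_net_value > reimbursement_threshold(mean, std, has_significant_data)
```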
|
research/develop/2016-12-01-irio-anomalies-in-meal-prices.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/prudenciadofelixluigi/LinearAlgeb/blob/main/Assignment2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="tLxoxTeX16lk"
# # Welcome to Python Fundamentals
# In this module, we are going to build up our skills in Python programming. In this notebook we are going to cover:
#
#
#
# * Variables and Data Types
# * Operations
# * Input and Output Operations
# * Logic Control
#
#
#
#
# + [markdown] id="rRfR3P_h3G6E"
# # Variables and Data Types
# + id="Je5b10jx3NN8"
x = 1
a, b = 3, -2
# + colab={"base_uri": "https://localhost:8080/"} id="Bm-IUGVw3hoh" outputId="ffaaedd3-fcf8-4c87-e924-1751b112ed7a"
type(x)
# + colab={"base_uri": "https://localhost:8080/"} id="oAhEJ0s538zz" outputId="923831a8-5069-4e4e-c9a4-7798fd1aea85"
y = 3.0
type(y)
# + colab={"base_uri": "https://localhost:8080/"} id="iCMxIjtv4BL5" outputId="bd810a51-d1dd-4995-db98-1c6f1d6910ac"
x = float(x)
type(x)
# + [markdown] id="ZKkx9Q144rp-"
# ## Operations
#
# + [markdown] id="CjI8mU1z47lz"
# ### Arithmetic
# + id="jGnxXgQW4_3F"
w, x, y, z = 4.0, -3.0, 1, 32
# + colab={"base_uri": "https://localhost:8080/"} id="vOi87D3r6CC7" outputId="af52edb8-2b99-42ee-b21e-2500866fcbbe"
### addition
S = w + x
S
# + colab={"base_uri": "https://localhost:8080/"} id="nE01Ipaa6NoK" outputId="207abf67-fd8c-400c-c8b8-d516fa29e7e6"
### multiplication
D = w*z
D
# + colab={"base_uri": "https://localhost:8080/"} id="aCFYpVtT6Zsm" outputId="f539fb13-3650-4cd4-dee6-3e38da22dbc4"
### Floor Division
Qf = w//x
Qf
# + colab={"base_uri": "https://localhost:8080/"} id="D90ynz2z7FX2" outputId="f48e1f27-634b-433c-db59-e36f187805eb"
### Exponentiation
E = w**w
E
# + colab={"base_uri": "https://localhost:8080/"} id="ElBQbLS67h4G" outputId="4072b2ee-085e-4467-ba2e-4a108a424d73"
### Modulo
mod = z%x
mod
# + [markdown] id="OKWfdwRq71wO"
# # Assignment
# + id="zM0LMNWi8JkX"
A, B, C, D = 0, 100, 2, 1
# + colab={"base_uri": "https://localhost:8080/"} id="f68hWsJb8O6T" outputId="5e548ecd-f138-4786-994a-df6fd208a40e"
A += w
A
# + colab={"base_uri": "https://localhost:8080/"} id="fHMW5qqW8fR0" outputId="7fa6e814-12e3-4dcf-860b-d891f6158269"
B-=x
B
# + colab={"base_uri": "https://localhost:8080/"} id="Jm4mtz7m8pbZ" outputId="4fb003d0-62ab-4d6e-a193-dc61b9e8f279"
C *= w
C
# + colab={"base_uri": "https://localhost:8080/"} id="2y5djIdq9UtJ" outputId="30afd665-91fb-416c-e419-2a07573917d6"
D/= x
D
# + colab={"base_uri": "https://localhost:8080/", "height": 180} id="-zIRZsIs9f5s" outputId="090cca6c-6211-4031-af8f-9b4bbabff761"
C **= x
C
# + [markdown] id="pF5U3yxU-VFj"
# # Comparators
# + id="RmVCdtPO-aJw"
size_1, size_2, size_3 = 1, 2.0, "1"
true_size = 1.0
# + colab={"base_uri": "https://localhost:8080/"} id="huhpamaK-hms" outputId="40dc98fd-1322-4fcb-c302-3603a28d0dbb"
## equality
size_1 == true_size
# + colab={"base_uri": "https://localhost:8080/"} id="Rle0cb76-2kn" outputId="9d508e79-7ddb-419e-c9ab-36c729b15674"
## Non-Equality
size_2 != true_size
# + colab={"base_uri": "https://localhost:8080/"} id="vcoa-ZdG_BHq" outputId="1c708e85-7a87-4fa8-9d1a-4551fbeafdaf"
## Inequality
s1 = size_1 > size_2
s2 = size_2 < size_2/2
s3 = true_size >= size_1
s4 = size_2 <= true_size
# + [markdown] id="1-V7rojp_3cP"
# ### Logical Operations
# + colab={"base_uri": "https://localhost:8080/"} id="zE5AJ7SS_6FS" outputId="a3585f00-9981-4f55-d21a-245da8573dbc"
size_1 == true_size
size_1
# + colab={"base_uri": "https://localhost:8080/"} id="YjgaRK1_AG_5" outputId="d118ef8e-c3c7-4168-85ff-a8ea03f580fa"
size_1 is true_size
# + colab={"base_uri": "https://localhost:8080/"} id="TTO0IEZyAaMK" outputId="7aea5dd0-b978-49e3-b75b-c0475c7e86e9"
size_1 is not true_size
# + colab={"base_uri": "https://localhost:8080/"} id="B9QxUqs5AhRa" outputId="89f12551-d276-4a38-a23c-b629fccfca48"
p, q = True, False
conj = p and q
conj
# + colab={"base_uri": "https://localhost:8080/"} id="y4fvDcTbA-nr" outputId="22e00966-b79c-4b84-e19b-7bf0393b6c26"
disj = p or q
disj
# + colab={"base_uri": "https://localhost:8080/"} id="QWslCN0yBPQX" outputId="ee1795f9-20b8-41e1-fbd3-5eaf61b0dd04"
nand = not(p and q)
nand
# + [markdown] id="eVbQ4t1nBzb0"
# ## Input and Output
# + colab={"base_uri": "https://localhost:8080/"} id="QFyn6em0B3Fm" outputId="7a86253a-283d-4174-b07a-5ee95e0d3c56"
print("Hello World! Kumusta Kana")
# + id="f966aTdeCFrY"
cnt = 14000
# + colab={"base_uri": "https://localhost:8080/"} id="M9fH6oO2CIYI" outputId="b22d0c90-9ccb-4729-cd0a-b33b5510eb9b"
string = "Hello World!"
print(string, ", Current COVID count is:", cnt)
cnt += 10000
# + colab={"base_uri": "https://localhost:8080/", "height": 129} id="j32WnFiwDAI_" outputId="81f4ec95-e626-485a-c942-8035d66c6ff8"
print(f"{string}, current count is:" {cnt}")
# + colab={"base_uri": "https://localhost:8080/"} id="B_YszWMAEAzO" outputId="5965c7dc-0b96-4161-982f-009fed01d6e8"
sem_grade = 86.25
name = "Eugene"
print( "Hello {} your semestral grade is: {}" .format(name, sem_grade))
# + colab={"base_uri": "https://localhost:8080/", "height": 129} id="punF60a3Ibuj" outputId="a1064505-a802-438a-85ac-8a3dad727c88"
pg, mg, fg = 0.3, 0.3, 0.4
print( "The weights of your semestral grades are:\
\n\t {:.2%} for Prelims\
\n\t {:.2%} for Midterms, and\
\n\t {:.2%} for Finals." .format(pg, mg, fg) )
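As a side note, the same percentage formatting can also be written with f-strings (available since Python 3.6):

```python
pg, mg, fg = 0.3, 0.3, 0.4
# {value:.2%} multiplies by 100 and appends a percent sign with 2 decimals
summary = (f"The weights of your semestral grades are:\n"
           f"\t{pg:.2%} for Prelims\n"
           f"\t{mg:.2%} for Midterms, and\n"
           f"\t{fg:.2%} for Finals.")
print(summary)
```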
# + [markdown] id="AKgfNSDxLQVp"
# ### Looping Statements
# + [markdown] id="QrPRYW0wLWT3"
# ## While
# + colab={"base_uri": "https://localhost:8080/"} id="B-2nvsiJLZtk" outputId="2da708a1-5d34-4b4c-a34e-d3391db8e837"
## while loops
i, j = 0, 10
while(i<=j):
print(f"{i}\t|t{j}")
i += 1
# + [markdown] id="1wr9impWMX8i"
# ## For
# + colab={"base_uri": "https://localhost:8080/"} id="jNrKlu2HMZow" outputId="53d6dc17-7cdd-4e30-ed90-0e8515e1cafe"
# for(int i = 0; i < 10; i++){
#     printf(i);
# }
i = 0
for i in range(11):
    print(i)
# + colab={"base_uri": "https://localhost:8080/"} id="hj3nacsqMral" outputId="1e02ff2e-a34c-4d61-8185-2d5d01245025"
Playlist = ["B<NAME>", "<NAME>", "Buko"]
print('Now Playing:\n')
for song in Playlist:
print(song)
# + [markdown] id="v8ydqfkgNUWZ"
# ## Flow Control
# + colab={"base_uri": "https://localhost:8080/"} id="8xkr14xoNWzd" outputId="48272a58-ec49-47d4-ddd5-54c65b06b7c8"
num_1, num_2 = 14, 13
if(num_1 == num_2):
print("HAHA")
elif(num_1>num_2):
print("HOHO")
else:
print("HUHU")
# + [markdown] id="cHh7cGhIN9kj"
# ## Functions
# + id="94q1cIVMOAE8"
# void DeleteUser(int userid){
#     delete(userid);
# }
def delete_user (userid):
print("Successfully deleted user: {}" .format(userid))
# + colab={"base_uri": "https://localhost:8080/"} id="LTvm8s8dOdal" outputId="85c38249-1776-4aab-8a95-ccd17cf0a6d0"
userid = 2020_100100
delete_user(2020_100100)
# + id="VbIESBeGO_EK"
addend1, addend2 = 5, 6
# + id="Deh8Bg20OzcS"
def add(addend1, addend2):
sum = addend1 + addend2
return sum
# + colab={"base_uri": "https://localhost:8080/"} id="A_i0of1YP1dl" outputId="8e0ee1a8-94d7-4b90-921d-48c8a160eafe"
add(5, 4)
|
Assignment2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# %load_ext autoreload
# %autoreload 2
from matplotlib.pyplot import imshow
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
import os
import sys
sys.path.append('../')
import gp
# +
import glob, os
import mahotas as mh
c_gold = []
c_rhoana = []
for z in range(250,300):
path = '/home/d/data/cylinderNEW/'
gold = sorted(glob.glob(os.path.join(path, 'gold', '*'+str(z)+'.png')))
rhoana = sorted(glob.glob(os.path.join(path, 'rhoana', '*'+str(z)+'.png')))
mask = sorted(glob.glob(os.path.join(path, 'mask', '*'+str(z)+'.png')))
mask = mh.imread(mask[0]).astype(np.bool)
gold = mh.imread(gold[0])
rhoana = mh.imread(rhoana[0])
#convert ids from rgb to single channel
rhoana_single = np.zeros((rhoana.shape[0], rhoana.shape[1]), dtype=np.uint64)
rhoana_single[:, :] = rhoana[:,:,0]*256*256 + rhoana[:,:,1]*256 + rhoana[:,:,2]
gold_single = np.zeros((gold.shape[0], gold.shape[1]), dtype=np.uint64)
gold_single[:, :] = gold[:,:,0]*256*256 + gold[:,:,1]*256 + gold[:,:,2]
# relabel the segmentations
gold_single = gp.Util.relabel(gold_single)
rhoana_single = gp.Util.relabel(rhoana_single)
#mask the rhoana output
rhoana_single[mask==0] = 0
c_gold.append(gold_single)
c_rhoana.append(rhoana_single)
# +
import matplotlib.gridspec as gridspec
from matplotlib import rcParams
rcParams.update({'figure.autolayout': True})
left, width = .25, .5
bottom, height = .25, .5
right = left + width
top = bottom + height
fig = plt.figure(1, figsize=(30,10))
gs = gridspec.GridSpec(1,2,width_ratios=[1,1])
gs.update(wspace=0.12, hspace=0.05)
BEST95 = 2199
DATA = 'cyl'
clamper=20000
plt.subplot(gs[0])
a,b,c,d = gp.Stats.analyze_users2(['auto95FP_NEW'], ['auto00GP_NEW'], c_gold, c_rhoana, returnplt=True, vilabel=True,
hideYlabels=False, showlegend=False,hideXlabels=False,clabel=True,hline=BEST95,
DATADIR='/home/d/CYLINDERSTUDY/',clampX=clamper,data=DATA)
plt.title('Automatic', y=1.02)
plt.subplot(gs[1])
plt.title('Oracle', y=1.02)
a,b,c,d = gp.Stats.analyze_users2(['simuserFP_NEW'], ['simuserGP_NEW'], c_gold, c_rhoana, returnplt=True, vilabel=False,
hideYlabels=True, showlegend=True,bestlabel=True, hideXlabels=False,clabel=True,
DATADIR='/home/d/CYLINDERSTUDY/',clampX=clamper,data=DATA)
# fig.legend()
plt.savefig('/home/d/PAPERGFX/'+DATA+'_trails.pdf', bbox_inches='tight')
# -
|
ipy_test/NEW_TRAIL_CYLINDER.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2 with Spark 1.6
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# # Unpack Data
#
# In this notebook, we'll take the `basic` data set and use the `ibmseti` Python package to convert each data file into a spectrogram.
# + deletable=true editable=true
from __future__ import division
# import cStringIO
# import glob
# import json
# import requests
import ibmseti
import os
import zipfile
import numpy as np               # needed by draw_spectrogram below
import matplotlib.pyplot as plt  # needed by draw_spectrogram below
# import pandas as pd
# +
teamname = 'Machine_Contact_Enterprise_SETI'
mydatafolder = os.path.join( os.environ['PWD'], teamname )
# Create folder for zip to extract to
setiDataDir = mydatafolder + '/dataFinal'
if os.path.exists(setiDataDir) is False:
os.makedirs(setiDataDir)
# Create folder for future spectrograms
outputpng_folder = mydatafolder + '/pngFinal'
if os.path.exists(outputpng_folder) is False:
os.makedirs(outputpng_folder)
# +
# Extract Data folder
def unzip_data(dest, zipf):
# zipf = array of zip files
for i in zipf:
zz = zipfile.ZipFile(mydatafolder + '/' + i )
zz.extractall(dest)
unzip_data(setiDataDir, ['basic4_test_hackathon_final.zip'])
# when using the larger dataset, you will have multiple zip files. Pass an array of those zip files to unzip_data
# -
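When using the larger dataset with several archives, the zip list can also be collected automatically rather than hard-coded. A small sketch (assuming all archives sit directly inside the team folder):

```python
import glob
import os
import zipfile

def unzip_all(dest, folder):
    """Extract every .zip found directly inside `folder` into `dest`,
    returning the list of archives that were processed."""
    archives = sorted(glob.glob(os.path.join(folder, '*.zip')))
    for path in archives:
        with zipfile.ZipFile(path) as zz:
            zz.extractall(dest)
    return archives
```

For example, `unzip_all(setiDataDir, mydatafolder)` would pick up every archive dropped into the team folder without listing them by name.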
# Choose your data set!
workingDataDir = setiDataDir
workingIndexFile = os.path.join(mydatafolder, 'public_list_basic_test_hackathon_final.csv')
# ls $mydatafolder/pngFinal
# + deletable=true editable=true
#Use `ibmseti`, or other methods, to draw the spectrograms
def draw_spectrogram(data):
aca = ibmseti.compamp.SimCompamp(data)
spec = aca.get_spectrogram()
fig, ax = plt.subplots(figsize=(10, 5))
ax.imshow(np.log(spec), aspect = 0.5*float(spec.shape[1]) / spec.shape[0], cmap='gray')
return fig, aca.header()
def convert_to_spectrogram_and_save(row):
try:
uuid, classification = row.split(',')
except:
uuid = row #this handles the test data since it doesn't have "SIGNAL_CLASSIFICATION" in index file
classification = 'unknown: test data'
#create path to local data file
filename = uuid + '.dat'
filepath = os.path.join(workingDataDir, filename)
#retrieve that data file
rawdata = open(filepath).read()
fig, header = draw_spectrogram(rawdata)
png_file_name = filename + '.png'
fig.savefig( os.path.join(outputpng_folder, png_file_name) )
plt.close(fig)
return (filename, header, png_file_name)
# -
rdd = sc.textFile(workingIndexFile, 30).filter(lambda x: x.startswith('UUID') is False) #the filter removes the header
rdd.count()
results = rdd.map(convert_to_spectrogram_and_save).collect()
# + deletable=true editable=true
results[0]
# -
|
notebooks/Unpack+Data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# Given an integer array nums, return the number of elements that contain an even number of digits.
#
# Example 1:
# Input: nums = [12,345,2,6,7896]
# Output: 2
# Explanation:
# 12 has 2 digits (an even number of digits)
# 345 has 3 digits (an odd number of digits)
# 2 has 1 digit (an odd number of digits)
# 6 has 1 digit (an odd number of digits)
# 7896 has 4 digits (an even number of digits)
# Therefore only 12 and 7896 contain an even number of digits.
#
# Example 2:
# Input: nums = [555,901,482,1771]
# Output: 1
# Explanation:
# Only 1771 contains an even number of digits.
#
# Constraints:
# 1 <= nums.length <= 500
# 1 <= nums[i] <= 10^5
# -
from typing import List

class Solution:
    def findNumbers(self, nums: List[int]) -> int:
nums = [str(n) for n in nums]
cnt = 0
for n in nums:
if len(n) % 2 == 0:
cnt += 1
return cnt
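As an alternative to string conversion, the digit count can also be computed arithmetically by repeated integer division (this is a sketch, not part of the submitted solution):

```python
def count_even_digit_numbers(nums):
    cnt = 0
    for n in nums:
        digits = 0
        while n > 0:       # strip one digit per iteration
            n //= 10
            digits += 1
        if digits % 2 == 0:
            cnt += 1
    return cnt
```

Both versions are O(total digits); the arithmetic one just avoids allocating intermediate strings.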
|
String/1123/1295. Find Numbers with Even Number of Digits.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install /home/knikaido/work/Cornell-Birdcall-Identification/data/resnest50-fast-package/resnest-0.0.6b20200701/resnest/
# !pip install torch==1.6.0
# !pip install opencv-python
# !pip install slackweb
# !pip install torchvision==0.2.2
# !pip install torch_summary
# !pip install albumentations
# +
from pathlib import Path
import numpy as np
import pandas as pd
import typing as tp
import yaml
import random
import os
import sys
import soundfile as sf
import librosa
import cv2
import matplotlib.pyplot as plt
import time
import pickle
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import resnest.torch as resnest_torch
from torchvision import models
from sklearn.model_selection import StratifiedKFold, KFold
from sklearn.metrics import f1_score
from radam import RAdam
from resnet import ResNet, Bottleneck
from albumentations.core.transforms_interface import DualTransform, BasicTransform
import albumentations as albu
pd.options.display.max_rows = 500
pd.options.display.max_columns = 500
# -
with open('0917_config.yml', 'r') as yml:
settings = yaml.safe_load(yml)
def set_seed(seed: int = 42):
random.seed(seed)
np.random.seed(seed)
os.environ["PYTHONHASHSEED"] = str(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed) # type: ignore
# torch.backends.cudnn.deterministic = True # type: ignore
# torch.backends.cudnn.benchmark = True # type: ignore
# +
# def progress_bar(i):
# pro_bar = ('=' * i) + (' ' * (pro_size - i))
# print('\r[{0}] {1}%'.format(pro_bar, i / pro_size * 100.), end='')
# -
# ROOT = Path.cwd().parent
# INPUT_ROOT = ROOT / "input"
INPUT_ROOT = Path("/home/knikaido/work/Cornell-Birdcall-Identification/data")
RAW_DATA = INPUT_ROOT / "birdsong-recognition"
TRAIN_AUDIO_DIR = RAW_DATA / "train_audio"
TRAIN_RESAMPLED_AUDIO_DIRS = [
INPUT_ROOT / "birdsong-resampled-train-audio-{:0>2}".format(i) for i in range(5)
]
TEST_AUDIO_DIR = RAW_DATA / "test_audio"
# +
# train = pd.read_csv(RAW_DATA / "train.csv")
train = pd.read_csv(TRAIN_RESAMPLED_AUDIO_DIRS[0] / "train_mod.csv")
# TRAIN_RESAMPLED_AUDIO_DIRS = [
# INPUT_ROOT / "birdsong-resampled-train-audio-{:0>2}-npy".format(i) for i in range(5)
# ]
# for i in range(len(train)):
# train['resampled_filename'][i] = train['resampled_filename'][i][:-4] + '.npy'
train.head().T
# -
if not TEST_AUDIO_DIR.exists():
TEST_AUDIO_DIR = INPUT_ROOT / "birdcall-check" / "test_audio"
test = pd.read_csv(INPUT_ROOT / "birdcall-check" / "test.csv")
else:
test = pd.read_csv(RAW_DATA / "test.csv")
test.head().T
# +
BIRD_CODE = {
'aldfly': 0, 'ameavo': 1, 'amebit': 2, 'amecro': 3, 'amegfi': 4,
'amekes': 5, 'amepip': 6, 'amered': 7, 'amerob': 8, 'amewig': 9,
'amewoo': 10, 'amtspa': 11, 'annhum': 12, 'astfly': 13, 'baisan': 14,
'baleag': 15, 'balori': 16, 'banswa': 17, 'barswa': 18, 'bawwar': 19,
'belkin1': 20, 'belspa2': 21, 'bewwre': 22, 'bkbcuc': 23, 'bkbmag1': 24,
'bkbwar': 25, 'bkcchi': 26, 'bkchum': 27, 'bkhgro': 28, 'bkpwar': 29,
'bktspa': 30, 'blkpho': 31, 'blugrb1': 32, 'blujay': 33, 'bnhcow': 34,
'boboli': 35, 'bongul': 36, 'brdowl': 37, 'brebla': 38, 'brespa': 39,
'brncre': 40, 'brnthr': 41, 'brthum': 42, 'brwhaw': 43, 'btbwar': 44,
'btnwar': 45, 'btywar': 46, 'buffle': 47, 'buggna': 48, 'buhvir': 49,
'bulori': 50, 'bushti': 51, 'buwtea': 52, 'buwwar': 53, 'cacwre': 54,
'calgul': 55, 'calqua': 56, 'camwar': 57, 'cangoo': 58, 'canwar': 59,
'canwre': 60, 'carwre': 61, 'casfin': 62, 'caster1': 63, 'casvir': 64,
'cedwax': 65, 'chispa': 66, 'chiswi': 67, 'chswar': 68, 'chukar': 69,
'clanut': 70, 'cliswa': 71, 'comgol': 72, 'comgra': 73, 'comloo': 74,
'commer': 75, 'comnig': 76, 'comrav': 77, 'comred': 78, 'comter': 79,
'comyel': 80, 'coohaw': 81, 'coshum': 82, 'cowscj1': 83, 'daejun': 84,
'doccor': 85, 'dowwoo': 86, 'dusfly': 87, 'eargre': 88, 'easblu': 89,
'easkin': 90, 'easmea': 91, 'easpho': 92, 'eastow': 93, 'eawpew': 94,
'eucdov': 95, 'eursta': 96, 'evegro': 97, 'fiespa': 98, 'fiscro': 99,
'foxspa': 100, 'gadwal': 101, 'gcrfin': 102, 'gnttow': 103, 'gnwtea': 104,
'gockin': 105, 'gocspa': 106, 'goleag': 107, 'grbher3': 108, 'grcfly': 109,
'greegr': 110, 'greroa': 111, 'greyel': 112, 'grhowl': 113, 'grnher': 114,
'grtgra': 115, 'grycat': 116, 'gryfly': 117, 'haiwoo': 118, 'hamfly': 119,
'hergul': 120, 'herthr': 121, 'hoomer': 122, 'hoowar': 123, 'horgre': 124,
'horlar': 125, 'houfin': 126, 'houspa': 127, 'houwre': 128, 'indbun': 129,
'juntit1': 130, 'killde': 131, 'labwoo': 132, 'larspa': 133, 'lazbun': 134,
'leabit': 135, 'leafly': 136, 'leasan': 137, 'lecthr': 138, 'lesgol': 139,
'lesnig': 140, 'lesyel': 141, 'lewwoo': 142, 'linspa': 143, 'lobcur': 144,
'lobdow': 145, 'logshr': 146, 'lotduc': 147, 'louwat': 148, 'macwar': 149,
'magwar': 150, 'mallar3': 151, 'marwre': 152, 'merlin': 153, 'moublu': 154,
'mouchi': 155, 'moudov': 156, 'norcar': 157, 'norfli': 158, 'norhar2': 159,
'normoc': 160, 'norpar': 161, 'norpin': 162, 'norsho': 163, 'norwat': 164,
'nrwswa': 165, 'nutwoo': 166, 'olsfly': 167, 'orcwar': 168, 'osprey': 169,
'ovenbi1': 170, 'palwar': 171, 'pasfly': 172, 'pecsan': 173, 'perfal': 174,
'phaino': 175, 'pibgre': 176, 'pilwoo': 177, 'pingro': 178, 'pinjay': 179,
'pinsis': 180, 'pinwar': 181, 'plsvir': 182, 'prawar': 183, 'purfin': 184,
'pygnut': 185, 'rebmer': 186, 'rebnut': 187, 'rebsap': 188, 'rebwoo': 189,
'redcro': 190, 'redhea': 191, 'reevir1': 192, 'renpha': 193, 'reshaw': 194,
'rethaw': 195, 'rewbla': 196, 'ribgul': 197, 'rinduc': 198, 'robgro': 199,
'rocpig': 200, 'rocwre': 201, 'rthhum': 202, 'ruckin': 203, 'rudduc': 204,
'rufgro': 205, 'rufhum': 206, 'rusbla': 207, 'sagspa1': 208, 'sagthr': 209,
'savspa': 210, 'saypho': 211, 'scatan': 212, 'scoori': 213, 'semplo': 214,
'semsan': 215, 'sheowl': 216, 'shshaw': 217, 'snobun': 218, 'snogoo': 219,
'solsan': 220, 'sonspa': 221, 'sora': 222, 'sposan': 223, 'spotow': 224,
'stejay': 225, 'swahaw': 226, 'swaspa': 227, 'swathr': 228, 'treswa': 229,
'truswa': 230, 'tuftit': 231, 'tunswa': 232, 'veery': 233, 'vesspa': 234,
'vigswa': 235, 'warvir': 236, 'wesblu': 237, 'wesgre': 238, 'weskin': 239,
'wesmea': 240, 'wessan': 241, 'westan': 242, 'wewpew': 243, 'whbnut': 244,
'whcspa': 245, 'whfibi': 246, 'whtspa': 247, 'whtswi': 248, 'wilfly': 249,
'wilsni1': 250, 'wiltur': 251, 'winwre3': 252, 'wlswar': 253, 'wooduc': 254,
'wooscj2': 255, 'woothr': 256, 'y00475': 257, 'yebfly': 258, 'yebsap': 259,
'yehbla': 260, 'yelwar': 261, 'yerwar': 262, 'yetvir': 263, 'nocall': 264
}
INV_BIRD_CODE = {v: k for k, v in BIRD_CODE.items()}
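# A hedged aside: the Dataset classes below one-hot encode these species codes, and predictions are decoded back through the inverse mapping via `argmax`. A minimal self-contained sketch of that round trip, using a small hypothetical subset of the mapping:

```python
import numpy as np

# hypothetical three-class subset of the BIRD_CODE mapping above
bird_code = {'aldfly': 0, 'ameavo': 1, 'nocall': 2}
inv_bird_code = {v: k for k, v in bird_code.items()}

def encode(ebird_code):
    """One-hot encode a species code, as the Dataset classes do."""
    labels = np.zeros(len(bird_code), dtype="f")
    labels[bird_code[ebird_code]] = 1
    return labels

def decode(prediction):
    """Map an output vector back to a species code via argmax."""
    return inv_bird_code[int(np.argmax(prediction))]
```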
# +
SR = 32000
class AudioTransform(BasicTransform):
""" Transform for audio task. This is the main class where we override the targets and update params function for our need"""
@property
def targets(self):
return {"data": self.apply}
def update_params(self, params, **kwargs):
if hasattr(self, "interpolation"):
params["interpolation"] = self.interpolation
if hasattr(self, "fill_value"):
params["fill_value"] = self.fill_value
return params
class PitchShift(AudioTransform):
""" Do time shifting of audio """
def __init__(self, always_apply=False, p=0.5 , n_steps=None):
super(PitchShift, self).__init__(always_apply, p)
'''
nsteps here is equal to number of semitones
'''
self.n_steps = n_steps
def apply(self,data,**params):
'''
data : ndarray of audio timeseries
'''
return librosa.effects.pitch_shift(data,sr=SR,n_steps=self.n_steps)
class AddGaussianNoise(AudioTransform):
""" Do time shifting of audio """
def __init__(self, always_apply=False, p=0.5):
super(AddGaussianNoise, self).__init__(always_apply, p)
def apply(self,data,**params):
'''
data : ndarray of audio timeseries
'''
noise = np.random.randn(len(data))
data_wn = data + 0.005*noise
return data_wn
class NoAugment(AudioTransform):
""" Do time shifting of audio """
def __init__(self, always_apply=False):
super(NoAugment, self).__init__(always_apply)
def apply(self,data,**params):
'''
data : ndarray of audio timeseries
'''
return data
def get_augmentation():
train_transform = [
# PitchShift(p=1.0,n_steps=4),
AddGaussianNoise(p=1.0),
NoAugment(),
]
return albu.OneOf(train_transform)  # OneOf applies exactly one transform per call (unlike Compose, which applies all)
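# The Gaussian-noise branch above reduces to plain numpy; a standalone sketch of the same idea (the 0.005 amplitude matches `AddGaussianNoise` above, everything else is illustrative):

```python
import numpy as np

def add_gaussian_noise(data, amplitude=0.005):
    """Add white Gaussian noise to an audio time series (cf. AddGaussianNoise above)."""
    noise = np.random.randn(len(data))
    return data + amplitude * noise

y = np.zeros(32000, dtype=np.float32)  # one second of silence at 32 kHz
y_noisy = add_gaussian_noise(y)
```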
# +
PERIOD = 5
MEAN = np.array([0.485, 0.456, 0.406])
STD = np.array([0.229, 0.224, 0.225])
def mono_to_color(
X: np.ndarray, mean=None, std=None,
norm_max=None, norm_min=None, eps=1e-6
):
# Stack X as [X,X,X]
X = np.stack([X, X, X], axis=-1)
# Standardize
# use explicit None checks so that a caller-supplied 0 is not silently ignored
mean = mean if mean is not None else X.mean()
X = X - mean
std = std if std is not None else X.std()
Xstd = X / (std + eps)
_min, _max = Xstd.min(), Xstd.max()
norm_max = norm_max if norm_max is not None else _max
norm_min = norm_min if norm_min is not None else _min
if (_max - _min) > eps:
# Normalize to [0, 255]
V = Xstd
V[V < norm_min] = norm_min
V[V > norm_max] = norm_max
V = 255 * (V - norm_min) / (norm_max - norm_min)
V = V.astype(np.uint8)
else:
# Just zero
V = np.zeros_like(Xstd, dtype=np.uint8)
return V
def normalize(image, mean, std):
image = (image / 255.0).astype(np.float32)
image = (image - mean) / std
return np.moveaxis(image, 2, 0).astype(np.float32)
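# `mono_to_color` stacks a single-channel mel-spectrogram into three channels, standardizes it, and rescales it to [0, 255]; a condensed standalone sketch of that default path (no external mean/std/norm parameters):

```python
import numpy as np

def mono_to_color_sketch(X, eps=1e-6):
    """Stack a 2-D array to 3 channels, standardize, and rescale to [0, 255]."""
    X = np.stack([X, X, X], axis=-1)
    Xstd = (X - X.mean()) / (X.std() + eps)
    _min, _max = Xstd.min(), Xstd.max()
    if (_max - _min) > eps:
        # map the value range onto [0, 255] and quantize
        V = 255 * (Xstd - _min) / (_max - _min)
        return V.astype(np.uint8)
    # degenerate (constant) input: return all zeros
    return np.zeros_like(Xstd, dtype=np.uint8)

img = mono_to_color_sketch(np.random.randn(128, 256).astype(np.float32))
```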
class SpectrogramTrainDataset(data.Dataset):
def __init__(
self,
file_list: tp.List[tp.List[str]], img_size=224,
waveform_transforms=None, spectrogram_transforms=None, melspectrogram_parameters={}
):
self.file_list = file_list # list of list: [file_path, ebird_code]
self.img_size = img_size
self.waveform_transforms = waveform_transforms
self.spectrogram_transforms = spectrogram_transforms
self.melspectrogram_parameters = melspectrogram_parameters
with open('./nb034_event_intensity_500to16000hz.pickle', mode='rb') as f:
self.df_event = pickle.load(f)
def __len__(self):
return len(self.file_list)
def __getitem__(self, idx: int):
wav_path, ebird_code = self.file_list[idx]
y, sr = sf.read(wav_path)
if self.waveform_transforms:
y = self.waveform_transforms(y)
else:
len_y = len(y)
effective_length = sr * PERIOD
if len_y < effective_length:
new_y = np.zeros(effective_length, dtype=y.dtype)
start = np.random.randint(effective_length - len_y)
new_y[start:start + len_y] = y
y = new_y.astype(np.float32)
elif len_y > effective_length:
basename = os.path.basename(wav_path)
event_sec_list = self.df_event.query('filename == @basename').event_sec_list.to_list()[0]
start = np.random.randint(len_y - effective_length)
y = y[start:start + effective_length].astype(np.float32)
if len(event_sec_list) == 0:
ebird_code = 'nocall'
else:
# event_sec_list = self.string_to_list(event_sec_list)
end_lim = start + effective_length - sr
start_lim = start + sr
event_list_in_clip = [sec for sec in event_sec_list if (sec*sr < end_lim) and (sec*sr > start_lim)]
# print(event_list_in_clip)
if(len(event_list_in_clip) == 0):
ebird_code = 'nocall'
# on event
# if len(event_sec_list) != 0:
# choice = random.choice(event_sec_list)
# # check whether the event lies at least 2.5 s from the start and 2.5 s from the end (boundary issue)
# ed_sec = len_y / sr
# st_range_sec = PERIOD/2 + 0.0001
# ed_range_sec = ed_sec - st_range_sec
# range_in = (st_range_sec <= choice) & (choice <= ed_range_sec)
# if range_in:
# event_start = int((choice - PERIOD/2) * sr)
# y = y[event_start:event_start + effective_length].astype(np.float32)
# ebird_code = 'nocall'
# # start = np.random.randint(len_y - effective_length)
# # y = y[start:start + effective_length].astype(np.float32)
# # y += event_y
# else:
# # crop at a random position
# start = np.random.randint(len_y - effective_length)
# y = y[start:start + effective_length].astype(np.float32)
# off event
# else:
# # if no event could be detected, crop at a random position
# start = np.random.randint(len_y - effective_length)
# y = y[start:start + effective_length].astype(np.float32)
# ebird_code = 'nocall'
# ----
else:
y = y.astype(np.float32)
melspec = librosa.feature.melspectrogram(y, sr=sr, **self.melspectrogram_parameters)
melspec = librosa.power_to_db(melspec).astype(np.float32)
if self.spectrogram_transforms:
melspec = self.spectrogram_transforms(melspec)
else:
pass
# melspec = np.reshape(melspec, (1, 128, -1))
image = mono_to_color(melspec)
height, width, _ = image.shape
image = cv2.resize(image, (int(width * self.img_size / height), self.img_size))
image = normalize(image, mean=MEAN, std=STD)
# image = np.moveaxis(image, 2, 0)
# labels = np.zeros(len(BIRD_CODE), dtype="i")
labels = np.zeros(len(BIRD_CODE), dtype="f")
labels[BIRD_CODE[ebird_code]] = 1
return image, labels
def plot_spec(melspec):
plt.figure(figsize=(6,3), dpi= 80)
melspec = librosa.power_to_db(melspec).astype(np.float32)
plt.imshow(melspec)
class SpectrogramAugmentDataset(data.Dataset):
def __init__(
self,
file_list: tp.List[tp.List[str]], img_size=224,
waveform_transforms=None, spectrogram_transforms=None, melspectrogram_parameters={}
):
self.file_list = file_list # list of list: [file_path, ebird_code]
self.img_size = img_size
self.waveform_transforms = waveform_transforms
self.spectrogram_transforms = spectrogram_transforms
self.melspectrogram_parameters = melspectrogram_parameters
def __len__(self):
return len(self.file_list)
def __getitem__(self, idx: int):
wav_path, ebird_code = self.file_list[idx]
y, sr = sf.read(wav_path)
# transform = AddGaussianNoise(p=1.0)
# y = transform(data=y)['data']
transforms = get_augmentation()
y = transforms(data=y)['data']
if self.waveform_transforms:
y = self.waveform_transforms(y)
else:
len_y = len(y)
effective_length = sr * PERIOD
if len_y < effective_length:
new_y = np.zeros(effective_length, dtype=y.dtype)
start = np.random.randint(effective_length - len_y)
new_y[start:start + len_y] = y
y = new_y.astype(np.float32)
elif len_y > effective_length:
start = np.random.randint(len_y - effective_length)
y = y[start:start + effective_length].astype(np.float32)
else:
y = y.astype(np.float32)
melspec = librosa.feature.melspectrogram(y, sr=sr, **self.melspectrogram_parameters)
# plot_spec(melspec)
melspec = librosa.power_to_db(melspec).astype(np.float32)
if self.spectrogram_transforms:
melspec = self.spectrogram_transforms(melspec)
else:
pass
# melspec = np.reshape(melspec, (1, 128, -1))
image = mono_to_color(melspec)
height, width, _ = image.shape
image = cv2.resize(image, (int(width * self.img_size / height), self.img_size))
image = normalize(image, mean=MEAN, std=STD)
# image = np.moveaxis(image, 2, 0)
# labels = np.zeros(len(BIRD_CODE), dtype="i")
labels = np.zeros(len(BIRD_CODE), dtype="f")
labels[BIRD_CODE[ebird_code]] = 1
return image, labels
class SpectrogramValidDataset(data.Dataset):
def __init__(
self,
file_list: tp.List[tp.List[str]], img_size=224,
waveform_transforms=None, spectrogram_transforms=None, melspectrogram_parameters={}
):
self.file_list = file_list # list of list: [file_path, ebird_code]
self.img_size = img_size
self.waveform_transforms = waveform_transforms
self.spectrogram_transforms = spectrogram_transforms
self.melspectrogram_parameters = melspectrogram_parameters
def __len__(self):
return len(self.file_list)
def __getitem__(self, idx: int):
wav_path, ebird_code = self.file_list[idx]
y, sr = sf.read(wav_path)
if self.waveform_transforms:
y = self.waveform_transforms(y)
else:
len_y = len(y)
effective_length = sr * PERIOD
if len_y < effective_length:
new_y = np.zeros(effective_length, dtype=y.dtype)
start = np.random.randint(effective_length - len_y)
new_y[start:start + len_y] = y
y = new_y.astype(np.float32)
elif len_y > effective_length:
start = np.random.randint(len_y - effective_length)
y = y[start:start + effective_length].astype(np.float32)
else:
y = y.astype(np.float32)
melspec = librosa.feature.melspectrogram(y, sr=sr, **self.melspectrogram_parameters)
# plot_spec(melspec)
melspec = librosa.power_to_db(melspec).astype(np.float32)
if self.spectrogram_transforms:
melspec = self.spectrogram_transforms(melspec)
else:
pass
# melspec = np.reshape(melspec, (1, 128, -1))
image = mono_to_color(melspec)
height, width, _ = image.shape
image = cv2.resize(image, (int(width * self.img_size / height), self.img_size))
image = normalize(image, mean=MEAN, std=STD)
# image = np.moveaxis(image, 2, 0)
# labels = np.zeros(len(BIRD_CODE), dtype="i")
labels = np.zeros(len(BIRD_CODE), dtype="f")
labels[BIRD_CODE[ebird_code]] = 1
return image, labels
# -
def get_loaders_for_training(
args_dataset: tp.Dict, args_loader: tp.Dict,
train_file_list: tp.List[str], val_file_list: tp.List[str]
):
# # make dataset
train_dataset = SpectrogramValidDataset(train_file_list, **args_dataset)  # note: the non-augmenting dataset is also used for training here
val_dataset = SpectrogramValidDataset(val_file_list, **args_dataset)
# # make dataloader
train_loader = data.DataLoader(train_dataset, **args_loader["train"])
val_loader = data.DataLoader(val_dataset, **args_loader["val"])
return train_loader, val_loader
class ResNeSt(nn.Module):
def __init__(self, base_model_name: str, pretrained=True,
num_classes=265):
super().__init__()
torch.hub.list('zhanghang1989/ResNeSt', force_reload=True)
# load pretrained models, using ResNeSt-50 as an example
base_model = torch.hub.load('zhanghang1989/ResNeSt', 'resnest50', pretrained=True)
layers = list(base_model.children())[:-2]
layers.append(nn.AdaptiveMaxPool2d(1))
self.encoder = nn.Sequential(*layers)
in_features = base_model.fc.in_features
self.classifier = nn.Sequential(
nn.Linear(in_features, 1024), nn.ReLU(), nn.Dropout(p=0.2),
nn.Linear(1024, 1024), nn.ReLU(), nn.Dropout(p=0.2),
nn.Linear(1024, num_classes))
def forward(self, x):
batch_size = x.size(0)
x = self.encoder(x).view(batch_size, -1)
x = self.classifier(x)
return x
# multiclass_proba = F.softmax(x, dim=1)
# multilabel_proba = torch.sigmoid(x)
# return {
# "logits": x,
# "multiclass_proba": multiclass_proba,
# "multilabel_proba": multilabel_proba
# }
# +
# def get_model(args: tp.Dict):
# model = ResNeSt('resnet50')
# return model
# -
def get_model(args: tp.Dict):
model = getattr(resnest_torch, args["name"])(pretrained=args["params"]["pretrained"])
del model.fc
# # use the same head as the baseline notebook.
model.fc = nn.Sequential(
nn.Linear(2048, 1024), nn.ReLU(), nn.Dropout(p=0.2),
nn.Linear(1024, 1024), nn.ReLU(), nn.Dropout(p=0.2),
nn.Linear(1024, args["params"]["n_classes"]))
return model
# +
tmp_list = []
for audio_d in TRAIN_RESAMPLED_AUDIO_DIRS:
if not audio_d.exists():
continue
for ebird_d in audio_d.iterdir():
if ebird_d.is_file():
continue
for wav_f in ebird_d.iterdir():
tmp_list.append([ebird_d.name, wav_f.name, wav_f.as_posix()])
train_wav_path_exist = pd.DataFrame(
tmp_list, columns=["ebird_code", "resampled_filename", "file_path"])
del tmp_list
train_all = pd.merge(
train, train_wav_path_exist, on=["ebird_code", "resampled_filename"], how="inner")
print(train.shape)
print(train_wav_path_exist.shape)
print(train_all.shape)
train_all.head()
# +
skf = StratifiedKFold(settings["split"]["params"]["n_splits"])
train_all["fold"] = -1
for fold_id, (train_index, val_index) in enumerate(skf.split(train_all, train_all["ebird_code"])):
train_all.iloc[val_index, -1] = fold_id
# # check the proportion
fold_proportion = pd.pivot_table(train_all, index="ebird_code", columns="fold", values="xc_id", aggfunc=len)
print(fold_proportion.shape)
fold_proportion.T
# +
use_fold = settings["globals"]["use_fold"]
train_file_list = train_all.query("fold != @use_fold")[["file_path", "ebird_code"]].values.tolist()
val_file_list = train_all.query("fold == @use_fold")[["file_path", "ebird_code"]].values.tolist()
print("[fold {}] train: {}, val: {}".format(use_fold, len(train_file_list), len(val_file_list)))
# -
train_file_list[0]
FREESOUND_DIR = INPUT_ROOT / 'ff1010bird_resample'
TEST_FREE_DIR = INPUT_ROOT / 'wav_32000_test'
free_csv = pd.read_csv(FREESOUND_DIR / 'ff1010bird_metadata.csv')
len(free_csv)
import glob
free_csv = free_csv[free_csv['hasbird']==0]
l = glob.glob(str(TEST_FREE_DIR / '*.wav'))
for i in range(len(l)):
l[i] = l[i].split('/')[-1][:-4]
free_csv = free_csv[~free_csv['itemid'].isin(l)]
free_csv.reset_index(inplace=True, drop=True)
for i in range(len(free_csv)):
free_csv.loc[i, 'itemid'] = str(FREESOUND_DIR) + '/wav/' + str(free_csv['itemid'][i]) + '.wav'
free_csv.rename(columns={'itemid': 'file_path', 'hasbird': 'ebird_code'}, inplace=True)
free_csv['ebird_code'] = 'nocall'
free_csv
# +
skf = KFold(n_splits=5, random_state=42,shuffle=True)
free_csv["fold"] = -1
for fold_id, (train_index, val_index) in enumerate(skf.split(free_csv)):
free_csv.iloc[val_index, -1] = fold_id
use_fold = settings["globals"]["use_fold"]
train_file_list_free = free_csv.query("fold != @use_fold")[["file_path", "ebird_code"]].values.tolist()
val_file_list_free = free_csv.query("fold == @use_fold")[["file_path", "ebird_code"]].values.tolist()
print("[fold {}] train: {}, val: {}".format(use_fold, len(train_file_list_free), len(val_file_list_free)))
train_file_list.extend(train_file_list_free)
val_file_list.extend(val_file_list_free)
# -
# +
# train_rate = train_all[['ebird_code', 'filename', 'duration']].sort_values('duration')
# train_low_rate = train_rate[train_rate['duration'] >= 4]['filename'].reset_index(drop=True).values.tolist()
# for i in range(len(train_low_rate)):
# train_low_rate[i] = train_low_rate[i].split('.')[0]
# +
# train_file_list_tmp = []
# for i in range(len(train_file_list)):
# if not (train_file_list[i][0].split('/')[-1].split('.')[0] in train_low_rate):
# train_file_list_tmp.append(train_file_list[i])
# train_file_list = train_file_list_tmp
# print("[fold {}] train: {}, val: {}".format(use_fold, len(train_file_list), len(val_file_list)))
# -
set_seed(settings["globals"]["seed"])
device = torch.device(settings["globals"]["device"])
output_dir = Path(settings["globals"]["output_dir"])
epoch = settings["globals"]["num_epochs"]
# +
# # # get loader
train_loader, val_loader = get_loaders_for_training(
settings["dataset"]["params"], settings["loader"], train_file_list, val_file_list)
# # # get model
model = get_model(settings["model"])
# model = get_model()
model = model.to(device)
# # # get optimizer
# optimizer = getattr(
# torch.optim, settings["optimizer"]["name"]
# )(model.parameters(), **settings["optimizer"]["params"])
optimizer = RAdam(model.parameters(),lr=0.001)
# # # get scheduler
scheduler = getattr(
torch.optim.lr_scheduler, settings["scheduler"]["name"]
)(optimizer, **settings["scheduler"]["params"])
# # # get loss
loss_func = getattr(nn, settings["loss"]["name"])(**settings["loss"]["params"])
# -
for batch_idx, (data, target) in enumerate(train_loader):
break
data[11].shape
data = data.to('cpu').detach().numpy().copy()
plt.figure()
plt.imshow(np.moveaxis(data[0], 0, 2))
def log(message):
print(message)
with open(output_dir / 'log.txt', 'a+') as logger:
logger.write(f'{message}\n')
import cloudpickle
def save_model(model):
with open(output_dir / 'model_ResNeSt_norm_best.pkl', 'wb') as f:
cloudpickle.dump(model, f)
# +
def plot_loss(t_loss, v_loss):
plt.figure(figsize=(4,3), dpi= 80)
plt.title('loss')
plt.plot(t_loss, color='tab:red', label='train')
plt.plot(v_loss, color='tab:blue', label='valid')
plt.minorticks_on()
plt.grid(b=True, which='major', color='#666666', linestyle='-')
plt.grid(b=True, which='minor', color='#999999', linestyle='-', alpha=0.2)
plt.legend()
plt.show()
def plot_f1(f1_mi, f1_ma):
plt.figure(figsize=(3,3), dpi= 80)
plt.title('f1')
plt.plot(f1_mi, color='tab:red', label='micro', linewidth=1, marker='x')
plt.plot(f1_ma, color='tab:blue', label='macro', linewidth=1, marker='x')
plt.grid(b=True, which='major', color='#666666', linestyle='-')
plt.minorticks_on()
plt.grid(b=True, which='minor', color='#999999', linestyle='-', alpha=0.2)
plt.legend()
plt.show()
# +
def _train_loop(
manager, args, model, device,
train_loader, val_loader, optimizer, scheduler, loss_func
):
"""Run minibatch training loop"""
while not manager.stop_trigger:
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
with manager.run_iteration():
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = loss_func(output, target)
ppe.reporting.report({'train/loss': loss.item()})
loss.backward()
optimizer.step()
scheduler.step()
def train_loop(
args, model, device,
train_loader, val_loader, optimizer, scheduler, loss_func
):
"""Run minibatch training loop"""
train_losses = []
valid_losses = []
best_f1_micro = 0
f1_macros = []
f1_micros = []
threshold = 0.8
start = time.time()
for i in range(epoch):
epoch_start = time.time()
model.train()
train_loss = 0
valid_loss = 0
scaler = torch.cuda.amp.GradScaler()
for batch_idx, (data, target) in enumerate(train_loader):
# with manager.run_iteration():
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
with torch.cuda.amp.autocast():
output = model(data)
loss = loss_func(output, target)
# ppe.reporting.report({'train/loss': loss.item()})
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
# loss.backward()
# optimizer.step()
train_loss += loss.item()
# train_loss += loss.item() * data.size(0)
print("\r"+'train_roop...'+str(batch_idx),end="")
# break
print('')
epoch_train_loss = train_loss / (batch_idx + 1)
train_losses.append(epoch_train_loss)
scheduler.step()
outputs = []
targets = []
model.eval()
for batch_idx, (data, target) in enumerate(val_loader):
with torch.no_grad():
data, target = data.to(device), target.to(device)
output = model(data)
loss = loss_func(output, target)
valid_loss += loss.item()
# valid_loss += loss.item() * data.size(0)
outputs.extend(np.argmax(torch.sigmoid(output).to('cpu').detach().numpy().copy(), axis = 1))
targets.extend(np.argmax(target.to('cpu').detach().numpy().copy(), axis = 1))
print("\r"+'valid_roop...'+str(batch_idx),end="")
# break
print('')
epoch_valid_loss = valid_loss / (batch_idx + 1)
valid_losses.append(epoch_valid_loss)
f1_macro = f1_score(np.array(targets), np.array(outputs), average='macro')
f1_micro = f1_score(np.array(targets), np.array(outputs), average='micro')
log(f'epoch [{i+1}/{epoch}] train_loss = {epoch_train_loss}, valid_loss = {epoch_valid_loss}')
log(f'epoch [{i+1}/{epoch}] f1macro = {f1_macro}, f1micro = {f1_micro}')
epoch_end = time.time() - epoch_start
log("epoch_time:{0}".format(epoch_end) + "[sec]")
log('\n')
f1_micros.append(f1_micro)
f1_macros.append(f1_macro)
plot_loss(train_losses, valid_losses)
plot_f1(f1_micros, f1_macros)
if(f1_micro > best_f1_micro):
print('save_model')
save_model(model)
best_f1_micro = f1_micro
whole_time = time.time() - start
log("elapsed_time:{0}".format(whole_time) + "[sec]")
# break
return model, train_losses, valid_losses, f1_micros, f1_macros
# -
# %%time
# # run training
model_, train_losses, valid_losses, f1_micros, f1_macros = train_loop(
settings, model, device,
train_loader, val_loader, optimizer, scheduler, loss_func)
import cloudpickle
# +
# with open(output_dir / 'model_ResNeSt_norm.pkl', 'wb') as f:
# cloudpickle.dump(model_, f)
# -
import slackweb
slack = slackweb.Slack(url="https://hooks.slack.com/services/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")
slack.notify(text="Training finished")
with open(output_dir / 'train_losses.pkl', 'wb') as f:
cloudpickle.dump(train_losses, f)
with open(output_dir / 'valid_losses.pkl', 'wb') as f:
cloudpickle.dump(valid_losses, f)
with open(output_dir / 'f1_micros.pkl', 'wb') as f:
cloudpickle.dump(f1_micros, f)
with open(output_dir / 'f1_macros.pkl', 'wb') as f:
cloudpickle.dump(f1_macros, f)
with open(output_dir / 'model_ResNeSt.pkl', 'rb') as f:
net = cloudpickle.load(f)
with open(output_dir / 'train_losses.pkl', 'rb') as f:
train_losses = cloudpickle.load(f)
with open(output_dir / 'valid_losses.pkl', 'rb') as f:
valid_losses = cloudpickle.load(f)
with open(output_dir / 'f1_micros.pkl', 'rb') as f:
f1_micros = cloudpickle.load(f)
with open(output_dir / 'f1_macros.pkl', 'rb') as f:
f1_macros = cloudpickle.load(f)
plt.figure(figsize=(6,5), dpi= 80)
plt.title('loss')
plt.plot(train_losses, color='tab:red', label='train')
plt.plot(valid_losses, color='tab:blue', label='valid')
plt.minorticks_on()
plt.grid(b=True, which='major', color='#666666', linestyle='-')
plt.grid(b=True, which='minor', color='#999999', linestyle='-', alpha=0.2)
plt.legend()
plt.show()
plt.figure(figsize=(3,3), dpi= 80)
plt.title('f1')
plt.plot(f1_micros, color='tab:red', label='micro', linewidth=1, marker='x')
plt.plot(f1_macros, color='tab:blue', label='macro', linewidth=1, marker='x')
plt.grid(b=True, which='major', color='#666666', linestyle='-')
plt.minorticks_on()
plt.grid(b=True, which='minor', color='#999999', linestyle='-', alpha=0.2)
plt.legend()
plt.show()
.ipynb_checkpoints/0917_training_template_ResNeSt_265_ff1010-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
#
# # Start-to-Finish validation of $\psi_4$ in curvilinear coordinates against Cartesian formulation provided by [<NAME>'s Weyl scalars & invariants in Cartesian coordinates module](../../Tutorial-WeylScalarsInvariants-Cartesian.ipynb)
#
# ### Author: <NAME>
#
# <font color='blue'>**This module exists as a modification of [the NRPy+ $\psi_4$ in curvilinear coordinates module](../../Tutorial-Psi4.ipynb), writing all spacetime quantities in terms of ADM variables and their derivatives directly.**</font>
#
# ## A Note on Notation
#
# As is standard in NRPy+,
#
# * Greek indices range from 0 to 3, inclusive, with the zeroth component denoting the temporal (time) component.
# * Latin indices range from 0 to 2, inclusive, with the zeroth component denoting the first spatial component.
#
# As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial module).
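# This offset convention can be made concrete with a small sketch (hypothetical variable names, not part of the notebook's code):

```python
# A four-vector indexed by Greek mu = 0..3 and a three-vector indexed by Latin i = 0..2.
u4 = [10, 1, 2, 3]   # u^mu: u4[0] is the temporal component
v3 = [4, 5, 6]       # v^i: purely spatial

# A mixed expression such as v_i u^i (summing over spatial components only)
# increments the Latin index by one when reading from the four-vector:
contraction = sum(v3[i] * u4[i + 1] for i in range(3))
```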
#
# <a id='toc'></a>
#
# # Introduction, Table of Contents
# $$\label{toc}$$
#
# This module constructs $\psi_4$, a quantity that is immensely useful when extracting gravitational wave content from a numerical relativity simulation. $\psi_4$ is related to the gravitational wave strain via
#
# $$
# \psi_4 = \ddot{h}_+ - i \ddot{h}_\times.
# $$
#
# We construct $\psi_4$ from the standard ADM spatial metric $\gamma_{ij}$ and extrinsic curvature $K_{ij}$, and their derivatives. The full expression is given by Eq. 5.1 in [<NAME> (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf):
#
# \begin{align}
# \psi_4 &= \left[ {R}_{ijkl}+2K_{i[k}K_{l]j}\right]
# {n}^i\bar{m}^j{n}^k\bar{m}^l \\
# & -8\left[ K_{j[k,l]}+{\Gamma }_{j[k}^pK_{l]p}\right]
# {n}^{[0}\bar{m}^{j]}{n}^k\bar{m}^l \\
# & +4\left[ {R}_{jl}-K_{jp}K_l^p+KK_{jl}\right]
# {n}^{[0}\bar{m}^{j]}{n}^{[0}\bar{m}^{l]},
# \end{align}
#
# Note that $\psi_4$ is complex, with the imaginary part originating from the tetrad vector $m^\mu$. This module does not specify a tetrad; instead it only constructs the above expression leaving $m^\mu$ and $n^\mu$ unspecified. The [next module on tetrads defines these tetrad quantities](Tutorial-Psi4_tetrads.ipynb) (currently only a quasi-Kinnersley tetrad is supported).
#
# **This tutorial module is organized as follows:**
#
# 1. [Step 1](#initializenrpy): Initialize needed NRPy+ modules
# 1. [Step 2](#riemann): Constructing the 3-Riemann tensor $R_{ik\ell m}$
# 1. [Step 3](#termone): Constructing the rank-4 tensor in Term 1 of $\psi_4$: $R_{ijkl} + 2 K_{i[k} K_{l]j}$
# 1. [Step 4](#termtwo): Constructing the rank-3 tensor in Term 2 of $\psi_4$: $-8 \left(K_{j[k,l]} + \Gamma^{p}_{j[k} K_{l]p} \right)$
# 1. [Step 5](#termthree): Constructing the rank-2 tensor in term 3 of $\psi_4$: $+4 \left(R_{jl} - K_{jp} K^p_l + K K_{jl} \right)$
# 1. [Step 6](#psifour): Constructing $\psi_4$ through contractions of the above terms with arbitrary tetrad vectors $n^\mu$ and $m^\mu$
# 1. [Step 7](#code_validation): Code Validation against BSSN.Psi4 NRPy+ module
# 1. [Step 8](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF
# <a id='initializenrpy'></a>
#
# # Step 1: Initialize core NRPy+ modules \[Back to [top](#toc)\]
# $$\label{initializenrpy}$$
#
# Let's start by importing all the needed modules from NRPy+:
# +
# Step 1.a: import all needed modules from NRPy+:
import sympy as sp
from outputC import *
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
import reference_metric as rfm
# Step 1.b: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
# Step 1.c: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.d: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
# Step 1.e: Import all ADM quantities as written in terms of BSSN quantities
# import BSSN.ADM_in_terms_of_BSSN as AB
# AB.ADM_in_terms_of_BSSN()
# -
# <a id='riemann'></a>
#
# # Step 2: Constructing the 3-Riemann tensor $R_{ik\ell m}$ \[Back to [top](#toc)\]
# $$\label{riemann}$$
#
# Analogously to Christoffel symbols, the Riemann tensor is a measure of the curvature of an $N$-dimensional manifold. Thus the 3-Riemann tensor is not simply a projection of the 4-Riemann tensor (see e.g., Eq. 2.7 of [Campanelli *et al* (1998)](https://arxiv.org/pdf/gr-qc/9803058.pdf) for the relation between 4-Riemann and 3-Riemann), as $N$-dimensional Riemann tensors are meant to define a notion of curvature given only the associated $N$-dimensional metric.
#
# So, given the ADM 3-metric, the Riemann tensor in arbitrary dimension is given by the 3-dimensional version of Eq. 1.19 in Baumgarte & Shapiro's *Numerical Relativity*. I.e.,
#
# $$
# R^i_{jkl} = \partial_k \Gamma^{i}_{jl} - \partial_l \Gamma^{i}_{jk} + \Gamma^i_{mk} \Gamma^m_{jl} - \Gamma^{i}_{ml} \Gamma^{m}_{jk},
# $$
# where $\Gamma^i_{jk}$ is the Christoffel symbol associated with the 3-metric $\gamma_{ij}$:
#
# $$
# \Gamma^l_{ij} = \frac{1}{2} \gamma^{lk} \left(\gamma_{ki,j} + \gamma_{kj,i} - \gamma_{ij,k} \right)
# $$
#
# Notice that this equation for the Riemann tensor is equivalent to the equation given in the Wikipedia article on [Formulas in Riemannian geometry](https://en.wikipedia.org/w/index.php?title=List_of_formulas_in_Riemannian_geometry&oldid=882667524):
#
# $$
# R^\ell{}_{ijk}=
# \partial_j \Gamma^\ell{}_{ik}-\partial_k\Gamma^\ell{}_{ij}
# +\Gamma^\ell{}_{js}\Gamma_{ik}^s-\Gamma^\ell{}_{ks}\Gamma^s{}_{ij},
# $$
# with the replacements $i\to \ell$, $j\to i$, $k\to j$, $l\to k$, and $s\to m$. Wikipedia also provides a simpler form in terms of second derivatives of the three-metric itself (using the definition of the Christoffel symbol), so that we need not define derivatives of the Christoffel symbols:
#
# $$
# R_{ik\ell m}=\frac{1}{2}\left(
# \gamma_{im,k\ell}
# + \gamma_{k\ell,im}
# - \gamma_{i\ell,km}
# - \gamma_{km,i\ell} \right)
# +\gamma_{np} \left(
# \Gamma^n{}_{k\ell} \Gamma^p{}_{im} -
# \Gamma^n{}_{km} \Gamma^p{}_{i\ell} \right).
# $$
#
# First we construct the term on the left:
# +
# Step 2: Construct the (rank-4) Riemann curvature tensor associated with the ADM 3-metric:
RDDDD = ixp.zerorank4()
gammaDD = ixp.register_gridfunctions_for_single_rank2("AUX","gammaDD", "sym01") # The AUX or EVOL designation is *not*
# used in diagnostic modules.
kDD = ixp.register_gridfunctions_for_single_rank2("AUX","kDD", "sym01")
gammaDD_dD = ixp.declarerank3("gammaDD_dD","sym01")
gammaDD_dDD = ixp.declarerank4("gammaDD_dDD","sym01_sym23")
# gammaDD_dDD = AB.gammaDD_dDD
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
RDDDD[i][k][l][m] = sp.Rational(1,2) * \
(gammaDD_dDD[i][m][k][l] + gammaDD_dDD[k][l][i][m] - gammaDD_dDD[i][l][k][m] - gammaDD_dDD[k][m][i][l])
# -
# ... then we add the term on the right:
# +
# ... then we add the term on the right:
# Define the Christoffel symbols
GammaUDD = ixp.zerorank3(DIM)
gammaUU,gammadetdummy = ixp.symm_matrix_inverter3x3(gammaDD)
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
GammaUDD[i][k][l] += (sp.Rational(1,2))*gammaUU[i][m]*\
(gammaDD_dD[m][k][l] + gammaDD_dD[m][l][k] - gammaDD_dD[k][l][m])
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
for n in range(DIM):
for p in range(DIM):
RDDDD[i][k][l][m] += gammaDD[n][p] * \
(GammaUDD[n][k][l]*GammaUDD[p][i][m] - GammaUDD[n][k][m]*GammaUDD[p][i][l])
# -
# <a id='termone'></a>
#
# # Step 3: Constructing the rank-4 tensor in Term 1 of $\psi_4$: $R_{ijkl} + 2 K_{i[k} K_{l]j}$ \[Back to [top](#toc)\]
# $$\label{termone}$$
#
# Following Eq. 5.1 in [Baker, <NAME> (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf), the rank-4 tensor in the first term of $\psi_4$ is given by
#
# $$
# R_{ijkl} + 2 K_{i[k} K_{l]j} = R_{ijkl} + K_{ik} K_{lj} - K_{il} K_{kj}
# $$
# +
# Step 3: Construct the (rank-4) tensor in term 1 of psi_4 (referring to Eq 5.1 in
# Baker, <NAME> (2001); https://arxiv.org/pdf/gr-qc/0104063.pdf
rank4term1 = ixp.zerorank4()
# kDD = AB.kDD
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
rank4term1[i][j][k][l] = RDDDD[i][j][k][l] + kDD[i][k]*kDD[l][j] - kDD[i][l]*kDD[k][j]
# -
# <a id='termtwo'></a>
#
# # Step 4: Constructing the rank-3 tensor in Term 2 of $\psi_4$: $-8 \left(K_{j[k,l]} + \Gamma^{p}_{j[k} K_{l]p} \right)$ \[Back to [top](#toc)\]
# $$\label{termtwo}$$
#
# Following Eq. 5.1 in [Baker, <NAME> (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf), the rank-3 tensor in the second term of $\psi_4$ is given by
#
# $$
# -8 \left(K_{j[k,l]} + \Gamma^{p}_{j[k} K_{l]p} \right)
# $$
# First let's construct the first term in this sum: $K_{j[k,l]} = \frac{1}{2} (K_{jk,l} - K_{jl,k})$:
# +
# Step 4: Construct the (rank-3) tensor in term 2 of psi_4 (referring to Eq 5.1 in
# <NAME> (2001); https://arxiv.org/pdf/gr-qc/0104063.pdf
rank3term2 = ixp.zerorank3()
# kDD_dD = AB.kDD_dD
kDD_dD = ixp.declarerank3("kDD_dD","sym01")
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
rank3term2[j][k][l] = sp.Rational(1,2)*(kDD_dD[j][k][l] - kDD_dD[j][l][k])
# -
# ... then we construct the second term in this sum: $\Gamma^{p}_{j[k} K_{l]p} = \frac{1}{2} (\Gamma^{p}_{jk} K_{lp}-\Gamma^{p}_{jl} K_{kp})$:
# ... then we construct the second term in this sum:
# \Gamma^{p}_{j[k} K_{l]p} = \frac{1}{2} (\Gamma^{p}_{jk} K_{lp}-\Gamma^{p}_{jl} K_{kp}):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
for p in range(DIM):
rank3term2[j][k][l] += sp.Rational(1,2)*(GammaUDD[p][j][k]*kDD[l][p] - GammaUDD[p][j][l]*kDD[k][p])
# Finally, we multiply the term by $-8$:
# Finally, we multiply the term by $-8$:
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
rank3term2[j][k][l] *= sp.sympify(-8)
# <a id='termthree'></a>
#
# # Step 5: Constructing the rank-2 tensor in term 3 of $\psi_4$: $+4 \left(R_{jl} - K_{jp} K^p_l + K K_{jl} \right)$ \[Back to [top](#toc)\]
# $$\label{termthree}$$
#
# Following Eq. 5.1 in [<NAME> (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf), the rank-2 tensor in the third term of $\psi_4$ is given by
#
# $$
# +4 \left(R_{jl} - K_{jp} K^p_l + K K_{jl} \right),
# $$
# where
# \begin{align}
# R_{jl} &= R^i_{jil} \\
# &= \gamma^{im} R_{ijml} \\
# K &= K^i_i \\
# &= \gamma^{im} K_{im}
# \end{align}
#
# Let's build the components of this term: $R_{jl}$, $K^p_l$, and $K$, as defined above:
# +
# Step 5: Construct the (rank-2) tensor in term 3 of psi_4 (referring to Eq 5.1 in
# Baker, <NAME> (2001); https://arxiv.org/pdf/gr-qc/0104063.pdf
# Step 5.1: Construct 3-Ricci tensor R_{ij} = gamma^{im} R_{ijml}
RDD = ixp.zerorank2()
for j in range(DIM):
for l in range(DIM):
for i in range(DIM):
for m in range(DIM):
RDD[j][l] += gammaUU[i][m]*RDDDD[i][j][m][l]
# Step 5.2: Construct K^p_l = gamma^{pi} K_{il}
KUD = ixp.zerorank2()
for p in range(DIM):
for l in range(DIM):
for i in range(DIM):
KUD[p][l] += gammaUU[p][i]*kDD[i][l]
# Step 5.3: Construct trK = gamma^{ij} K_{ij}
trK = sp.sympify(0)
for i in range(DIM):
for j in range(DIM):
trK += gammaUU[i][j]*kDD[i][j]
# -
# Next we put these terms together to construct the entire term:
# $$
# +4 \left(R_{jl} - K_{jp} K^p_l + K K_{jl} \right),
# $$
# Next we put these terms together to construct the entire term in parentheses:
# +4 (R_{jl} - K_{jp} K^p_l + K K_{jl})
rank2term3 = ixp.zerorank2()
for j in range(DIM):
for l in range(DIM):
rank2term3[j][l] = RDD[j][l] + trK*kDD[j][l]
for p in range(DIM):
rank2term3[j][l] += - kDD[j][p]*KUD[p][l]
# Finally we multiply by +4:
for j in range(DIM):
for l in range(DIM):
rank2term3[j][l] *= sp.sympify(4)
# <a id='psifour'></a>
#
# # Step 6: Constructing $\psi_4$ through contractions of the above terms with arbitrary tetrad vectors $m^\mu$ and $n^\mu$ \[Back to [top](#toc)\]
# $$\label{psifour}$$
#
# Eq. 5.1 in [Baker, <NAME> (2001)](https://arxiv.org/pdf/gr-qc/0104063.pdf) writes $\psi_4$ (which is complex) as the contraction of each of the above terms with products of tetrad vectors:
#
# \begin{align}
# \psi_4 &= \left[ {R}_{ijkl}+2K_{i[k}K_{l]j}\right]
# {n}^i\bar{m}^j{n}^k\bar{m}^l \\
# & -8\left[ K_{j[k,l]}+{\Gamma }_{j[k}^pK_{l]p}\right]
# {n}^{[0}\bar{m}^{j]}{n}^k\bar{m}^l \\
# & +4\left[ {R}_{jl}-K_{jp}K_l^p+KK_{jl}\right]
# {n}^{[0}\bar{m}^{j]}{n}^{[0}\bar{m}^{l]},
# \end{align}
# where $\bar{m}^\mu$ is the complex conjugate of $m^\mu$, and $n^\mu$ is real. The third term is given by
# \begin{align}
# {n}^{[0}\bar{m}^{j]}{n}^{[0}\bar{m}^{l]}
# &= \frac{1}{2}({n}^{0}\bar{m}^{j} - {n}^{j}\bar{m}^{0} )\frac{1}{2}({n}^{0}\bar{m}^{l} - {n}^{l}\bar{m}^{0} )\\
# &= \frac{1}{4}({n}^{0}\bar{m}^{j} - {n}^{j}\bar{m}^{0} )({n}^{0}\bar{m}^{l} - {n}^{l}\bar{m}^{0} )\\
# &= \frac{1}{4}({n}^{0}\bar{m}^{j}{n}^{0}\bar{m}^{l} - {n}^{j}\bar{m}^{0}{n}^{0}\bar{m}^{l} - {n}^{0}\bar{m}^{j}{n}^{l}\bar{m}^{0} + {n}^{j}\bar{m}^{0}{n}^{l}\bar{m}^{0})
# \end{align}
#
# Only $m^\mu$ is complex, so we can separate the real and imaginary parts of $\psi_4$ by hand, defining $M^\mu$ to be the real part of $m^\mu$ and $\mathcal{M}^\mu$ its imaginary part, so that $\bar{m}^\mu = M^\mu - i \mathcal{M}^\mu$. All of the above products are of the form ${n}^\mu\bar{m}^\nu{n}^\eta\bar{m}^\delta$, so let's evaluate the real and imaginary parts of this product once, for all such terms:
#
# \begin{align}
# {n}^\mu\bar{m}^\nu{n}^\eta\bar{m}^\delta
# &= {n}^\mu(M^\nu - i \mathcal{M}^\nu){n}^\eta(M^\delta - i \mathcal{M}^\delta) \\
# &= \left({n}^\mu M^\nu {n}^\eta M^\delta -
# {n}^\mu \mathcal{M}^\nu {n}^\eta \mathcal{M}^\delta \right)+
# i \left(
# -{n}^\mu M^\nu {n}^\eta \mathcal{M}^\delta
# -{n}^\mu \mathcal{M}^\nu {n}^\eta M^\delta
# \right)
# \end{align}
#
#
# +
# mre4U = ixp.declarerank1("mre4U",DIM=4)
# mim4U = ixp.declarerank1("mim4U",DIM=4)
# n4U = ixp.declarerank1("n4U" ,DIM=4)
import BSSN.Psi4_tetrads as P4t
P4t.Psi4_tetrads()
mre4U = P4t.mre4U
mim4U = P4t.mim4U
n4U = P4t.n4U
def tetrad_product__Real_psi4(n,Mre,Mim, mu,nu,eta,delta):
return +n[mu]*Mre[nu]*n[eta]*Mre[delta] - n[mu]*Mim[nu]*n[eta]*Mim[delta]
def tetrad_product__Imag_psi4(n,Mre,Mim, mu,nu,eta,delta):
return -n[mu]*Mre[nu]*n[eta]*Mim[delta] - n[mu]*Mim[nu]*n[eta]*Mre[delta]
psi4_re = sp.sympify(0)
psi4_im = sp.sympify(0)
# First term:
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
psi4_re += rank4term1[i][j][k][l]*tetrad_product__Real_psi4(n4U,mre4U,mim4U, i+1,j+1,k+1,l+1)
psi4_im += rank4term1[i][j][k][l]*tetrad_product__Imag_psi4(n4U,mre4U,mim4U, i+1,j+1,k+1,l+1)
# Second term:
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
psi4_re += rank3term2[j][k][l] * \
sp.Rational(1,2)*(+tetrad_product__Real_psi4(n4U,mre4U,mim4U, 0,j+1,k+1,l+1)
-tetrad_product__Real_psi4(n4U,mre4U,mim4U, j+1,0,k+1,l+1) )
psi4_im += rank3term2[j][k][l] * \
sp.Rational(1,2)*(+tetrad_product__Imag_psi4(n4U,mre4U,mim4U, 0,j+1,k+1,l+1)
-tetrad_product__Imag_psi4(n4U,mre4U,mim4U, j+1,0,k+1,l+1) )
# Third term:
for j in range(DIM):
for l in range(DIM):
psi4_re += rank2term3[j][l] * \
(sp.Rational(1,4)*(+tetrad_product__Real_psi4(n4U,mre4U,mim4U, 0,j+1,0,l+1)
-tetrad_product__Real_psi4(n4U,mre4U,mim4U, j+1,0,0,l+1)
-tetrad_product__Real_psi4(n4U,mre4U,mim4U, 0,j+1,l+1,0)
+tetrad_product__Real_psi4(n4U,mre4U,mim4U, j+1,0,l+1,0)))
psi4_im += rank2term3[j][l] * \
(sp.Rational(1,4)*(+tetrad_product__Imag_psi4(n4U,mre4U,mim4U, 0,j+1,0,l+1)
-tetrad_product__Imag_psi4(n4U,mre4U,mim4U, j+1,0,0,l+1)
-tetrad_product__Imag_psi4(n4U,mre4U,mim4U, 0,j+1,l+1,0)
+tetrad_product__Imag_psi4(n4U,mre4U,mim4U, j+1,0,l+1,0)))
# -
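# The real/imaginary split of ${n}^\mu\bar{m}^\nu{n}^\eta\bar{m}^\delta$ derived above can be spot-checked on scalar symbols (a minimal standalone sketch; each component of the 4-vector products reduces to this form):

```python
import sympy as sp

# One component each of n^mu (real) and mbar^mu = M - i*Mcal
n, M, Mcal = sp.symbols('n M Mcal', real=True)
prod = sp.expand(n*(M - sp.I*Mcal)*n*(M - sp.I*Mcal))

# Real and imaginary parts predicted by the derivation above
re_expected = n*M*n*M - n*Mcal*n*Mcal
im_expected = -n*M*n*Mcal - n*Mcal*n*M
assert sp.simplify(sp.re(prod) - re_expected) == 0
assert sp.simplify(sp.im(prod) - im_expected) == 0
```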
# <a id='code_validation'></a>
#
# # Step 7: Code validation against BSSN.Psi4 NRPy+ module \[Back to [top](#toc)\]
# $$\label{code_validation}$$
#
# As a code validation check, we verify agreement in the SymPy expressions for $\psi_4$ between
# 1. this tutorial and
# 2. the NRPy+ BSSN.Psi4 module.
#
# By default, we compare all quantities in Spherical coordinates, though other coordinate systems may be chosen.
# +
outCparams = "preindent=1,outCfileaccess=w,outCverbose=False,includebraces=False"
print("STARTING NEW")
fin.FD_outputC("Psi4_new.h", lhrh(lhs="psi4_imag", rhs=psi4_im), outCparams)
print("FINISHED NEW")
gri.glb_gridfcs_list = []
import WeylScal4NRPy.WeylScalars_Cartesian as W4
W4.WeylScalars_Cartesian()
print("STARTING OLD")
fin.FD_outputC("Psi4_old.h", lhrh(lhs="psi4_imag", rhs=W4.psi4i), outCparams)
print("FINISHED OLD")
# print("FullSimplify["+str(sp.mathematica_code(psi4_re-W4.psi4r))+"]")
# with open("math.txt","w") as file:
# file.write("FullSimplify["+str(sp.mathematica_code(psi4_re-W4.psi4r))+"]")
# # Call the BSSN_RHSs() function from within the
# # BSSN/BSSN_RHSs.py module,
# # which should do exactly the same as in Steps 1-16 above.
# print("vvv Ignore the minor warnings below. vvv")
# import BSSN.Psi4 as BP4
# BP4.Psi4()
# print("^^^ Ignore the minor warnings above. ^^^\n")
# print("Consistency check between this tutorial and BSSN.Psi4 NRPy+ module: ALL SHOULD BE ZERO.")
# print("psi4_im - BP4.psi4_im = " + str(psi4_im - BP4.psi4_im))
# print("psi4_re - BP4.psi4_re = " + str(psi4_re - BP4.psi4_re))
# -
# !gcc -O2 psi4_tester.c -o psi4_tester -lm
# !./psi4_tester 4 4 4
# <a id='latex_pdf_output'></a>
#
# # Step 8: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
# $$\label{latex_pdf_output}$$
# !jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-Psi4.ipynb
# !pdflatex -interaction=batchmode Tutorial-Psi4.tex
# !pdflatex -interaction=batchmode Tutorial-Psi4.tex
# !pdflatex -interaction=batchmode Tutorial-Psi4.tex
# !rm -f Tut*.out Tut*.aux Tut*.log
|
BSSN/Psi4Cartesianvalidation/Tutorial-Psi4-Cartesian_validation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.009986, "end_time": "2021-12-28T20:17:20.696532", "exception": false, "start_time": "2021-12-28T20:17:20.686546", "status": "completed"} tags=[]
# # "Dynamic Covid-19 Tracker"
#
# - badges: false
# - author: <NAME>
# + [markdown] papermill={"duration": 0.005138, "end_time": "2021-12-28T20:17:20.708806", "exception": false, "start_time": "2021-12-28T20:17:20.703668", "status": "completed"} tags=[]
# ##### <center>Hello! This is a dynamic version of the dashboard; it updates once daily!</center>
# ##### <center>Data source is https://www.covid19india.org/ </center>
# + papermill={"duration": 1.828886, "end_time": "2021-12-28T20:17:22.543108", "exception": false, "start_time": "2021-12-28T20:17:20.714222", "status": "completed"} tags=[]
#collapse
from datetime import datetime
import pandas as pd
import numpy as np
import requests
import json
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib as mpl
from IPython.display import display, HTML
import pytz
# %matplotlib inline
dynamic_df = pd.read_csv("https://api.covid19india.org/csv/latest/state_wise_daily.csv")
dynamic_df.head()
ddf = dynamic_df[(dynamic_df.Status == "Confirmed")]
ddf
ddf1 = ddf.drop(columns = ["Status"])
ddf2 = dynamic_df[(dynamic_df.Status == "Deceased")]
ddf2 = ddf2.drop(columns = ["Status"])
ddf1["Date"] = ddf1["Date"].astype('datetime64[ns]')
update = dynamic_df.iloc[-1,0]
cases = ddf1.TT.sum()
new = ddf1.iloc[-1,1]
deaths = ddf2.TT.sum()
dnew = ddf2.iloc[-1,1]
overview = '''
<!-- ####### HTML!! #########-->
<h1 style="color: #5e9ca0; text-align: center;">India</h1>
<p style="text-align: center;">Last update: <strong>{update}</strong></p>
<p style="text-align: center;">Confirmed cases:</p>
<p style="text-align: center;font-size:24px;">{cases} (<span style="color: #ff0000;">+{new}</span>)</p>
<p style="text-align: center;">Confirmed deaths:</p>
<p style="text-align: center;font-size:24px;">{deaths} (<span style="color: #ff0000;">+{dnew}</span>)</p>
'''
html = HTML(overview.format(update=update, cases=cases,new=new,deaths=deaths,dnew=dnew))
display(html)
# + papermill={"duration": 0.035658, "end_time": "2021-12-28T20:17:22.585018", "exception": false, "start_time": "2021-12-28T20:17:22.549360", "status": "completed"} tags=[]
#hide_input
tz = pytz.timezone('Asia/Kolkata')
#now = datetime.now().time() # time object
now = datetime.now()
now = now.astimezone(tz)
print("This dashboard is last updated at (IST) =", now.strftime("%Y-%m-%d %H:%M"))
# + papermill={"duration": 0.019539, "end_time": "2021-12-28T20:17:22.611289", "exception": false, "start_time": "2021-12-28T20:17:22.591750", "status": "completed"} tags=[]
#collapse
ch_total = ddf1.CH.sum()
ch_new = ddf1.iloc[-1,7]
mh_total = ddf1.MH.sum()
mh_new = ddf1.iloc[-1,23]
dl_total = ddf1.DL.sum()
dl_new = ddf1.iloc[-1,11]
firstdata = '''
<!-- ####### HTML!! #########-->
<p style="text-align: center;font-size:18px;">--Important Places--</p>
<p style="text-align: left;font-size:18px;">Total Cases (<span style="color: #ff0000;">New Cases</span>)</p>
<p style="text-align: left;font-size:18px;">Chandigarh (Hometown): {ch_total} (<span style="color: #ff0000;">+{ch_new}</span>)</p>
<p style="text-align: left;font-size:18px;">Delhi (Second Home): {dl_total} (<span style="color: #ff0000;">+{dl_new}</span>)</p>
<p style="text-align: left;font-size:18px;">Maharashtra (Just Because..): {mh_total} (<span style="color: #ff0000;">+{mh_new}</span>)</p>
'''
html = HTML(firstdata.format(ch_total=ch_total, ch_new = ch_new, mh_total = mh_total, mh_new = mh_new, dl_total = dl_total, dl_new = dl_new ))
display(html)
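# The `iloc[-1, 7]`-style lookups above are fragile: they silently break if the CSV's column order ever changes. A more robust variant (a sketch on made-up numbers, not the live data) selects by column name instead:

```python
import pandas as pd

# Hypothetical two-day slice shaped like the state-wise daily table
df = pd.DataFrame({"Date": ["01-Jan-21", "02-Jan-21"],
                   "CH": [10, 12], "DL": [100, 110], "MH": [500, 540]})
ch_total = df["CH"].sum()    # cumulative count, independent of column position
ch_new = df["CH"].iloc[-1]   # latest daily count
assert (ch_total, ch_new) == (22, 12)
```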
# + papermill={"duration": 33.681468, "end_time": "2021-12-28T20:17:56.299629", "exception": false, "start_time": "2021-12-28T20:17:22.618161", "status": "completed"} tags=[]
#collapse
n = 10
st = ["TT", "MH", "TN", "DL", "KA", "UP", "BR", "WB", "TG", "CH"]
st_name = ["Daily Count for India", "Maharashtra", "Tamil Nadu", "Delhi", "Karnataka", "Uttar Pradesh", "Bihar", "West Bengal", "Telangana", "Chandigarh (My hometown)"]
ax = []
fig = plt.figure(figsize = (16,30))
gs = fig.add_gridspec(n, 3)
for i in range(n):
ax1 = fig.add_subplot(gs[i, :])
ax1.bar(ddf1.Date,ddf1[st[i]],alpha=0.3,color='#007acc')
ax1.plot(ddf1.Date,ddf1[st[i]] , marker="o", color='#007acc')
ax1.xaxis.set_major_locator(mdates.WeekdayLocator())
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%b %d'))
ax1.text(0.02, 0.5,st_name[i], transform = ax1.transAxes, fontsize=25)
ax1.spines['right'].set_visible(False)
ax1.spines['top'].set_visible(False)
|
_notebooks/2020-08-06-dynamic.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
# %matplotlib inline
import matplotlib
import numpy as np
from collections import defaultdict
from lib.envs.blackjack import BlackjackEnv
from topics.mc.mc_methods import mc_off_policy_control_weighted
from topics.mc.common.policy import create_random_policy
from lib.plotting import plot_value_function
matplotlib.style.use('ggplot')
# -
env = BlackjackEnv()
random_policy = create_random_policy(env.action_space.n)
Q, policy = mc_off_policy_control_weighted(env, num_episodes=500000, behavior_policy=random_policy)
# For plotting: Create value function from action-value function
# by picking the best action at each state
V = defaultdict(float)
for state, action_values in Q.items():
action_value = np.max(action_values)
V[state] = action_value
plot_value_function(V, title="Optimal Value Function")
|
topics/mc/Off-Policy MC Control with Weighted Importance Sampling.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# Description:
# Given two strings S and T, determine whether they are equal after both are
# typed into empty text editors, and return the result.
# The character # represents a backspace.
# Note:
# Typing a backspace into empty text leaves the text empty.
#
# Example 1:
# Input: S = "ab#c", T = "ad#c"
# Output: true
# Explanation: both S and T become "ac".
#
# Example 2:
# Input: S = "ab##", T = "c#d#"
# Output: true
# Explanation: both S and T become "".
#
# Example 3:
# Input: S = "a##c", T = "#a#c"
# Output: true
# Explanation: both S and T become "c".
#
# Example 4:
# Input: S = "a#c", T = "b"
# Output: false
# Explanation: S becomes "c", but T remains "b".
#
# Constraints:
# 1. 1 <= S.length <= 200
# 2. 1 <= T.length <= 200
# 3. S and T consist only of lowercase letters and the character '#'.
# -
class Solution:
def backspaceCompare(self, S: str, T: str) -> bool:
s_stack = self.helper(S)
t_stack = self.helper(T)
print(s_stack, t_stack)
return s_stack == t_stack
def helper(self, s):
stack = []
for a in s:
if a == '#':
if stack:
stack.pop()
else:
stack.append(a)
return stack
solution = Solution()
solution.backspaceCompare(S = "a#c", T = "b")
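# Since this notebook lives in the Two Pointer folder, here is an alternative sketch (not the stack solution above): scan both strings from the end with two pointers, using O(1) extra space instead of two stacks.

```python
def backspace_compare_two_pointer(S: str, T: str) -> bool:
    def next_valid(s, i):
        # walk left from index i to the next character that survives backspacing
        skip = 0
        while i >= 0:
            if s[i] == '#':
                skip += 1
            elif skip > 0:
                skip -= 1
            else:
                return i
            i -= 1
        return -1

    i, j = len(S) - 1, len(T) - 1
    while True:
        i, j = next_valid(S, i), next_valid(T, j)
        if i < 0 and j < 0:
            return True   # both strings exhausted simultaneously
        if i < 0 or j < 0 or S[i] != T[j]:
            return False  # one exhausted early, or surviving chars differ
        i, j = i - 1, j - 1
```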
|
Two Pointer/1018/844. Backspace String Compare.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
'''
This notebook finds mappings between exonic parts and transcripts
for the new dataset.
'''
from csv import reader
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
import pickle
# +
data_dir = '../data/data_last'
COPDGene_Freeze1_RNAseq_exonicParts = os.path.join(data_dir, 'COPDGene_Freeze3_RNAseq_exonicParts.csv')
COPDGene_Freeze1_RNAseq_genes = os.path.join(data_dir, 'COPDGene_Freeze3_RNAseq_genes.csv')
COPDGene_Freeze1_RNAseq_transcripts = os.path.join(data_dir, 'COPDGene_Freeze3_RNAseq_transcripts.csv')
exonicParts = pd.read_csv(COPDGene_Freeze1_RNAseq_exonicParts)
genes = pd.read_csv(COPDGene_Freeze1_RNAseq_genes)
trans_tx = pd.read_csv(COPDGene_Freeze1_RNAseq_transcripts)
mapping_dir = '../data/data_last'
IDmapping = os.path.join(mapping_dir, 'HMG-2016-JH-00187_Huan_Fig1SupplTable2_IDmapping.csv')
useful_gene = pd.read_csv(IDmapping)
# -
# this part is to generate exon, iso, gene mapping
with open('./COPD/data/mapping_data_last/exon_list.pickle', 'rb') as f:
exon_list_loaded = pickle.load(f)
selected_exon = exonicParts.loc[exon_list_loaded][['exon_name','tx_name','gene_id']]
useful_gene_new = useful_gene.rename(columns={'ensembl_gene_id': 'gene_id'})
useful_gene_new.gene_id.astype(str)
useful_gene_new
gg = selected_exon.join(useful_gene_new[['gene_id', 'external_gene_name']].set_index('gene_id'), on='gene_id')
gg.to_csv('exon_tx_gene_mapping.csv', index=False)
len(set(list(gg['external_gene_name'])))
useful_gene.head(5)
exonicParts
genes
trans_tx
important_gene_list = useful_gene['ensembl_gene_id'].tolist()
important_gene_list
print(len(important_gene_list))
print(len(set(important_gene_list)))
tx_rows_to_get = trans_tx.loc[trans_tx['gene_id'].isin(important_gene_list)]
tx_list = tx_rows_to_get.index.values.tolist()
print(len(tx_list))
#pickle.dump(tx_list, open(os.path.join('../mapping_data_last', 'tx_list.pickle'),'wb'))
gene_rows_to_get = genes.loc[genes['gene_id'].isin(important_gene_list)]
gene_list = gene_rows_to_get.index.values.tolist()
print(len(gene_list))
#pickle.dump(gene_list, open(os.path.join('../mapping_data_last', 'gene_list.pickle'),'wb'))
rows_to_get = exonicParts.loc[exonicParts['gene_id'].isin(important_gene_list)]
important_exon_list = rows_to_get.index.values.tolist()
print(len(important_exon_list))
#pickle.dump(important_exon_list, open(os.path.join('../mapping_data_last', 'exon_list.pickle'),'wb'))
# here I want to save the indices
# +
# Then find the linking matrix
tx_list_small = rows_to_get['tx_name'].tolist()
transcripts_small = []
for transcript in tx_list_small:
transcripts_small.extend(transcript.split('|'))
transcripts_distinct_small = set(transcripts_small)
num_transcript = len(transcripts_distinct_small)
num_exon = len(important_exon_list)
linking_matrix = np.zeros((num_exon, num_transcript))
trans_id = {}
for i, item in enumerate(transcripts_distinct_small):
trans_id[item] = i
for i, transcript in enumerate(tx_list_small):
trans = transcript.split('|')
for tran in trans:
tran_id = trans_id[tran]
linking_matrix[i, tran_id] = 1
#pickle.dump(trans_id, open(os.path.join('../mapping_data_new', 'transcript_id.pickle'),'wb'))
#pickle.dump(linking_matrix, open(os.path.join('../mapping_data_new', 'linking_matrix.pickle'),'wb'))
# -
rows_to_get_tx = trans_tx.loc[trans_tx['transcript_id'].isin(trans_id.keys())]
important_tx_list_2 = rows_to_get_tx.index.values.tolist()
print(len(important_tx_list_2))
print(linking_matrix.shape)
# +
# map back to exonic ID
important_tx = rows_to_get_tx['transcript_id'].tolist()
important_tx_set = set(important_tx)
# find important exonic ID
# iterate over every exonic part (avoid hard-coding the row count)
index_list = []
for i in range(len(exonicParts)):
tx_temp = exonicParts['tx_name'][i].split('|')
tx_temp = set(tx_temp)
if important_tx_set & tx_temp:
index_list.append(i)
# +
# now get the real mapping matrix again
transcripts_distinct_small = important_tx_set
num_transcript = len(transcripts_distinct_small)
num_exon = len(index_list)
linking_matrix = np.zeros((num_exon, num_transcript))
trans_id = {}
for i, item in enumerate(transcripts_distinct_small):
trans_id[item] = i
for i,real_id in enumerate(index_list):
trans = exonicParts['tx_name'][real_id].split('|')
for tran in trans:
if tran in transcripts_distinct_small:
tran_id = trans_id[tran]
linking_matrix[i, tran_id] = 1
# -
# this is for isoform-exon mapping
print(linking_matrix.shape)
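# The construction above can be illustrated on a tiny made-up table (each exon row lists its transcripts pipe-delimited, and we build the 0/1 exon-by-transcript incidence matrix):

```python
import numpy as np

tx_name = ["T1|T2", "T2", "T1|T3"]  # hypothetical exon -> transcripts column
transcripts = sorted({t for row in tx_name for t in row.split("|")})
trans_id = {t: i for i, t in enumerate(transcripts)}

# One row per exon, one column per distinct transcript
M = np.zeros((len(tx_name), len(transcripts)))
for i, row in enumerate(tx_name):
    for t in row.split("|"):
        M[i, trans_id[t]] = 1
assert M.tolist() == [[1, 1, 0], [0, 1, 0], [1, 0, 1]]
```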
# save the new info somewhere
pickle.dump(important_tx_list_2, open(os.path.join('../mapping_data_last', 'tx_list.pickle'),'wb'))
pickle.dump(index_list, open(os.path.join('../mapping_data_last', 'exon_list.pickle'),'wb'))
pickle.dump(linking_matrix, open(os.path.join('../mapping_data_last', 'linking_matrix.pickle'),'wb'))
gg = exonicParts.loc[index_list]['gene_id'].values.astype(str)
gg_list = gene_rows_to_get['gene_id'].values.astype(str)
inverse_dict = {gg_list[i]:i for i in range(len(gg_list))}
eg_linking_matrix = np.zeros((len(index_list), len(important_gene_list)))
for i, g in enumerate(gg):
if g in inverse_dict:
eg_linking_matrix[i, inverse_dict[g]] = 1
eg_linking_matrix.shape
# this is for gene-exon mapping
pickle.dump(eg_linking_matrix, open(os.path.join('../data/mapping_data_last', 'eg_linking_matrix.pickle'),'wb'))
|
generate_mapping.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Implementation of Off-Policy MC Control with Weighted Importance Sampling
#
# Follows Denny Britz's RL tutorial: https://github.com/dennybritz/reinforcement-learning
#
# <NAME>, 2018
# +
# %matplotlib inline
import gym
import matplotlib
import numpy as np
import sys
from collections import defaultdict
if "../" not in sys.path:
sys.path.append("../")
from lib.envs.blackjack import BlackjackEnv
from lib import plotting
matplotlib.style.use('ggplot')
# -
env = BlackjackEnv()
def create_random_policy(nA):
"""
Creates a random policy function.
Args:
nA: Number of actions in the environment.
Returns:
A function that takes an observation as input and returns a vector
of action probabilities
"""
A = np.ones(nA, dtype=float) / nA
def policy_fn(observation):
return A
return policy_fn
def create_greedy_policy(Q):
"""
Creates a greedy policy based on Q values.
Args:
Q: A dictionary that maps from state -> action values
Returns:
A function that takes an observation as input and returns a vector
of action probabilities.
"""
def policy_fn(observation):
nA = len(Q[observation])
greedy_A = np.argmax(Q[observation])
# greedy policy
action_probs = np.zeros(nA)
action_probs[greedy_A] = 1.0
return action_probs
return policy_fn
def mc_control_importance_sampling(env, num_episodes, behavior_policy, discount_factor=1.0):
"""
Off-policy Monte Carlo control using weighted importance sampling.
Finds an optimal greedy policy.
Args:
env: OpenAI gym environment.
num_episodes: Number of episodes to sample.
behavior_policy: The behavior to follow while generating episodes.
A function that given an observation returns a vector of probabilities for each action.
discount_factor: Gamma discount factor.
Returns:
A tuple (Q, policy).
Q is a dictionary mapping state -> action values.
policy is a function that takes an observation as an argument and returns
action probabilities. This is the optimal greedy policy.
"""
# The final action-value function.
# A dictionary that maps state -> action values
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# A dictionary that maps state -> action importance weights
C = defaultdict(lambda: np.zeros(env.action_space.n))
# Our greedily policy we want to learn
target_policy = create_greedy_policy(Q)
for i_episode in range(1,num_episodes+1):
# monitor progress
if i_episode % 1000 == 0:
print ('\rEpisode {}/{}'.format(i_episode, num_episodes), end="")
sys.stdout.flush()
# generate an episode with the behavior policy
state = env.reset()
episode = []
for t in range(100):
action_probs = behavior_policy(state)
action = np.random.choice(np.arange(len(action_probs)), p=action_probs)
# take a step in env
next_state, reward, done, _ = env.step(action)
episode.append((state, action, reward))
if done: break
state = next_state
# evaluate and generate target policy
W = 1.0
G = 0.0
for s,a,r in episode[::-1]:
G = discount_factor * G + r
C[s][a] += W
Q[s][a] += (W / C[s][a]) * (G - Q[s][a])
# update target policy
target_policy = create_greedy_policy(Q)
if a != np.argmax(target_policy(s)):
break
# update weight ratios of different policies
W *= 1 / behavior_policy(s)[a]
return Q, target_policy
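# To build intuition for the `W` update inside the loop above, here is a tiny standalone sketch (a hypothetical one-step bandit, not the Blackjack environment): weighted importance sampling recovers the target policy's expected reward from samples drawn under the behavior policy.

```python
import numpy as np

rng = np.random.default_rng(0)
b = np.array([0.5, 0.5])        # behavior policy over two actions
pi = np.array([1.0, 0.0])       # greedy target policy: always action 0
rewards = np.array([1.0, 0.0])  # deterministic reward per action

num, den = 0.0, 0.0
for _ in range(1000):
    a = rng.choice(2, p=b)
    w = pi[a] / b[a]            # importance weight pi(a)/b(a), cf. W above
    num += w * rewards[a]
    den += w
estimate = num / den            # weighted importance-sampling estimate of E_pi[R]
assert abs(estimate - 1.0) < 1e-12
```

Because the estimate normalizes by the accumulated weights (like dividing by `C[s][a]` above), samples the target policy would never take contribute zero weight rather than noise.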
random_policy = create_random_policy(env.action_space.n)
Q, policy = mc_control_importance_sampling(env, num_episodes=500000, behavior_policy=random_policy)
# For plotting: Create value function from action-value function
# by picking the best action at each state
V = defaultdict(float)
for state, action_values in Q.items():
action_value = np.max(action_values)
V[state] = action_value
plotting.plot_value_function(V, title="Optimal Value Function")
|
MC/Off-Policy MC Control with Weighted Importance Sampling.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Content with notebooks
#
# You can also create content with Jupyter Notebooks. This means that you can include
# code blocks and their outputs in your book.
#
# ## Markdown + notebooks
#
# As it is markdown, you can embed images, HTML, etc into your posts!
#
# 
#
# You can also $add_{math}$ and
#
# $$
# math^{blocks}
# $$
#
# or
#
# $$
# \begin{align*}
# \mbox{mean} la_{tex} \\ \\
# math blocks
# \end{align*}
# $$
#
# But make sure you \$Escape \$your \$dollar signs \$you want to keep!
#
# ## MyST markdown
#
# MyST markdown works in Jupyter Notebooks as well. For more information about MyST markdown, check
# out {doc}`syntax`, or see [the MyST markdown documentation](https://myst-parser.readthedocs.io/en/latest/).
#
# ## Code blocks and outputs
#
# Jupyter Book will also embed your code blocks and output in your book.
# For example, here's some sample Matplotlib code:
from matplotlib import rcParams, cycler
import matplotlib.pyplot as plt
import numpy as np
plt.ion()
# +
# Fixing random state for reproducibility
np.random.seed(19680801)
N = 10
data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)]
data = np.array(data).T
cmap = plt.cm.coolwarm
rcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N)))
from matplotlib.lines import Line2D
custom_lines = [Line2D([0], [0], color=cmap(0.), lw=4),
Line2D([0], [0], color=cmap(.5), lw=4),
Line2D([0], [0], color=cmap(1.), lw=4)]
fig, ax = plt.subplots(figsize=(10, 5))
lines = ax.plot(data)
ax.legend(custom_lines, ['Cold', 'Medium', 'Hot']);
# -
# There is a lot more that you can do with outputs (such as including interactive outputs)
# with your book. For more information about this, see [the Jupyter Book documentation](https://executablebookproject.github.io/cli/start/overview.html)
|
jupyter_book/book_template/notebooks.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:Anaconda3]
# language: python
# name: conda-env-Anaconda3-py
# ---
# ## What is the purpose of programming?
#
# If you do much work on computers, eventually you find that there’s some task you’d like to automate. The goal of this course is to help you learn the basics of how to translate algorithmic or mathematical ideas you may have into instructions that your computer can understand and implement.
#
# Programming languages provide the interface that allows you to do so and each language does so in unique ways.
#
# While each language is different, the basic principles of programming translate across language and this is what we want you to understand.
#
# Python is one of the most popular languages in use today and is used by people from all fields.
# ## A Little Bit of Jupyter...
#
# As you can see in this notebook, there are boxes that you can type in. Each box can hold either code that Python will try to "interpret" or text that will just display. But you need to tell the Jupyter notebook which you intend by changing the type of the cell using the dropdown menu: Cell --> Cell Type --> Code/Markdown. To evaluate a cell, use Shift-Enter.
#
This is not a text cell but I am typing text
# This is a text cell and now the text is not interpreted.
# +
# If you want to put text in a code cell- use the # symbol before the text
# -
#
# Note you can insert or delete cells using these menus: Insert --> Cell Above/Below or Edit --> Delete Cells
# ## Functions and Arguments (a very brief intro.. )
# 
# ## Data Types
#
# We will talk more about this later but when you write things in Code, every item has a type... you can find out the type of something using the type() function
type(3) # this is an integer
type(3.0) # this is a float
type("3.0, abc") # this is a string
# Why did the above just print str when we called type() three times? This is because the jupyter notebook only puts the result of the last expression in a cell into the output. If you want to print something along the way there is a python command for that!
print(type(3.0))
print(type("3.0, abc"))
print("Hello World...")
#Printing two things on the same line
print("3", 4)
help(print)
# ### Casting
#What if I want to convert that float to an integer or a string?
int(3.0)
str(3.0)
print(type(str(3.0)))
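A couple of extra casting behaviors are worth knowing (a small illustrative aside, not part of the original exercises):

```python
# int() truncates toward zero; it does not round
print(int(3.9))    # 3
print(int(-3.9))   # -3
# round() rounds to the nearest integer instead
print(round(3.9))  # 4
# str() and float() convert back and forth
print(float("2.5") + 1)  # 3.5
```

Keeping truncation and rounding straight avoids a classic source of off-by-one surprises.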
# ## Python as a calculator!
# + pycharm={"name": "#%%\n"}
# Arithmetic operations
5 + 2
# + pycharm={"name": "#%%\n"}
5 - 2
# + pycharm={"name": "#%%\n"}
5 * 2
# -
# We can check the type of the output as before
type(5*2)
# + pycharm={"name": "#%%\n"}
# floating point by default
print(5 / 2)
print(type(5/2))
# + pycharm={"name": "#%%\n"}
5 // 2
# + pycharm={"name": "#%%\n"}
5 ** 2
# + pycharm={"name": "#%%\n"}
3 * * 4 #spaces matter if it changes the operation
# -
3*4 # here the lack of spaces doesn't change the operation
# + pycharm={"name": "#%%\n"}
5 ** 0.5
# + pycharm={"name": "#%%\n"}
# order of operations
(5 + 4/2) ** 2
# -
# ## Python is a smart calculator ...
"3.0" + "4.0" # the addition operation can be used to combine strings
"3.0" - "4.0"
"3" + 4 # we can't add a string and int
# ### Comparisons
print(3 == 3)
print(1 == 3)
print("abc" == "abc")
print("abc" == "ABC")
print(abc == "abc")
# ### Basics of Conditionals
# We just saw comparisons- comparisons return True or False and this can be used to tell the computer what to do.
# For example if 'str1' == 'str2': print("Yes they are equal!"). Let's try this... remember after conditions like if you need a colon and indentation of what you want to do.
if "abc" == "abc":
print("Yes they are equal!")
# What if that is not true and we want to do something? We can use the "not" operator for this
# +
print("Comparing first...")
if not "abc" == "abc":
print("No they are not equal!")
print("Comparing second...")
if not "ABC" == "abc":
print("No they are not equal!")
# -
# We can combine these into one statement using the "else" idea.. just as in english the commands are staged if .. do this, else.. do something else.
# +
if "abc" == "abc":
print("Yes they are equal!")
else:
print("No they are not equal!")
# -
# ## Keeping things around to use later... Variables!
#
# All those expressions output something but how do we store them to use them later? Python uses the '=' to assign things to certain names. The name then becomes a container for whatever is on the right of the equals
# + pycharm={"name": "#%%\n"}
a = 10
b = 20
# -
a + b
# What is the type of a?
type(a)
# But we can't use anything as names..
10=20
print = 10 #bad!
print(6)
# How do I delete a variable? (Little discussion about state )
del print
print(6)
# + pycharm={"name": "#%%\n"}
quarter = 1/4
half = 2 * quarter
print(half)
print(quarter)
print(half == quarter) # we can compare variables to see if they are the same
# -
# If there are a lot of variables printing is tedious... let's use the command whos. If we type 'whos' it will show all variables.
whos
# The golden ratio is a special number for many reasons it can be defined as (1 + sqrt(5))/2. Store the golden ratio in a variable. Next convert this to a string and print one string that says: "This is the golden ratio (golden ratio)" where the parentheses refer to the string version of the golden ratio variable.
#
# Store another variable b that has value 1.
# Calculate 1 + 1/goldenratio ... what does this look like? Verify this in python using a comparison.
#
#
# +
goldenratio = (1 + 5 ** 0.5)/2
print("This is the golden ratio " + str(goldenratio))
print(1+ (1/goldenratio))
print(goldenratio == 1+1/goldenratio)
# -
# ## Sequences
#
# If we want to put a sequence of things into one item (a la an excel row) how do we do that? One way is through the data type
# list.
#
x = [1,2,3,4,5]
print(x)
# How do I grab an element? In python indexing is done through square brackets[] we will see more of this later, but for a list the brackets will require an integer to select an index. Python indexes starting at 0!
x[0]
x[1.0]
x[-1] #you can index the end through negatives
x[-2]
#The : allows you to get all elements after or before the index
print(x[1:])
print(x[:4])
#lists can store other types too
y = ['1', 2, '3', 4.0]
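Slices can also take a step as a third number after another colon (a small illustrative extension of the indexing above):

```python
x = [1, 2, 3, 4, 5]
# start:stop:step -- the step skips elements
print(x[::2])    # [1, 3, 5]
print(x[1:4:2])  # [2, 4]
# a negative step walks backwards, giving a reversed copy
print(x[::-1])   # [5, 4, 3, 2, 1]
```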
# ## Basic loops
# How do I go through each element of y and print its type?
for element in y: #note the colon
print(type(element))
# What if I wanted to loop through y and just print the 3rd element?
#python uses the range() function to create a sequence of numbers (a range object in Python 3)
size_y = len(y)
print(range(size_y))
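In Python 3, `range` produces a lazy sequence rather than a list, which is why the line above prints `range(0, 4)` instead of the numbers themselves; wrapping it in `list()` makes them visible:

```python
# materialize a range to see its contents
print(list(range(4)))        # [0, 1, 2, 3]
# range also takes start, stop, and step arguments
print(list(range(2, 8, 2)))  # [2, 4, 6]
```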
# +
for i in range(size_y):
#index 2 is the 3rd element; this uses a conditional!
if i == 2:
print(y[i])
#print index of elements that equal '3'
for i in range(size_y):
if y[i] == '3':
print("Index of element that equals '3':", i)
# -
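An optional variation on the loop above: `enumerate` pairs each element with its index, avoiding the separate `range(len(...))` bookkeeping.

```python
y = ['1', 2, '3', 4.0]
# enumerate yields (index, element) pairs
for i, element in enumerate(y):
    if element == '3':
        print("Index of element that equals '3':", i)
```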
# HARDER Example: What if we for some reason couldn't calculate the size of y and still want to print the 3rd element? We can use what is known as a while loop.
i = 0
while True:
print("Index: " , i)
if i == 2:
print(y[i])
break
i += 1
|
Day1/morning_python_intro.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example: Bringing Multiple Pythons To A Problem
#
#
# ## Setting The Stage With A Pretend Scenario
#
# For demoing I wanted to find an example of something non-trivial to simulate an actual coding situation where writing straight CPython is insufficient performance wise and you have to start looking for solutions with Python alternatives.
#
# ### Scenario
# Your favorite band, 2 Pegasus Tasks, just released a new single. However, in their usual style, they did it with a twist.
#
# Instead of distributing the song on Spotify or some other distribution platform, for its initial release they segmented the audio file binary into tiny substrings and then inserted them according to a secret pattern into a bunch of longer strings to create an internet scavenger hunt.
#
# Any group that is able to piece the song together gets a cash prize on the condition that they don't share it or how they got it with others. They have provided a preliminary list of 300 strings, 100 of which they've confirmed do contain a chunk of the song, and 200 of which don't.
#
# ### Your Mission
#
# You and a group of friends have managed to scrape down all known instances of these string sequences and tried some linear analysis, but you can't make heads or tails of what they mean. When you analyze the sequences by hand, sometimes you think you've found a pattern, but then another example breaks it. It definitely looks like the function for the pattern will be non-linear in nature.
#
# So you have the idea to use a Support Vector Machine based on a string kernel to learn a boundary between what is part of the pattern and what is not in higher dimensional space. You find a whitepaper about string subsequence kernels that looks like just what you need. You search through libraries online like Scikit Learn and NLTK, but in this imaginary scenario, you can't find a decent implementation of this kernel function anywhere, so you'll have to write your own.
#
# ### Example 1: Just CPython
#
# After poring over the whitepaper and maybe some supplemental material to make sure you understand the concepts, you are able to create an implementation of the string-subsequence-kernel. You are ready to run it; let's simulate that by going to an older revision of our example code, which is just written in CPython with some profiling lines to show us how fast it is.
import numpy as np
import timeit
# +
"""
this code is derivative of https://github.com/helq/python-ssk/commit/6acee597ff37f7e7e12dd8651421a4d34c5dad70
by https://github.com/helq, which is licensed under Creative Commons Zero v1.0 Universal /
changes: removing lodhi assertions because we know it works, changing script handling into function,
adding timeit code to measure duration, changing inputs to exaggerate performance problem.
Do check that repo out if you actually want to learn about ssk/get a fast implementation for practical purposes.)
"""
# Kernel defined by Lodhi et al. (2002)
def ssk(s, t, n, lbda, accum=False):
dynamic = {}
def k_prim(s, t, i):
# print( "k_prim({},{},{})".format(s, t, i) )
if i == 0:
# print( "k_prim({},{},{}) => 1".format(s, t, i) )
return 1.
if min(len(s), len(t)) < i:
# print( "k_prim({},{},{}) => 0".format(s, t, i) )
return 0.
if (s,t,i) in dynamic:
return dynamic[(s,t,i)]
x = s[-1]
s_ = s[:-1]
indices = [i for i, e in enumerate(t) if e == x]
toret = lbda * k_prim(s_, t, i) \
+ sum( k_prim(s_, t[:j], i-1) * (lbda**(len(t)-j+1)) for j in indices )
# print( "k_prim({},{},{}) => {}".format(s, t, i, toret) )
dynamic[(s,t,i)] = toret
return toret
def k(s, t, n):
# print( "k({},{},{})".format(s, t, n) )
if n <= 0:
raise ValueError("Error, n must be bigger than zero")
if min(len(s), len(t)) < n:
# print( "k({},{},{}) => 0".format(s, t, n) )
return 0.
x = s[-1]
s_ = s[:-1]
indices = [i for i, e in enumerate(t) if e == x]
toret = k(s_, t, n) \
+ lbda**2 * sum( k_prim(s_, t[:j], n-1) for j in indices )
# print( "k({},{},{}) => {}".format(s, t, n, toret) )
return toret
if accum:
toret = sum( k(s, t, i) for i in range(1, min(n,len(s),len(t))+1) )
else:
toret = k(s, t, n)
# print( len(dynamic) )
return toret
def string_kernel(xs, ys, n, lbda):
if len(xs.shape) != 2 or len(ys.shape) != 2 or xs.shape[1] != 1 or ys.shape[1] != 1:
raise ValueError("The shape of the features is wrong, it must be (n,1)")
lenxs, lenys = xs.shape[0], ys.shape[0]
mat = np.zeros( (lenxs, lenys) )
for i in range(lenxs):
for j in range(lenys):
mat[i,j] = ssk(xs[i,0], ys[j,0], n, lbda, accum=True)
mat_xs = np.zeros( (lenxs, 1) )
mat_ys = np.zeros( (lenys, 1) )
for i in range(lenxs):
mat_xs[i] = ssk(xs[i,0], xs[i,0], n, lbda, accum=True)
for j in range(lenys):
mat_ys[j] = ssk(ys[j,0], ys[j,0], n, lbda, accum=True)
return np.divide(mat, np.sqrt(mat_ys.T * mat_xs))
def evaluate_ssk():
print("Testing...")
## code for the pretend scenario, long binary sequences
s1 = np.array(["101110010010111110101011100000101000010010111100101011010011011010110111", \
"101000010010111111111111110000010100001001011110010101101001101101011011", \
"101000010010111110101011100011111111101001011110010101101001101101011011", \
"10111111001011111010101110000010100001001011110010101101001101101011011"]).reshape((4,1))
s2 = np.array(["10100001001011111111111110000010100001001011110010101101001101101011011", \
"10100001001011111010101110000010100001001011110010111111111101101011011", \
"10100001001011111010101110000010100011101011110010101101001101101011011", \
"10100001001011111010101110110010100001001011110010101101001111111011011"]).reshape((4, 1))
# code for pretend scenario, we are looking for common subsequences up to 11 chars in length (a bit more than a byte)
print( string_kernel(s1, s2, 11, 1.) )
print(f"Running String Kernel on the strings took: \
{timeit.timeit('evaluate_ssk()', setup='from __main__ import evaluate_ssk', number=1)} seconds", )
# -
# ### CPython Is Too Slow. . .
#
# Oh, no! I don't know what it shows on your machine, but mine took ~16 seconds and that is no bueno. The real strings were much longer than these, and the SVM is going to need to pairwise compare thousands of them, maybe at even longer subsequence values. If you want to be able to iterate and tune your SVM model quickly, we need to build a better mousetrap.
#
# What have we got? Well, Pypy is supposed to be a faster drop in replacement for CPython right? And even though we know it won't help us with the already optimized numpy operations, looks like a lot of the bottleneck is good old fashioned python for loops. Pypy should be able to figure out how to compile those down to machine code for us.
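As a side note, the speedup the kernel already gets from its `dynamic` dictionary is ordinary memoization; a toy illustration with `functools.lru_cache` (not part of the original code) shows the effect on a naive recursion:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # without the cache this recursion is exponential; with it, linear
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # returns instantly thanks to memoization
```

The same caching idea is what keeps `k_prim` from recomputing identical subproblems, which is why the remaining cost is dominated by plain Python loop overhead.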
# ## Pypy To The Rescue?
#
# So for this example, we are going to run the same code but swap out our Python 3 kernel in Jupyter for a PyPy one. Here are the steps I followed to achieve this, which should hopefully work for you if you are following along with the conda-based setup I provided in the README. They are based on this answer on Stack Overflow by jadelord on May 24, 2019: https://stackoverflow.com/questions/33850577/is-it-possible-to-run-a-pypy-kernel-in-the-jupyter-notebook
#
# from your terminal:
# `conda install -c anaconda ipykernel`
# `ipython kernel install --user --name=PyPy3`
#
# verify that it shows up with:
# `jupyter kernelspec list`
#
# Then you can run the above cell with Pypy by going to the jupyter menu, Kernel -> Change Kernel -> Pypy3
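As a small sanity check not in the original steps, the `platform` module can report which interpreter a cell is actually running on after a kernel switch:

```python
import platform

# prints "CPython" under the default kernel and "PyPy" under the PyPy3 one
print(platform.python_implementation())
```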
#
# ### How did it go?
#
# For me, it was around the same amount of time as CPython, if not a bit slower. Hard to know why for sure just by looking at it if we aren't familiar with the exact things Pypy is better or worse at.
#
# Maybe it is because this isn't actually that much code, so the JIT compiler's warm up time eats up all the optimizations we get from it? Maybe because Pypy3 is actually slower to run some of the numpy functions? Maybe there is code in here that CPython's compiler optimizes down which the Pypy interpreter is missing. All interesting thoughts. Maybe PyPy3 would be faster if we were doing this more times or with longer sequences? Let's just keep it in our back pocket for now.
#
# ## OK, what about Cython? (Spoiler alert, helq's original solution so it probably works well)
#
# Ok, so at this point in our pretend scenario, our first optimization attempt hasn't worked out but we haven't lost hope. If these loops were running in C code that had been dynamically linked instead of going through the process of being bytecode processed through a loop, they'd probably be significantly faster. But we do need to rewrite the code a bit to get that optimization benefit. At least it is still python though.
#
# Also a note, there are other clever things done in this below code that you wouldn't necessarily have to do to get the speed up that comes with Cython, if you plan to peruse it yourself and determine how it is all working. Helq mentions they got a trick from the Shogun C++ implementation of string ssk as well.
#
# First, let the jupyter notebook know you are going to use you local installation of Cython for compilation with a magic string:
# %load_ext Cython
# and here is the new code complete with some cython annotations and cdef statements for the python types. Also notice another magic string that says we would like to compile this cell with cython
# + language="cython"
#
# """
# This cell and the one below it are derivative of https://github.com/helq/python-ssk
# by https://github.com/helq, which is licensed under Creative Commons Zero v1.0 Universal /
# changes: consolidating the pyx and main.py modules into this notebook, moved module scoped code into functions and added
# timer code to measure duration.
# """
#
# import numpy as np
# cimport numpy as np
#
# from cpython cimport array
# import array
#
# cimport cython
#
# def ssk(s, t, int n, float lbda, accum=False):
# """s and t are strings, either numpy.str_ or python str, or a list of chars"""
# s_array = array.array('l', [ord(c) for c in s])
# t_array = array.array('l', [ord(c) for c in t])
# return ssk_array(s_array, t_array, n, lbda, accum)
#
# # Kernel defined by Lodhi et al. (2002)
# @cython.boundscheck(False) # turn off bounds-checking for entire function
# @cython.wraparound(False) # turn off negative index wrapping for entire function
# def ssk_array(array.array s_, array.array t_, int n, float lbda, accum=False):
# cdef int lens, lent
# cdef int i, sj, tk
# cdef float toret
# cdef long[:] s # this reduces the overhead 10x fold!!!
# cdef long[:] t
#
# s = s_ if s_.typecode == 'l' else array.array('l', [int(c) for c in s_])
# t = t_ if t_.typecode == 'l' else array.array('l', [int(c) for c in t_])
#
# lens, lent = len(s), len(t)
# #k_prim = (-1)*np.ones( (n+1, lens, lent) )
# cdef np.ndarray[np.float64_t, ndim=3] \
# k_prim = np.zeros((n, lens, lent), dtype=np.float64)
#
# k_prim[0,:,:] = 1
#
# for i in range(1,n):
# for sj in range(i,lens):
# toret = 0.
# for tk in range(i,lent):
# if s[sj-1]==t[tk-1]: # trick taken from shogun implementation of SSK
# toret = lbda * (toret + lbda*k_prim[i-1,sj-1,tk-1])
# else:
# toret *= lbda
# k_prim[i,sj,tk] = toret + lbda * k_prim[i, sj-1, tk]
#
# cdef int start = 0 if accum else n-1
# cdef float k = 0.
#
# for i in range(n):
# for sj in range(i,lens):
# for tk in range(i,lent):
# if s[sj]==t[tk]:
# k += lbda*lbda*k_prim[i,sj,tk]
#
# # print( [len(list(i for (sj,tk,i) in k_prim if i==m-1)) for m in range(n)] )
# return k
#
# def string_kernel(xs, ys, n, lbda):
# """xs and ys are numpy arrays of strings or arrays of ints, n an integer and lbda a bool"""
# if len(xs.shape) != 2 or len(ys.shape) != 2 or xs.shape[1] != 1 or ys.shape[1] != 1:
# raise "The shape of the features is wrong, it must be (n,1)"
#
# cdef int lenxs, lenys
# cdef int i, j
# cdef np.ndarray[np.float64_t, ndim=2] mat, mat_xs, mat_ys
# lenxs, lenys = xs.shape[0], ys.shape[0]
#
# mat = np.zeros((lenxs, lenys))
#
# ssk_fun = ssk_array if xs.dtype == 'O' and isinstance(xs[0,0], array.array) else ssk
#
# # If both lists are equal, then the resulting matrix is symmetric; there is no need to
# # calculate the whole thing
# if lenxs == lenys and np.array_equal(xs, ys):
# for i in range(lenxs):
# for j in range(i,lenys):
# mat[j,i] = mat[i,j] = ssk_fun(xs[i,0], ys[j,0], n, lbda, accum=True)
#
# mat_xs = mat_ys = mat.diagonal().reshape((lenxs, 1))
#
# else:
# for i in range(lenxs):
# for j in range(lenys):
# mat[i,j] = ssk_fun(xs[i,0], ys[j,0], n, lbda, accum=True)
#
# mat_xs = np.zeros((lenxs, 1))
# mat_ys = np.zeros((lenys, 1))
#
# for i in range(lenxs):
# mat_xs[i] = ssk_fun(xs[i,0], xs[i,0], n, lbda, accum=True)
# for j in range(lenys):
# mat_ys[j] = ssk_fun(ys[j,0], ys[j,0], n, lbda, accum=True)
#
# return np.divide(mat, np.sqrt(mat_ys.T * mat_xs))
# +
def evaluate_cython_ssk():
print("Testing...")
## code for the pretend scenario, long binary sequences
s1 = np.array(["101110010010111110101011100000101000010010111100101011010011011010110111", \
"101000010010111111111111110000010100001001011110010101101001101101011011", \
"101000010010111110101011100011111111101001011110010101101001101101011011", \
"10111111001011111010101110000010100001001011110010101101001101101011011"]).reshape((4,1))
s2 = np.array(["10100001001011111111111110000010100001001011110010101101001101101011011", \
"10100001001011111010101110000010100001001011110010111111111101101011011", \
"10100001001011111010101110000010100011101011110010101101001101101011011", \
"10100001001011111010101110110010100001001011110010101101001111111011011"]).reshape((4, 1))
# code for pretend scenario, we are looking for common subsequences up to 11 chars in length (a bit more than a byte)
print( string_kernel(s1, s2, 11, 1.) )
print(f"Running String Kernel on the strings took: \
{timeit.timeit('evaluate_cython_ssk()', setup='from __main__ import evaluate_cython_ssk', number=1)} seconds", )
# -
# Ah, that's more like it. Much faster. Wow!
#
# #### Note, as long as you recompile the Cython cell, you can run this with either CPython3 or Pypy3
#
# That should be fast enough, now imaginary you can take this kernel function and build your Support Vector Machine and find the missing parts of your song!
#
# Thanks for trying this out! Hope it gave a sense for trying out different python variants in search of a solution.
|
Examples/python_variants.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/cxbxmxcx/Evolutionary-Deep-Learning/blob/main/EDL_4_ES.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ZWk8QVItlplD"
# Original Source: https://github.com/DEAP/deap/blob/master/examples/ga/onemax_numpy.py
#
# DEAP is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of
# the License, or (at your option) any later version.
#
# DEAP is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public
# License along with DEAP. If not, see <http://www.gnu.org/licenses/>.
# + id="ct-pwA_aHMMa" colab={"base_uri": "https://localhost:8080/"} outputId="744859a9-505a-445c-bf00-5fadcc1d6ecc"
#@title Install DEAP
# !pip install deap --quiet
# + id="epVL5qPDHCPW"
#@title Imports
import array
import random
import time
import numpy as np
from deap import algorithms
from deap import base
from deap import benchmarks
from deap import creator
from deap import tools
# %matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import clear_output
# + id="WPvkex5Fltin"
#@title ES Bounding Hyperparameters
IND_SIZE = 6
NGEN = 1000
MIN_VALUE = 4
MAX_VALUE = 5
MIN_STRATEGY = 0.5
MAX_STRATEGY = 3
CXPB = .6
MUTPB = .3
GEN_OUTPUT = 25
MAX_TIME = 5
# + colab={"base_uri": "https://localhost:8080/", "height": 286} id="KjINavFm1_Z1" outputId="efa73b64-c03e-4f0a-b17a-179c9fda2614"
#@title Generate regression data
equation_form = "polynomial" #@param ["polynomial", "abs", "step"]
X_START = -5
X_END = 5
X_STEP = 0.5
def equation(x):
if equation_form == "polynomial":
return (2*x + 3*x**2 + 4*x**3 + 5*x**4 + 6*x**5 + 10)
elif equation_form == "abs":
return abs(x)
else:
return np.where(x>1, 1, 0)
X = np.array([x for x in np.arange(X_START, X_END, X_STEP)])
Y = equation(X)
data = list(zip(X, Y))
plt.plot(X,Y)
# + id="s3r8TiOjHYyy"
#@title Create the Creator
creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", array.array, typecode="d", fitness=creator.FitnessMin, strategy=None)
creator.create("Strategy", array.array, typecode="d")
# + id="QPQmqP6il-o1"
#@title Individual generator and decorate functions
def generateES(icls, scls, size, imin, imax, smin, smax):
ind = icls(random.uniform(imin, imax) for _ in range(size))
ind.strategy = scls(random.uniform(smin, smax) for _ in range(size))
return ind
def checkStrategy(minstrategy):
def decorator(func):
def wrapper(*args, **kargs):
children = func(*args, **kargs)
for child in children:
for i, s in enumerate(child.strategy):
if s < minstrategy:
child.strategy[i] = minstrategy
return children
return wrapper
return decorator
# + id="zz0FP3ibmCQi" colab={"base_uri": "https://localhost:8080/"} outputId="ff3c3858-32f8-468c-85ed-4339f84862ca"
#@title Setup the Toolbox
print(checkStrategy(MIN_STRATEGY))
toolbox = base.Toolbox()
toolbox.register("individual", generateES, creator.Individual, creator.Strategy,
IND_SIZE, MIN_VALUE, MAX_VALUE, MIN_STRATEGY, MAX_STRATEGY)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("mate", tools.cxESBlend, alpha=0.1)
toolbox.register("mutate", tools.mutESLogNormal, c=1.0, indpb=0.03)
toolbox.register("select", tools.selTournament, tournsize=3)
toolbox.decorate("mate", checkStrategy(MIN_STRATEGY))
toolbox.decorate("mutate", checkStrategy(MIN_STRATEGY))
# + id="zYiDae474ggD"
#@title Evaluation of fitness
def pred(ind, x):
y_ = 0.0
for i in range(1,IND_SIZE):
y_ += ind[i-1]*x**i
y_ += ind[IND_SIZE-1]
return y_
def fitness(ind, data):
mse = 0.0
for x, y in data:
y_ = pred(ind, x)
mse += (y - y_)**2
return mse/len(data),
# fitness eval
toolbox.register("evaluate", fitness, data=data)
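The trailing comma in `return mse/len(data),` matters: DEAP expects fitness values to be tuples, even for a single objective. The same MSE computation in isolation (illustrative names and values, not from the actual run):

```python
# mean squared error of a candidate polynomial against sample points,
# returned as a 1-tuple the way DEAP evaluate functions must be
def mse_fitness(coeffs, points):
    total = 0.0
    for x, y in points:
        # coeffs[:-1] are the x^1..x^(n-1) terms, coeffs[-1] the constant
        y_hat = sum(c * x ** (i + 1) for i, c in enumerate(coeffs[:-1])) + coeffs[-1]
        total += (y - y_hat) ** 2
    return total / len(points),

print(mse_fitness([2.0, 3.0], [(0.0, 3.0), (1.0, 5.0)]))  # (0.0,)
```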
# + id="WJ2qluCC7oSS"
#@title Plot Fitness
def plot_fitness(g, best, pop, logbook):
Y_ = np.array([pred(best, x) for x in X])
clear_output()
best = [round(b,1) for b in best]
print(f"Generation {g}, Best {best}")
print(logbook.stream)
fits = [f.fitness.values[0] for f in pop]
plt.hist(fits)
plt.show()
plt.scatter(X,Y)
plt.plot(X,Y_, 'r')
plt.show()
# + id="FC0-B2wAID9Z" colab={"base_uri": "https://localhost:8080/", "height": 602} outputId="ce8d8a53-b311-444a-e14c-a4db57fbd79a"
#@title Run the Evolution
random.seed(64)
MU, LAMBDA = 250, 1000
pop = toolbox.population(n=MU)
hof = tools.HallOfFame(1)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("avg", np.mean)
stats.register("std", np.std)
stats.register("min", np.min)
stats.register("max", np.max)
start = time.time()
for g in range(NGEN):
pop, logbook = algorithms.eaMuCommaLambda(pop, toolbox, mu=MU, lambda_=LAMBDA,
cxpb=CXPB, mutpb=MUTPB, ngen=1, stats=stats, halloffame=hof, verbose=False)
if (g+1) % GEN_OUTPUT == 0:
plot_fitness(g, hof[0], pop, logbook)
end = time.time()
if end-start > MAX_TIME:
break
best = [round(i,1) for i in hof[0]]
print("Best individual is ", best, round(hof[0].fitness.values[0],2))
|
EDL_4_ES.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <font color='darkblue'>Understanding corecursion and recursion</font>
# <b>Corecursion is composing computing steps by using the output of one step as the input of the next one, starting with the first step. Recursion is the same operation, but starting with the last step</b>. In recursion, you have to delay evaluation until you encounter a base condition (<font color='brown'>corresponding to the first step of corecursion</font>). Let’s say you have only two instructions in your programming language: incrementation (<font color='brown'>adding 1 to a value</font>) and decrementation (<font color='brown'>subtracting 1 from a value</font>). As an example, you’ll implement addition by composing these instructions.
#
# ## <font color='darkgreen'>Exploring corecursive and recursive addition examples</font>
# To add two numbers, x and y, you can do the following:
# * If y = 0, return x.
# * Otherwise, increment x, decrement y, and start again.
#
# This can be written as follows:
# ```python
# def add(x, y):
# while y > 0:
# y -= 1
# x += 1
#
# return x
# ```
#
# The recursive version is trickier, but still simple:
# ```python
# def addRec(x, y):
# return x if y==0 else addRec(x+1, y-1)
# ```
#
# Both approaches seem to work, but if you try the recursive version with big numbers, you may have a surprise:
# +
def addRec(x, y):
return x if y == 0 else addRec(x+1, y-1)
addRec(1, 1001)
# -
# ## <font color='darkgreen'>Abstracting recursion</font>
# Actually, we can use heap space instead of the call stack to store pending method calls, by doing the following:
# * Represent unevaluated method calls
# * Store them in a stack-like structure until you encounter a terminal condition
# * Evaluate the calls in “last in, first out” (LIFO) order
#
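The three steps above can be sketched with a simple trampoline (a hand-rolled illustration, not the book's implementation):

```python
# Represent an unevaluated call as a zero-argument lambda (a "thunk").
# The driver loop keeps invoking thunks until a plain value comes back,
# so the pending work lives on the heap instead of the call stack.
def trampoline(fn, *args):
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def add(x, y):
    # return a thunk instead of recursing directly
    return x if y == 0 else (lambda: add(x + 1, y - 1))

print(trampoline(add, 1, 100000))  # no RecursionError: 100001
```

Because each step returns before the next begins, the stack depth stays constant no matter how large `y` is.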
# Most examples of recursive methods use the <b><a href='https://en.wikipedia.org/wiki/Factorial'>factorial function</a></b>. Other examples use the <b><a href='https://en.wikipedia.org/wiki/Fibonacci_number'>Fibonacci series</a></b>. The factorial method presents no particular interest besides being recursive. The Fibonacci series is more interesting, and we’ll come back to it later. To start with, you’ll use the much simpler recursive addition method shown at the beginning of this chapter.
#
# <b>Recursive and corecursive functions are both functions where f(n) is a composition of f(n - 1), f(n - 2), f(n - 3), and so on, until a terminal condition is encountered</b> (<font color='brown'>generally f(0) or f(1)</font>). Remember that in traditional programming, composing generally means composing the results of an evaluation. This means that composing function f(a) and g(a) consists of evaluating g(a) and then using the result as input to f. But it doesn’t have to be done that way. In chapter 2, you developed a compose method to compose functions, and a higherCompose function to do the same thing. Neither evaluated the composed functions. They only produced another function that could be applied later.
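A minimal version of such a compose helper, which builds a new function without evaluating either argument (an illustrative sketch, not the chapter-2 code itself):

```python
def compose(f, g):
    # returns a new function; neither f nor g runs until it is applied
    return lambda x: f(g(x))

inc = lambda x: x + 1
double = lambda x: x * 2

inc_then_double = compose(double, inc)  # nothing evaluated yet
print(inc_then_double(5))  # double(inc(5)) = 12
```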
|
Ch4_Recursion.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Language detection
# Here we use the langid library:
#
# https://pypi.org/project/langid/
# ## Imports
# +
import os
from collections import defaultdict
import langid
import pycountry
# -
# ## Forcing the algorithm to detect only French and Dutch
# Since we know the language is French or Dutch, we restrict detection to those two to reduce misclassification
langid.set_languages(['fr', 'nl'])
# ## Listing all the documents
root = "../data/txt/"
txts = [doc for doc in os.listdir(root) if "197" in doc]
print(f"{len(txts)} TXT files found")
# ## Detecting the language of each document
# We will read each file, detect its language, and increment `lang_dict` each time a language is detected.
#
# **Important**: to run detection on all the documents, set `limit = None` below.
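The counting pattern used below relies on `defaultdict(int)`, which creates missing keys with value 0 on first access; in isolation:

```python
from collections import defaultdict

counts = defaultdict(int)
for lang in ["fr", "nl", "fr", "fr"]:
    counts[lang] += 1  # no KeyError on the first occurrence of a key

print(dict(counts))  # {'fr': 3, 'nl': 1}
```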
limit = 100
# limit = None
lang_dict = defaultdict(int)
txts = txts[:limit] if limit else txts
for i, txt in enumerate(sorted(txts)):
if txt.endswith("txt"):
if i % 50 == 0:
print(f'{i} document(s) processed...')
text = open(os.path.join(root, txt)).read()
text_length = len(text)
if text_length > 20:
lang, conf = langid.classify(text)
lang_dict[lang] += 1
else:
# If the text is shorter than 20 characters, treat it as unknown
print(f"{txt} contains only {text_length} characters, treating as unknown")
lang_dict['n/a'] += 1
print("Done")
# ## Displaying the number of documents per language
for lang_code, nb_docs in lang_dict.items():
language = pycountry.languages.get(alpha_2=lang_code)
try:
lang_name = language.name
except AttributeError:
lang_name = language
print(f"{lang_name}\t{nb_docs}")
|
module5/s1_language_detection.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:torchenv]
# language: python
# name: conda-env-torchenv-py
# ---
# +
#default_exp reformer
# -
#hide
from nbdev.showdoc import *
#hide
# %load_ext autoreload
# %autoreload 2
# +
#export
from torch.autograd.function import Function
from torch.utils.checkpoint import get_device_states, set_device_states
from functools import wraps
from fastai.basics import *
from reformer_fastai.core import *
from reformer_fastai.layers import *
from reformer_fastai.attention import *
from reformer_fastai.transformer import LMMixin, EncDecMixin
# -
#hide
#skip
from torch.autograd import profiler
# # Reformer
# Credits to @lucidrains https://github.com/lucidrains/reformer-pytorch
# ## Helper classes
#export
class Chunk(Module):
"Applies fn to input chunked along dim"
def __init__(self, chunks:int, fn:Module, dim:int=-1):
store_attr()
def forward(self, x, **kwargs):
if self.chunks == 1:
return self.fn(x, **kwargs)
chunks = x.chunk(self.chunks, dim = self.dim)
return torch.cat([self.fn(c, **kwargs) for c in chunks], dim = self.dim)
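The chunk-apply-concatenate pattern, stripped of torch (a plain-Python sketch of the same idea, with illustrative names):

```python
def chunked_apply(fn, xs, n_chunks):
    # split xs into n_chunks pieces, apply fn to each, re-join the results;
    # trades peak memory for a little bookkeeping, like Chunk does
    size = (len(xs) + n_chunks - 1) // n_chunks
    out = []
    for i in range(0, len(xs), size):
        out.extend(fn(xs[i:i + size]))
    return out

print(chunked_apply(lambda chunk: [v * 2 for v in chunk], [1, 2, 3, 4, 5], 2))
# [2, 4, 6, 8, 10]
```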
#export
class ChunkedFeedForward(Module):
"Applies positionwise feed-forward layer to input chunked along dim"
def __init__(self, d:int, d_ff:int=None, chunks:int=1, dropout:float=0., dim:int=-1):
d_ff = default(d_ff, 4*d)
self.net = nn.Sequential(
nn.Linear(d, d_ff),
nn.GELU(),
nn.Dropout(dropout),
nn.Linear(d_ff, d),
nn.Dropout(dropout)
)
self.chunks = chunks
self.dim = dim
def forward(self, x, **kwargs):
if self.chunks == 1:
return self.net(x)
chunks = x.chunk(self.chunks, dim = self.dim)
return torch.cat([self.net(c) for c in chunks], dim = self.dim)
bs = 4
sl = 64
d = 128
x = torch.randn(bs, sl, d)
ff = ChunkedFeedForward(d, chunks=8, dim=1)
out = ff(x)
assert out.size() == (bs, sl, d)
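# Because the feed-forward net is positionwise, chunking along the sequence
# dimension trades peak memory for extra passes without changing the result.
# A quick numeric check with a plain `nn.Linear` standing in for the FF net
# (illustrative sketch, not part of the library):
import torch
torch.manual_seed(0)
net = torch.nn.Linear(8, 8)
x = torch.randn(4, 16, 8)
full = net(x)
chunked = torch.cat([net(c) for c in x.chunk(4, dim=1)], dim=1)
assert torch.allclose(full, chunked, atol=1e-6)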
#hide
ff = PostNorm(d, Residual(ChunkedFeedForward(d, chunks=8, dim=1)))
out = ff(x)
assert out.size() == (bs, sl, d)
# +
#hide
#skip
# with profiler.profile(record_shapes=True, profile_memory=True) as prof:
# with profiler.record_function("fwd"):
# out = ff(x)
# print(prof.key_averages(group_by_input_shape=True).table(sort_by="cpu_memory_usage", row_limit=10))
# -
#export
class Deterministic(Module):
"""
Wrapper module to ensure determinism for backward pass
following example for saving and setting rng here
https://pytorch.org/docs/stable/_modules/torch/utils/checkpoint.html
"""
def __init__(self, net:Module):
self.net = net
self.cpu_state = None
self.cuda_in_fwd = None
self.gpu_devices = None
self.gpu_states = None
def record_rng(self, *args):
self.cpu_state = torch.get_rng_state()
if torch.cuda._initialized:
self.cuda_in_fwd = True
self.gpu_devices, self.gpu_states = get_device_states(*args)
def forward(self, *args, record_rng = False, set_rng = False, **kwargs):
if record_rng:
self.record_rng(*args)
if not set_rng:
return self.net(*args, **kwargs)
rng_devices = []
if self.cuda_in_fwd:
rng_devices = self.gpu_devices
with torch.random.fork_rng(devices=rng_devices, enabled=True):
torch.set_rng_state(self.cpu_state)
if self.cuda_in_fwd:
set_device_states(self.gpu_devices, self.gpu_states)
return self.net(*args, **kwargs)
# +
#TODO add tests here
# -
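# The determinism mechanism in `Deterministic` boils down to recording the RNG
# state before a stochastic forward and replaying it during backward. A minimal
# illustration of that record/replay idea with a bare dropout layer (sketch of
# the mechanism, not a test of the class itself):
import torch
drop = torch.nn.Dropout(p=0.5)   # modules default to training mode, so masks are random
x = torch.ones(4, 8)
state = torch.get_rng_state()    # record the CPU rng state (cf. record_rng)
y1 = drop(x)
torch.set_rng_state(state)       # replay it (cf. set_rng=True)
y2 = drop(x)
assert torch.equal(y1, y2)       # identical dropout masks both times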
#export
# heavily inspired by https://github.com/RobinBruegger/RevTorch/blob/master/revtorch/revtorch.py
class ReversibleBlock(Module):
"Applies f and g in reversible manner. Avoids storing outputs for backpropagation"
def __init__(self, f:Module, g:Module, depth=None, send_signal=False):
store_attr('depth, send_signal')
self.f = Deterministic(f)
self.g = Deterministic(g)
def forward(self, x, f_args = {}, g_args = {}):
x1, x2 = torch.chunk(x, 2, dim=2)
y1, y2 = None, None
if self.send_signal:
f_args['_reverse'] = g_args['_reverse'] = False
f_args['_depth'] = g_args['_depth'] = self.depth
with torch.no_grad():
y1 = x1 + self.f(x2, record_rng=self.training, **f_args)
y2 = x2 + self.g(y1, record_rng=self.training, **g_args)
return torch.cat([y1, y2], dim=2)
def backward_pass(self, y, dy, f_args = {}, g_args = {}):
y1, y2 = torch.chunk(y, 2, dim=2)
del y
dy1, dy2 = torch.chunk(dy, 2, dim=2)
del dy
if self.send_signal:
f_args['_reverse'] = g_args['_reverse'] = True
f_args['_depth'] = g_args['_depth'] = self.depth
with torch.enable_grad():
y1.requires_grad = True
gy1 = self.g(y1, set_rng=True, **g_args)
torch.autograd.backward(gy1, dy2)
with torch.no_grad():
x2 = y2 - gy1
del y2, gy1
dx1 = dy1 + y1.grad
del dy1
y1.grad = None
with torch.enable_grad():
x2.requires_grad = True
fx2 = self.f(x2, set_rng=True, **f_args)
torch.autograd.backward(fx2, dx1, retain_graph=True)
with torch.no_grad():
x1 = y1 - fx2
del y1, fx2
dx2 = dy2 + x2.grad
del dy2
x2.grad = None
x = torch.cat([x1, x2.detach()], dim=2)
dx = torch.cat([dx1, dx2], dim=2)
return x, dx
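# The backward pass above relies on the coupling being exactly invertible:
# given y1 = x1 + f(x2) and y2 = x2 + g(y1), the inputs can be recomputed from
# the outputs alone as x2 = y2 - g(y1), then x1 = y1 - f(x2). A numeric check
# with plain linear layers standing in for f and g (illustrative sketch):
import torch
torch.manual_seed(0)
f_ = torch.nn.Linear(8, 8)
g_ = torch.nn.Linear(8, 8)
x1, x2 = torch.randn(2, 8), torch.randn(2, 8)
with torch.no_grad():
    y1 = x1 + f_(x2)
    y2 = x2 + g_(y1)
    x2_rec = y2 - g_(y1)       # invert the second coupling
    x1_rec = y1 - f_(x2_rec)   # then the first
assert torch.allclose(x1, x1_rec, atol=1e-5)
assert torch.allclose(x2, x2_rec, atol=1e-5)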
bs = 4
sl = 64
d = 128
x = torch.randn(bs, sl, d)
# ReversibleBlock operates on the input duplicated along the feature dim
x2 = torch.cat([x, x], dim=-1)
attn = Attention(d)
ff = ChunkedFeedForward(d, chunks=8, dim=-2)
revblock = ReversibleBlock(attn, ff)
out = revblock(x2)
assert out.size() == (bs, sl, d*2)
# no computation graph is built in the forward pass, so calling backward raises
out = torch.stack(out.chunk(2, dim=-1)).mean(dim=0)
try: out.mean().backward()
except RuntimeError as e: print(e)
# +
#hide
#skip
# with profiler.profile(record_shapes=True, profile_memory=True) as prof:
# with profiler.record_function("fwd"):
# out = revblock(x2)
# print(prof.key_averages(group_by_input_shape=True).table(sort_by="cpu_memory_usage", row_limit=10))
# -
#export
class IrreversibleBlock(Module):
"Mimics ReversibleBlock computation but gradients are computed as ussual"
def __init__(self, f, g):
store_attr()
def forward(self, x, f_args={}, g_args={}):
x1, x2 = torch.chunk(x, 2, dim=2)
y1 = x1 + self.f(x2, **f_args)
y2 = x2 + self.g(y1, **g_args)
return torch.cat([y1, y2], dim=2)
attn = Attention(d)
ff = ChunkedFeedForward(d, chunks=8, dim=-2)
irrevblock = IrreversibleBlock(attn, ff)
out = irrevblock(x2)
assert out.size() == (bs, sl, d*2)
# +
#hide
#skip
# with profiler.profile(record_shapes=True, profile_memory=True) as prof:
# with profiler.record_function("fwd"):
# out = irrevblock(x2)
# print(prof.key_averages(group_by_input_shape=True).table(sort_by="cpu_memory_usage", row_limit=10))
# -
# ### Reversible Half-Residual
# Work in progress: `backward_pass` below completes the author's unfinished stub
# as a sketch (mirroring `ReversibleBlock.backward_pass` for a single stream);
# not used elsewhere in this notebook.
class RevSwap(Module):
    def forward(self, x1, x2):
        return x2, x1
class RevHalfResidual(Module):
    "Reversible Half-Residual: adds the submodule output of stream 2 to stream 1"
    def __init__(self, submodule:Module, depth=None, send_signal=False):
        store_attr('depth, send_signal')
        self.submodule = Deterministic(submodule)
    def forward(self, x1, x2):
        return x1 + self.submodule(x2, record_rng=self.training), x2
    def backward_pass(self, y1, y2, dy1, dy2):
        # recompute x1 = y1 - f(y2) and route gradients through f
        with torch.enable_grad():
            y2.requires_grad = True
            fy2 = self.submodule(y2, set_rng=True)
            torch.autograd.backward(fy2, dy1)
        with torch.no_grad():
            x1 = y1 - fy2
            dy2 = dy2 + y2.grad
            y2.grad = None
        return x1, y2.detach(), dy1, dy2
#export
class _ReversibleFunction(Function):
@staticmethod
def forward(ctx, x, blocks, kwargs):
ctx.kwargs = kwargs
for block in blocks:
x = block(x, **kwargs)
ctx.y = x.detach()
ctx.blocks = blocks
return x
@staticmethod
def backward(ctx, dy):
y = ctx.y
kwargs = ctx.kwargs
for block in ctx.blocks[::-1]:
y, dy = block.backward_pass(y, dy, **kwargs)
return dy, None, None
#export
class ReversibleSequence(Module):
"""
    Stack of ReversibleBlocks constructed from blocks. Applies ReversibleBlocks if
    sequence length is > rev_thres, else IrreversibleBlocks.
    """
    def __init__(self, blocks, rev_thres = 0, send_signal = False):
        self.rev_thres = rev_thres # uses revblocks if seq_len > rev_thres else irrev_blocks
self.blocks = nn.ModuleList([ReversibleBlock(f, g, depth, send_signal) for depth, (f, g) in enumerate(blocks)])
self.irrev_blocks = nn.ModuleList([IrreversibleBlock(f=f, g=g) for f, g in blocks])
def forward(self, x, arg_route = (True, True), **kwargs):
reverse = x.shape[1] > self.rev_thres
blocks = self.blocks if reverse else self.irrev_blocks
f_args, g_args = map(lambda route: kwargs if route else {}, arg_route)
block_kwargs = {'f_args': f_args, 'g_args': g_args}
if not reverse:
for block in blocks:
x = block(x, **block_kwargs)
return x
return _ReversibleFunction.apply(x, blocks, block_kwargs)
bs = 4
sl = 64
d = 128
x = torch.randn(bs, sl, d)
x2 = torch.cat([x, x], dim=-1)
blocks = []
for i in range(2):
f = PreNorm(d, Attention(d))
g = PreNorm(d, FeedForward(d))
blocks.append(nn.ModuleList([f, g]))
layers = ReversibleSequence(nn.ModuleList(blocks))
out = layers(x2)
assert out.size() == (bs, sl, 2*d)
bs = 4
sl = 64
d = 128
x = torch.randn(bs, sl, d)
x2 = torch.cat([x, x], dim=-1)
blocks = []
for i in range(2):
f = PreNorm(d, LSHSelfAttention(d, bucket_size=16))
g = PreNorm(d, FeedForward(d))
blocks.append(nn.ModuleList([f, g]))
layers = ReversibleSequence(nn.ModuleList(blocks))
out = layers(x2, arg_route=(True, False), _reverse=False, _depth=1)
assert out.size() == (bs, sl, 2*d)
try: out.mean().backward()
except RuntimeError as e: print(e)
# +
#hide
#skip
# with profiler.profile(record_shapes=True, profile_memory=True) as prof:
# with profiler.record_function("fwd"):
# out = layers(x2)
# print(prof.key_averages(group_by_input_shape=True).table(sort_by="cpu_memory_usage", row_limit=10))
# -
# ## ReversibleTransformer
#export
class ReversibleEncoder(Module):
"Stack of ReversibleBlocks"
def __init__(self,
d_model:int,
n_layers:int=6,
n_heads:int = 8,
max_seq_len:int = 512,
ff_chunks:int = 1,
causal:bool = False,
attn_dropout:float = 0.,
post_attn_dropout:float = None,
attn_bias:bool=False,
ff_dropout:float = 0.,
d_ff:int = None,
prenorm:bool=True,
final_norm:Module=None,
rev_thres:int=0):
# store_attr()
blocks = []
norm_wrapper = PreNorm if prenorm else PostNorm
for ind in range(n_layers):
attn = Attention(d_model, n_heads, causal=causal, dropout=attn_dropout, out_dropout=post_attn_dropout, bias=attn_bias)
ff = ChunkedFeedForward(d_model, d_ff, chunks=ff_chunks, dropout=ff_dropout, dim=1)
f = norm_wrapper(d_model, attn)
g = norm_wrapper(d_model, ff)
blocks.append(nn.ModuleList([f, g]))
self.norm = final_norm(d_model) if exists(final_norm) else None
# send_signal is not implemented for now
self.layers = ReversibleSequence(nn.ModuleList(blocks), rev_thres=rev_thres, send_signal=False)
def forward(self, x, **kwargs):
x = torch.cat([x, x], dim = -1)
arg_route = (False, False)
# pdb.set_trace()
x = self.layers(x, arg_route = arg_route, **kwargs)
x = torch.stack(x.chunk(2, dim=-1)).mean(dim=0)
if exists(self.norm): x = self.norm(x)
return x
x = torch.randn(bs, sl, d)
m = ReversibleEncoder(d)
out = m(x)
assert (out.size() == (bs, sl, d))
out.shape
#export
class ReversibleDecoder(Module):
"Stack of ReversibleBlocks. Uses AdditiveAttention."
def __init__(self,
d_model,
n_layers = 6,
heads = 8,
max_seq_len = 512,
d_head = None,
bucket_size = 64,
n_hashes = 8,
ff_chunks = 100,
attn_chunks = None, # ??
attn_dropout = 0.,
post_attn_dropout = None,
attn_bias:bool=False,
ff_dropout = 0.,
d_ff = None,
prenorm=True,
final_norm:Module=None,
rev_thres = 0,
):
store_attr('d_model,n_layers')
get_attn = lambda: AdditiveAttention(d_model, heads, causal=True, dropout=attn_dropout, out_dropout=post_attn_dropout, bias=attn_bias)
get_ff = lambda: ChunkedFeedForward(d_model, d_ff, chunks=ff_chunks, dropout=ff_dropout, dim=1)
norm_wrapper = PreNorm if prenorm else PostNorm
blocks = []
for ind in range(n_layers):
f = norm_wrapper(d_model, get_attn())
g = norm_wrapper(d_model, get_ff())
blocks.append(nn.ModuleList([f, g]))
self.norm = final_norm(d_model) if exists(final_norm) else None
# send_signal is not implemented for now
self.layers = ReversibleSequence(nn.ModuleList(blocks), rev_thres=rev_thres, send_signal=False)
def forward(self, x, **kwargs):
x = torch.cat([x, x], dim = -1)
arg_route = (True, False)
# pdb.set_trace()
x = self.layers(x, arg_route = arg_route, **kwargs)
x = torch.stack(x.chunk(2, dim=-1)).mean(dim=0)
if exists(self.norm): x = self.norm(x)
return x
x = torch.randn(bs, sl, d)
m = ReversibleDecoder(d)
out = m(x)
assert (out.size() == (bs, sl, d))
out.shape
#export
class ReversibleLM(Module, LMMixin):
"""
Reversible Transformer for language modelling
Parameters:
* vocab_sz: int
* d_model: int - inner dimension of the model
* n_layers: int (default: 6)
* n_heads: int (default: 8)
* d_ff: int - inner dimension of the pointwise FeedForward net, if None defaults to 4*d_model
* attn_dropout: float - attention dropout
* ff_dropout: float - feed-forward dropout
* emb_dropout: float - embedding dropout
* causal: bool (default: True) - if True does causal masking automatically
* max_seq_len: int (default: 512)
    * tie_weights: bool - if True target embedding weights are used for the output projection
    * prenorm: bool - whether to use PreNorm or PostNorm
    * attn_bias: bool - if True projection layers in attention modules will have bias
    * pad_idx: int - padding token id, required for autogeneration of padding mask
    * pos_enc: str from {'absolute', 'fixed', 'axial'} - type of positional encoding to use
    * axial_shape: tuple - required if 'axial' positional encoding is used, should be factors of
max_seq_len
* axial_emb_dims: tuple - [optional] axial embedding components, should sum to d_model
* rev_thres: int - if (seq_len < rev_thres) applies irreversible blocks
Inputs:
* x - input ids, shape [bs, sl]
* mask - optional boolean mask, shape [bs, sl]
Returns:
* logits - target token logits, shape [bs, sl, vocab_sz]
"""
def __init__(self,
vocab_sz:int,
d_model:int,
n_layers:int=6,
n_heads:int=8,
d_ff:int=None,
attn_dropout:float=0.1,
ff_dropout:float=0.1,
emb_dropout:float=0.1,
tie_weights:bool=True,
causal:bool=True,
pos_enc:str='absolute',
max_seq_len:int=512,
axial_shape=None,
axial_emb_dims=None,
pad_idx:int=None,
prenorm:bool=False,
attn_bias:bool=False,
rev_thres:int=0):
store_attr()
self.emb = TransformerEmbedding(vocab_sz, d_model, max_seq_len, dropout=emb_dropout,
pos_enc=pos_enc, axial_shape=axial_shape,
axial_emb_dims=axial_emb_dims)
self.encoder = ReversibleEncoder(d_model, n_layers, n_heads, causal=causal, d_ff=d_ff,
attn_dropout=attn_dropout, ff_dropout=ff_dropout,
prenorm=prenorm, attn_bias=attn_bias,
final_norm=nn.LayerNorm, rev_thres=rev_thres)
self.proj = nn.Linear(d_model, vocab_sz)
if tie_weights: self.proj.weight = self.emb.emb.weight
def forward(self, x, mask=None):
x = self.emb(x)
x = self.encoder(x, mask=mask)
return self.proj(x)
bs = 4
sl = 128
d = 64
vocab_sz = 256
x = torch.randint(vocab_sz, (bs, sl))
model = ReversibleLM(vocab_sz, d, n_layers=2, causal=False)
out = model(x)
assert (out.size() == (bs, sl, vocab_sz))
out.shape
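# The weight tying used above (`self.proj.weight = self.emb.emb.weight`) makes
# the output projection share storage with the token embedding, so updating one
# updates the other. A minimal check of the pattern with bare modules (sketch):
import torch.nn as nn
emb = nn.Embedding(100, 16)
proj = nn.Linear(16, 100)
proj.weight = emb.weight   # tie: both now point at the same Parameter
assert proj.weight.data_ptr() == emb.weight.data_ptr()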
#export
#TODO test weight tying
# Note on weight tying: it's done like here in fastai AWD_LSTM model
# Lucidrains does it with custom MatrixMultiply module https://github.com/lucidrains/reformer-pytorch/blob/master/reformer_pytorch/reformer_pytorch.py#L106
#TODO: update docstrings
class ReversibleTransformer(Module):
"""
    Reversible Transformer Encoder-Decoder model
Parameters:
* enc_vocab_sz: int - source vocab size
* dec_vocab_sz: int - target vocab size
* d_model: int - inner dimension of the model
* n_enc_layers: int (default: 6)
* n_dec_layers: int (default: 6)
* heads: int (default: 8)
* d_ff: int - inner dimension of the pointwise FeedForward net, if None defaults to 4*d_model
* attn_dropout: float - attention dropout
* ff_dropout: float - feed-forward dropout
* emb_dropout: float - embedding dropout
* max_seq_len: int (default: 512)
* prenorm: bool - whether to use PreNorm or PostNorm
* attn_bias: bool - whether to allow biases in attention projection layers
* pad_idx: int - padding token id, if pad_idx is provided, and no mask/context_mask are
passed to forward method will be used to generate padding masks
    * tie_weights: bool - if True target embedding weights are used for the output projection
* shared_emb: bool - if True encoder and decoder will use shared embedding layer
* pos_enc: str from {'absolute', 'fixed', 'axial'} - type of positional encoding to use
    * axial_shape: tuple - required if 'axial' positional encoding is used, should be factors of
max_seq_len
* axial_emb_dims: tuple - [optional] axial embedding components, should sum to d_model
Inputs:
* src - source input ids, shape [bs, src_sl]
* tgt - target input ids, shape [bs, tgt_sl]
* src_mask - optional boolean source mask, shape [bs, src_sl]
* tgt_mask - optional boolean target mask, shape [bs, tgt_sl]
Returns:
* logits - target token logits, shape [bs, tgt_sl, tgt_vocab_sz]
"""
def __init__(self,
enc_vocab_sz,
dec_vocab_sz,
d_model,
n_layers:int=6,
n_enc_layers=None,
n_dec_layers=None,
heads=8,
d_ff=None,
pad_idx=None,
tie_weights=True,
shared_emb = False,
attn_dropout=0.1,
ff_dropout=0.1,
emb_dropout=0.1,
prenorm=False,
attn_bias=False,
comb_attn=False,
pos_enc='absolute',
max_seq_len=512,
axial_shape=None,
axial_emb_dims=None):
store_attr()
n_enc_layers = ifnone(n_enc_layers, n_layers)
n_dec_layers = ifnone(n_dec_layers, n_layers)
self.enc_emb = TransformerEmbedding(enc_vocab_sz, d_model, max_seq_len, dropout=emb_dropout, pos_enc=pos_enc,
axial_shape=axial_shape, axial_emb_dims=axial_emb_dims)
if shared_emb:
            assert (enc_vocab_sz == dec_vocab_sz), "Encoder and decoder vocab sizes don't match"
            self.dec_emb = self.enc_emb
else:
self.dec_emb = TransformerEmbedding(dec_vocab_sz, d_model, max_seq_len, dropout=emb_dropout, pos_enc=pos_enc,
axial_shape=axial_shape, axial_emb_dims=axial_emb_dims)
self.encoder = ReversibleEncoder(d_model, n_enc_layers, heads, d_ff=d_ff, attn_dropout=attn_dropout, ff_dropout=ff_dropout,
prenorm=prenorm, attn_bias=attn_bias, final_norm=nn.LayerNorm, causal=False)
self.decoder = ReversibleDecoder(d_model, n_dec_layers, heads, d_ff=d_ff, attn_dropout=attn_dropout, ff_dropout=ff_dropout,
prenorm=prenorm, attn_bias=attn_bias, final_norm=nn.LayerNorm)
self.proj = nn.Linear(d_model, dec_vocab_sz)
if tie_weights: self.proj.weight = self.dec_emb.emb.weight
def forward(self, src, tgt, src_mask=None, tgt_mask=None):
src_mask = default(src_mask, self.get_padding_mask(src))
tgt_mask = default(tgt_mask, self.get_padding_mask(tgt))
enc = self.encoder(self.enc_emb(src), mask=src_mask)
out = self.decoder(self.dec_emb(tgt), context=enc, mask=tgt_mask, context_mask=src_mask)
return self.proj(out)
def get_padding_mask(self, x):
if self.pad_idx is None: return None
return (x != self.pad_idx)
bs = 4
src_sl = 70
tgt_sl = 80
d = 64
src_vocab_sz = 256
tgt_vocab_sz = 256
src = torch.randint(src_vocab_sz, (bs, src_sl))
tgt = torch.randint(tgt_vocab_sz, (bs, tgt_sl))
model = ReversibleTransformer(src_vocab_sz, tgt_vocab_sz, d, n_enc_layers=2, n_dec_layers=2)
out = model(src, tgt)
assert (out.size() == (bs, tgt_sl, tgt_vocab_sz))
out.shape
# ## Transformer with LSH attention
#export
class LSHEncoderBlock(Module):
"Encoder block using ReformerAttention"
def __init__(self,
d_model:int,
n_heads:int = 8,
d_ff:int = None,
attn_dropout:float = 0.1,
ff_dropout:float = 0.1,
causal:bool = False,
attn_bias:bool = False,
prenorm:bool=False,
use_lsh:bool=True,
n_hashes:int=8,
bucket_size:int=64,
seed:int=None):
        store_attr('attn_dropout') # maybe make attn post-dropout a separate argument
if prenorm:
self.attn = Residual(PreNorm(d_model, ReformerAttentionV2(d_model, n_heads=n_heads, causal=causal,
dropout=attn_dropout, bias=attn_bias, use_lsh=use_lsh,
n_hashes=n_hashes, bucket_size=bucket_size,
seed=seed)))
self.ff = Residual(PreNorm(d_model, FeedForward(d_model, d_ff=d_ff, dropout=ff_dropout)))
else:
self.attn = PostNorm(d_model, Residual(ReformerAttentionV2(d_model, n_heads=n_heads, causal=causal,
dropout=attn_dropout, bias=attn_bias, use_lsh=use_lsh,
n_hashes=n_hashes, bucket_size=bucket_size,
seed=seed)))
self.ff = PostNorm(d_model, Residual(FeedForward(d_model, d_ff=d_ff, dropout=ff_dropout)))
def forward(self, x, mask=None):
out = self.attn(x, mask=mask)
return self.ff(out)
bs = 4
sl = 128
d = 64
x = torch.randn(bs, sl, d)
m = LSHEncoderBlock(d)
out = m(x)
assert (out.size() == (bs, sl, d))
out.shape
m = LSHEncoderBlock(d, use_lsh=False)
out = m(x)
assert (out.size() == (bs, sl, d))
#export
class LSHEncoder(Module):
"""Stack of TransformerEncoderBlocks"""
def __init__(self,
d_model,
n_layers=6,
n_heads=8,
d_ff=None,
ff_dropout=0.1,
attn_dropout=0.1,
attn_bias=False,
causal=False,
prenorm=False,
use_lsh:bool=True,
final_norm=None,
n_hashes:int=8,
bucket_size:int=64,
seed:int=None):
store_attr('d_model')
self.layers = nn.ModuleList([])
for _ in range(n_layers):
self.layers.append(LSHEncoderBlock(d_model, n_heads, causal=causal,
d_ff=d_ff, attn_dropout=attn_dropout, ff_dropout=ff_dropout,
prenorm=prenorm, attn_bias=attn_bias, use_lsh=use_lsh,
n_hashes=n_hashes, bucket_size=bucket_size,
seed=seed))
self.norm = None if final_norm is None else final_norm(d_model)
def forward(self, x, mask=None):
for layer in self.layers: x = layer(x, mask=mask)
if self.norm is not None: x = self.norm(x)
return x
bs = 4
sl = 128
d = 64
x = torch.randn(bs, sl, d)
m = LSHEncoder(d, n_layers=2)
out = m(x)
assert (out.size() == (bs, sl, d))
out.shape
m = LSHEncoder(d, n_layers=2, n_heads=4, use_lsh=False)
out = m(x)
assert (out.size() == (bs, sl, d))
#export
class LSHLM(Module, LMMixin):
"""
Transformer for language modelling with LSH attention
Parameters:
* vocab_sz: int
* d_model: int - inner dimension of the model
* n_layers: int (default: 6)
    * n_heads: int (default: 8)
* d_ff: int - inner dimension of the pointwise FeedForward net, if None defaults to 4*d_model
* attn_dropout: float - attention dropout
* ff_dropout: float - feed-forward dropout
* emb_dropout: float - embedding dropout
* causal: bool (default: True) - if True does causal masking automatically
* max_seq_len: int (default: 512)
    * tie_weights: bool - if True target embedding weights are used for the output projection
    * prenorm: bool - whether to use PreNorm or PostNorm
    * attn_bias: bool - whether to allow biases in attention projection layers
    * pad_idx: int - padding token id, required for autogeneration of padding mask
    * pos_enc: str from {'absolute', 'fixed', 'axial'} - type of positional encoding to use
    * axial_shape: tuple - required if 'axial' positional encoding is used, should be factors of
        max_seq_len
    * axial_emb_dims: tuple - [optional] axial embedding components, should sum to d_model
    * use_lsh: bool - parameter to switch between LSH and full attention
* n_hashes: int - number of hashing rounds for LSH
* bucket_size: int - input sequence length should be divisible by 2*bucket_size
* seed: int - for LSHAttention module
Inputs:
* x - input ids, shape [bs, sl]
* mask - optional boolean mask, shape [bs, sl]
Returns:
* logits - target token logits, shape [bs, sl, vocab_sz]
"""
def __init__(self,
vocab_sz:int,
d_model:int,
n_layers:int=6,
n_heads:int=8,
d_ff:int=None,
attn_dropout:float=0.1,
ff_dropout:float=0.1,
emb_dropout:float=0.1,
tie_weights:bool=True,
causal:bool=True,
pos_enc:str='absolute',
max_seq_len:int=512,
axial_shape:tuple=None,
axial_emb_dims:tuple=None,
pad_idx:int=None,
prenorm:bool=False,
attn_bias:bool=False,
use_lsh:bool=True,
n_hashes:int=8,
bucket_size:int=64,
seed:int=None):
store_attr()
self._use_lsh = use_lsh
self._n_hashes = n_hashes
self.emb = TransformerEmbedding(vocab_sz, d_model, max_seq_len, dropout=emb_dropout,
pos_enc=pos_enc, axial_shape=axial_shape,
axial_emb_dims=axial_emb_dims)
final_norm = nn.LayerNorm if prenorm else None
self.encoder = LSHEncoder(d_model, n_layers, n_heads, causal=causal, d_ff=d_ff,
attn_dropout=attn_dropout, ff_dropout=ff_dropout,
prenorm=prenorm, attn_bias=attn_bias, use_lsh=use_lsh,
final_norm=final_norm, n_hashes=n_hashes, bucket_size=bucket_size,
seed=seed)
self.proj = nn.Linear(d_model, vocab_sz)
if tie_weights: self.proj.weight = self.emb.emb.weight
def forward(self, x, mask=None):
x = self.emb(x)
x = self.encoder(x, mask=mask)
return self.proj(x)
@property
def use_lsh(self):
return self._use_lsh
@use_lsh.setter
def use_lsh(self, val):
self._use_lsh = val
for c in self.children():
for m in c.modules():
if hasattr(m, 'use_lsh'): m.use_lsh=val
@property
def n_hashes(self):
return self._n_hashes
@n_hashes.setter
def n_hashes(self, val):
self._n_hashes = val
for c in self.children():
for m in c.modules():
if hasattr(m, 'n_hashes'): m.n_hashes=val
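# As noted in the docstring, LSH attention expects the input sequence length to
# be divisible by 2*bucket_size; a common workaround is padding the sequence up
# to the next multiple. A sketch with a hypothetical helper (not part of the
# library):
import math
def pad_to_multiple(sl, bucket_size=64):
    "Smallest multiple of 2*bucket_size that is >= sl"
    m = 2 * bucket_size
    return math.ceil(sl / m) * m
assert pad_to_multiple(100) == 128
assert pad_to_multiple(128) == 128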
bs = 4
sl = 128
d = 64
vocab_sz = 256
x = torch.randint(vocab_sz, (bs, sl))
model = LSHLM(vocab_sz, d, n_layers=2, causal=False)
out = model(x)
assert (out.size() == (bs, sl, vocab_sz))
out.shape
#slow
model.use_lsh = True
out = model(x)
assert (out.size() == (bs, sl, vocab_sz))
# %timeit model(x)
#slow
model.use_lsh = False
out = model(x)
assert (out.size() == (bs, sl, vocab_sz))
# %timeit model(x)
#hide
bs = 4
sl = 100
d = 64
vocab_sz = 256
x = torch.randint(vocab_sz, (bs, sl))
model = LSHLM(vocab_sz, d, n_layers=2, causal=False, use_lsh=False)
out = model(x)
assert (out.size() == (bs, sl, vocab_sz))
# ## Reformer
#export
class ReformerEncoder(Module):
"Stack of ReversibleBlocks"
def __init__(self,
d_model:int,
n_layers:int=6,
n_heads:int = 8,
max_seq_len:int = 512,
ff_chunks:int = 1,
causal:bool = False,
attn_dropout:float = 0.,
post_attn_dropout:float = None,
attn_bias:bool=False,
ff_dropout:float = 0.,
d_ff:int = None,
prenorm:bool=True,
final_norm:Module=None,
rev_thres:int=0,
use_lsh:bool=True,
n_hashes:int=8,
bucket_size:int=64,
seed:int=None):
# store_attr()
blocks = []
norm_wrapper = PreNorm if prenorm else PostNorm
for ind in range(n_layers):
attn = ReformerAttentionV2(d_model, n_heads=n_heads, causal=causal, dropout=attn_dropout,
bias=attn_bias, use_lsh=use_lsh, n_hashes=n_hashes, bucket_size=bucket_size,
seed=seed)
ff = ChunkedFeedForward(d_model, d_ff, chunks=ff_chunks, dropout=ff_dropout, dim=1)
f = norm_wrapper(d_model, attn)
g = norm_wrapper(d_model, ff)
blocks.append(nn.ModuleList([f, g]))
self.norm = final_norm(d_model) if exists(final_norm) else None
self.layers = ReversibleSequence(nn.ModuleList(blocks), rev_thres=rev_thres, send_signal=True)
def forward(self, x, **kwargs):
x = torch.cat([x, x], dim = -1)
arg_route = (True, False)
# pdb.set_trace()
x = self.layers(x, arg_route = arg_route, **kwargs)
x = torch.stack(x.chunk(2, dim=-1)).mean(dim=0)
if exists(self.norm): x = self.norm(x)
return x
bs = 4
sl = 128
d = 64
x = torch.randn(bs, sl, d)
m = ReformerEncoder(d, n_layers=2)
out = m(x)
assert (out.size() == (bs, sl, d))
out.shape
#export
class ReformerLM(Module, LMMixin):
"""
    Reformer for language modelling. Uses LSH or full shared-QK attention
Parameters:
* vocab_sz: int
* d_model: int - inner dimension of the model
* n_layers: int (default: 6)
    * n_heads: int (default: 8)
* d_ff: int - inner dimension of the pointwise FeedForward net, if None defaults to 4*d_model
* attn_dropout: float - attention dropout
* ff_dropout: float - feed-forward dropout
* emb_dropout: float - embedding dropout
* causal: bool (default: True) - if True does causal masking automatically
* max_seq_len: int (default: 512)
    * tie_weights: bool - if True target embedding weights are used for the output projection
    * prenorm: bool - whether to use PreNorm or PostNorm
    * attn_bias: bool - whether to allow biases in attention projection layers
    * pad_idx: int - padding token id, required for autogeneration of padding mask
    * pos_enc: str from {'absolute', 'fixed', 'axial'} - type of positional encoding to use
    * axial_shape: tuple - required if 'axial' positional encoding is used, should be factors of
        max_seq_len
    * axial_emb_dims: tuple - [optional] axial embedding components, should sum to d_model
    * rev_thres: int - if (seq_len < rev_thres) applies irreversible blocks
    * use_lsh: bool - parameter to switch between LSH and full attention
* n_hashes: int - number of hashing rounds for LSH
* bucket_size: int - input sequence length should be divisible by 2*bucket_size
* seed: int - for LSHAttention module
Inputs:
* x - input ids, shape [bs, sl]
* mask - optional boolean mask, shape [bs, sl]
Returns:
* logits - target token logits, shape [bs, sl, vocab_sz]
"""
def __init__(self,
vocab_sz:int,
d_model:int,
n_layers:int=6,
n_heads:int=8,
d_ff:int=None,
ff_chunks:int=1,
attn_dropout:float=0.1,
ff_dropout:float=0.1,
emb_dropout:float=0.1,
tie_weights:bool=True,
causal:bool=True,
pos_enc:str='absolute',
max_seq_len:int=512,
axial_shape:tuple=None,
axial_emb_dims:tuple=None,
pad_idx:int=None,
prenorm:bool=False,
attn_bias:bool=False,
use_lsh:bool=True,
n_hashes:int=8,
bucket_size:int=64,
rev_thres:int=0,
seed:int=None):
store_attr()
self._use_lsh = use_lsh
self._n_hashes = n_hashes
self.emb = TransformerEmbedding(vocab_sz, d_model, max_seq_len, dropout=emb_dropout,
pos_enc=pos_enc, axial_shape=axial_shape,
axial_emb_dims=axial_emb_dims)
final_norm = nn.LayerNorm if prenorm else None
self.encoder = ReformerEncoder(d_model, n_layers, n_heads, causal=causal, d_ff=d_ff,
attn_dropout=attn_dropout, ff_dropout=ff_dropout,
prenorm=prenorm, attn_bias=attn_bias, use_lsh=use_lsh,
final_norm=final_norm, n_hashes=n_hashes, bucket_size=bucket_size,
ff_chunks=ff_chunks, rev_thres=rev_thres, seed=seed)
self.proj = nn.Linear(d_model, vocab_sz)
if tie_weights: self.proj.weight = self.emb.emb.weight
def forward(self, x, mask=None):
x = self.emb(x)
x = self.encoder(x, mask=mask)
return self.proj(x)
@property
def use_lsh(self):
return self._use_lsh
@use_lsh.setter
def use_lsh(self, val):
self._use_lsh = val
for c in self.children():
for m in c.modules():
if hasattr(m, 'use_lsh'): m.use_lsh=val
@property
def n_hashes(self):
return self._n_hashes
@n_hashes.setter
def n_hashes(self, val):
self._n_hashes = val
for c in self.children():
for m in c.modules():
if hasattr(m, 'n_hashes'): m.n_hashes=val
#hide
bs = 4
sl = 100
d = 64
vocab_sz = 256
x = torch.randint(vocab_sz, (bs, sl))
model = ReformerLM(vocab_sz, d, n_layers=2, causal=False, use_lsh=False)
out = model(x)
assert (out.size() == (bs, sl, vocab_sz))
bs = 4
sl = 128
d = 64
vocab_sz = 256
x = torch.randint(vocab_sz, (bs, sl))
model = ReformerLM(vocab_sz, d, n_layers=2, causal=False)
out = model(x)
assert (out.size() == (bs, sl, vocab_sz))
out.shape
# Check cached buckets:
#hide_input
for m in model.modules():
if isinstance(m, LSHAttention):
print(m._cache)
for v in m._cache.values():
print(v.shape)
loss = out.sum()
loss.backward()
# `LSHAttention` execution time depends on the number of hashing rounds
#slow
print(f'Number of hashing rounds {model._n_hashes}')
# %timeit model(x)
#slow
model.n_hashes = 1
print(f'Number of hashing rounds {model.n_hashes}')
# %timeit model(x)
#export
def reformer_lm_splits(model):
"Splits ReformerLM `model` into groups for differential learning rates."
groups = L([model.emb] + [l for l in model.encoder.layers.blocks] + [model.proj])
return groups.map(params)
assert len(reformer_lm_splits(model)) == 2+2
# +
#skip
#hide
# class ReformerDecoder(nn.Module):
# def __init__(self,
# d_model,
# n_layers = 6,
# heads = 8,
# max_seq_len = 512,
# d_head = None,
# bucket_size = 64,
# n_hashes = 8,
# ff_chunks = 100,
# attn_chunks = None, # ??
# causal = False,
# weight_tie = False, # weight sharing option do we need to keep this?
# attn_dropout = 0.,
# post_attn_dropout = 0.,
# ff_dropout = 0.,
# d_ff = None,
# layer_dropout = 0.,
# prenorm=True,
# rev_thres = 0,
# ):
# super().__init__()
# self.d_model = d_model
# self.n_layers = n_layers
# # use regular attention for now
# get_attn = lambda: DecoderAttention(d_model, heads, causal=causal, dropout=attn_dropout)
# get_ff = lambda: ChunkedFeedForward(d_model, d_ff, chunks=ff_chunks, dropout=ff_dropout, dim=1)
# norm_wrapper = PreNorm if prenorm else PostNorm
# blocks = []
# for ind in range(n_layers):
# layer_num = ind + 1
# f = norm_wrapper(d_model, get_attn())
# g = norm_wrapper(d_model, get_ff())
# blocks.append(nn.ModuleList([f, g]))
# # send_signal is not implemented for now
# self.layers = ReversibleSequence(nn.ModuleList(blocks), layer_dropout=layer_dropout, rev_thres=rev_thres, send_signal=False)
# def forward(self, x, **kwargs):
# x = torch.cat([x, x], dim = -1)
# arg_route = (True, False)
# # pdb.set_trace()
# x = self.layers(x, arg_route = arg_route, **kwargs)
# return torch.stack(x.chunk(2, dim=-1)).mean(dim=0)
# +
#skip
#hide
# class ReformerEncDec(nn.Module):
# """
# Basic Transformer Encoder-Decoder model
# Parameters:
# * enc_vocab_sz: int - source vocab size
# * dec_vocab_sz: int - target vocab size
# * d_model: int - inner dimension of the model
# * n_enc_layers: int (default: 6)
# * n_dec_layers: int (default: 6)
# * heads: int (default: 8)
# * d_ff: int - inner dimension of the FeedForward net, if None defaults to 4*d_model
# * attn_dropout: float - attention dropout
# * ff_dropout: float - feed-forward dropout
# * emb_dropout: float - embedding dropout
# * max_seq_len: int (default: 512)
# * prenorm: bool - whether to use PreNorm or PostNorm
# * attn_bias: bool - whether to allow biases in attention projection layers
# * pad_idx: int - padding token id, if pad_idx is provided, and no mask/context_mask are passed to
# forward method will be used to generate padding masks
# * tie_weights: bool - if True target embedding weights are used for computation output projection
# * shared_emb: bool - if True encoder and decoder will use shared embedding layer
# * pos_enc: str from {'absolute', 'fixed', 'axial'} - type of positional encoding to use
# * axial_shape: tuple - required if 'axial' positional encoding are used, should be factors of
# max_seq_len
# * axial_emb_dims: tuple - [optional] axial embedding components, should sum to d_model
# Inputs:
# * src - source input ids, shape [bs, src_sl]
# * tgt - target input ids, shape [bs, tgt_sl]
# * src_mask - optional boolean source mask, shape [bs, src_sl]
# * tgt_mask - optional boolean target mask, shape [bs, tgt_sl]
# Returns:
# * logits - target token logits, shape [bs, tgt_sl, tgt_vocab_sz]
# """
# def __init__(self,
# enc_vocab_sz,
# dec_vocab_sz,
# d_model,
# n_layers=6,
# heads=8,
# max_seq_len=512,
# pad_idx=None,
# tie_weights=True,
# emb_dropout=0.1,
# attn_dropout=0.1,
# ff_dropout=0.1,
# pos_enc='absolute',
# d_ff=None,
# prenorm=False,
# axial_shape=None,
# axial_emb_dims=None,
# comb_attn=False,
# rev_thres=0):
# super().__init__()
# self.max_seq_len = max_seq_len
# self.n_layers = n_layers
# self.pad_idx = pad_idx
# self.enc_emb = TransformerEmbedding(enc_vocab_sz, d_model, max_seq_len, dropout=emb_dropout,
# axial_shape=axial_shape, axial_emb_dims=axial_emb_dims)
# self.dec_emb = TransformerEmbedding(dec_vocab_sz, d_model, max_seq_len, dropout=emb_dropout,
# axial_shape=axial_shape, axial_emb_dims=axial_emb_dims)
# self.encoder = ReformerEncoder(d_model, n_layers, heads, d_ff=d_ff, attn_dropout=attn_dropout, ff_dropout=ff_dropout, prenorm=prenorm, rev_thres=rev_thres)
# self.decoder = ReformerDecoder(d_model, n_layers, heads, d_ff=d_ff, attn_dropout=attn_dropout, ff_dropout=ff_dropout, prenorm=prenorm, rev_thres=rev_thres)
# self.proj = nn.Linear(d_model, dec_vocab_sz)
# if tie_weights: self.proj.weight = self.dec_emb.emb.weight
# def forward(self, src, tgt, src_mask = None, tgt_mask = None):
# src_mask = default(src_mask, self.get_padding_mask(src))
# tgt_mask = default(tgt_mask, self.get_padding_mask(tgt))
# enc = self.encoder(self.enc_emb(src), mask = src_mask)
# out = self.decoder(self.dec_emb(tgt), context=enc, mask=tgt_mask, context_mask=src_mask)
# return self.proj(out)
# def get_padding_mask(self, x):
# if self.pad_idx is None: return None
# return (x != self.pad_idx)
# #TODO add beam search and refactor
# @torch.no_grad()
# def generate(self, src,
# src_mask=None,
# max_len=50,
# temperature=1.,
# method = 'top_k',
# top_k = 20,
# top_p = 0.9,
# early_stopping=False,
# bos_idx=2, # TODO change to match future usecases
# eos_idx=None):
# self.to(src.device) #TODO test for potential problems
# self.eval()
# thresh = top_k if method=='top_k' else top_p
# sampler = _sampler[method]
# src = expand_dim1(src)
# bs = src.size(0)
# inp = src.new_full((bs, 1), bos_idx) #start with bos tokens
# pdb.set_trace()
# src_mask = default(src_mask, self.get_padding_mask(src))
# enc = self.encoder(self.enc_emb(src), mask = src_mask)
# out = inp
# for _ in range(max_len):
# x = out[:, -self.max_seq_len:]
# dec = self.decoder(self.dec_emb(out), context=enc)
# logits = self.proj(dec)[:, -1, :]
# if method == 'greedy':
# sample = sampler(logits)
# else:
# filtered_logits = sampler(logits)
# probs = F.softmax(filtered_logits / temperature, dim=-1)
# sample = torch.multinomial(probs, 1)
# out = torch.cat((out, sample), dim=-1)
# if (early_stopping and
# ((sample == eos_idx).all() or
# (sample == self.pad_idx).all())):
# break
# #TODO mb output cleanup
# return out
# def store_attention(self, layer_ids=None, store_encoder=False, store_decoder=True):
# #defaults to storing attention for all layers
# layer_ids = default(layer_ids, list(range(self.n_layers)))
# for module in self.children():
# if issubclass(type(module), TransformerEncoder) and store_encoder:
# for i, l in enumerate(module.layers):
# if i in layer_ids:
# for m in l.modules():
# if issubclass(type(m), (Attention)):
# m.store_attention = True
# elif issubclass(type(module), TransformerDecoder) and store_decoder:
# for i, l in enumerate(module.layers):
# if i in layer_ids:
# for m in l.modules():
# if issubclass(type(m), (Attention)):
# m.store_attention = True
# #TODO mb separate encoder and decoder attention
# def get_attention_matrix(self, get_encoder=False, get_decoder=True):
# res = []
# if get_encoder:
# for m in self.encoder.modules():
# if issubclass(type(m), (Attention)):
# attention = getattr(m, 'attention', None)
# if attention is not None:
# res.append(attention)
# # reset stored attention
# m.attention = None
# m.store_attention = False
# if get_decoder:
# for m in self.decoder.modules():
# if issubclass(type(m), (Attention)):
# attention = getattr(m, 'attention', None)
# if attention is not None:
# res.append(attention)
# # reset stored attention
# m.attention = None
# m.store_attention = False
# return res
# -
#export
MODELS = (LSHLM, ReversibleLM, ReversibleTransformer, ReformerLM)
@patch(cls_method=True)
def from_config(cls:MODELS, config):
return cls(**config.dict())
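# The patched `from_config` simply unpacks a config object's `dict()` into the class constructor. A minimal self-contained sketch of the same pattern using a plain `classmethod` (the `Config` and `TinyModel` names here are illustrative stand-ins, not part of the library):

```python
class Config:
    """Hypothetical stand-in for the library's config object."""
    def __init__(self, **kwargs):
        self._kwargs = kwargs

    def dict(self):
        return dict(self._kwargs)

class TinyModel:
    """Toy model whose constructor takes keyword args, like the MODELS classes."""
    def __init__(self, d_model=64, n_layers=2):
        self.d_model = d_model
        self.n_layers = n_layers

    @classmethod
    def from_config(cls, config):
        # same body as the patched method above
        return cls(**config.dict())

model = TinyModel.from_config(Config(d_model=128, n_layers=4))
```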
#hide
from nbdev.export import notebook2script; notebook2script()
|
nbs/_04_reformer.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NEXUS tool: case study for the Souss-Massa basin - Butane phase out strategies
# In this notebook a case study for the Souss-Massa basin is covered using the `nexustool` and other packages. This part of the analysis considers three butane phase-out scenarios:
#
# 1. Business as usual (None): no phase out, meaning the current share of butane use continues into the future.
# 2. Late phase out (2040): assumes complete phase out of butane by 2040.
# 3. Early phase out (2030): assumes complete phase out of butane by 2030.
#
# For all the three main scenarios, we will explore three levels of PV share in the region.
#
# a. Low PV share (10): assumes the current PV share of 10% continues into the future.
# b. Medium PV share (20): assumes an increase in PV share from 10% to 20% by 2040.
# c. High PV share (50): assumes an increase in PV share from 10% to 50% by 2040.
#
# First import the package by running the following block:
import sys
sys.path.append("..") #this is to add the above folder to the package directory
import os
import nexustool
from nexustool.weap_tools import create_learning_curve, create_tech_dist
from nexustool.gis_tools import disaggregate_data
import pandas as pd
import numpy as np
from dashboard.scripts.plotting import pv_installed_capacity, energy_demand_ag, emissions_ag, costs_plot
# ## Step 1: Reading the input files
#
# The analysis in this phase depends on the outcomes of WEAP and the energy demand calculations. The two main input files are:
# 1) Cropland file: shows the spatial distribution of irrigated cropland area.
#
# 2) WEAP file: shows the energy demand results for each of the WEAP scenarios. In the following steps the naming of some of the provinces is also corrected.
#Define the path to read the cropland and builtup are data and reads it in
folder_path = os.path.join('data')
maincropland_path = os.path.join(folder_path,'cropland' ,'cropland.gz')
maincropland = pd.read_csv(maincropland_path)
weap = pd.read_csv(os.path.join('dashboard', 'data', 'Desalination', 'Climate Change', 'results.gz'))
maincropland.head()
# ## Step 2: Cleaning and reorganizing the cropland dataframe
# In this step we take the `WEAP` results and distribute the energy scenario results into the `cropland` data. This way we can correlate agricultural pumping energy demands with spatially distributed irrigated area and solar irradiation (PV potential).
cropland = disaggregate_data(weap, maincropland)
cropland.rename(columns={'sswd':'water_demand(m3)',
'pwd':'peak_water(m3)',
'pp_e':'peak_load(KW)',
'pa_e':'energy_demand(KWh)'}, inplace=True)
cropland.tail()
# ## Step 3: Define butane phase-out year and PV share
#
# In this step we introduce a new function that models the change in the technology distributions over the years. For a better understanding of this step, check the scenarios section under Butane calculations in the Energy Training material.
# +
df = cropland
butane_phaseout = 2030 #specify the end year for butane phase out
pv_level = 50 #Specify the PV level adoption
pv_year = 2040
discount_rate = 0.05
# -
if butane_phaseout:
butane_share = 1
year = butane_phaseout
else:
butane_share = 0
year = 2050
# ## Step 4: Defining technology distributions over the coming years
#
# In this step we introduce a new function that allows us to simulate the drop in PV capital cost due to the learning curve. In this analysis we assume that the PV capital cost will drop by 40% from the current (2020) level by 2040.
# +
pv_share = pv_level / 100 #convert the PV level (a percentage, e.g. 50) to a fractional share; assumed fix, since a negative share makes no sense here
om_cost = 0.01
efficiency = 1
butane_share1 = 0.2
pv_share1 = 0.1
df.start_year = 2020
df.end_year = year
df.mid_year = ((df.start_year+df.end_year)/2)
bt_rates = {df.start_year: 0, df.mid_year: butane_share*0.5, df.end_year: butane_share}
pv_rates = {df.start_year: 0, 2030: pv_share*0.5, pv_year: pv_share, 2050: pv_share*1.2}
butane_dist = create_tech_dist(df.start_year, df.end_year, butane_share1, bt_rates, 'polynomial')
pv_dist = create_tech_dist(df.start_year, pv_year, pv_share1, pv_rates, 'polynomial')
pv_rate = 0.4 #Assuming 40% drop in PV capex by 2040.
cc = 7 #current capital cost of pv in MAD/wat. source: energy experts in SM
pv_life = 15
df.start_year = 2020
df.end_year= 2050
rates = {2020: 0, 2030: pv_rate*0.3, 2040: pv_rate, 2050: pv_rate*1.3}
cc_solar_curve = create_learning_curve(df.start_year, df.end_year, cc, rates, 'polynomial')
# -
butane_dist.plot()
pv_dist.plot()
cc_solar_curve.plot()
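# `create_learning_curve` comes from `nexustool.weap_tools`; as a rough illustration of what such a polynomial cost curve can look like, here is a self-contained sketch. The function name, the fitting details and the numbers are assumptions for illustration, not the library's actual implementation:

```python
import numpy as np
import pandas as pd

def sketch_learning_curve(start, end, base_cost, rates, deg=2):
    # Fit a polynomial through (year, fractional cost drop) control points,
    # then apply the fitted drop to the base cost for every year in the range.
    years = np.array(sorted(rates), dtype=float)
    drops = np.array([rates[y] for y in sorted(rates)])
    t = years - years.min()                      # center years to keep the fit well-conditioned
    coeffs = np.polyfit(t, drops, deg)
    yr = np.arange(start, end + 1)
    drop = np.clip(np.polyval(coeffs, yr - start), 0.0, None)  # never let costs rise above base
    return pd.Series(base_cost * (1.0 - drop), index=yr)

# roughly mirrors the rates dict used above: 40% drop by 2040
curve = sketch_learning_curve(2020, 2050, 7.0, {2020: 0, 2030: 0.12, 2040: 0.4, 2050: 0.52})
```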
# +
# Introducing a new dataframe to show the change in butane share, pv share and pv capital cost (cc)
dist = pd.DataFrame({'butane_share':pd.Series(butane_dist),
'pv_share':pd.Series(pv_dist),
'pv_cc':pd.Series(cc_solar_curve)})
dist['butane_share'] = dist['butane_share'].ffill() #forward-fill: missing years keep the previous value
dist['pv_share'] = dist['pv_share'].ffill()
dist['pv_cc'] = dist['pv_cc'].ffill()
dist['grid_share']= 1-(dist['pv_share']+dist['butane_share'])
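# A useful invariant on the `dist` table is that the three shares always sum to one. A toy version with made-up share values (not taken from the scenario data) shows the check:

```python
import pandas as pd

# hypothetical shares for three years, purely for illustration
dist_toy = pd.DataFrame({'butane_share': [0.20, 0.15, 0.10],
                         'pv_share': [0.10, 0.20, 0.30]},
                        index=[2020, 2021, 2022])
# the grid picks up whatever butane and PV do not cover
dist_toy['grid_share'] = 1 - (dist_toy['pv_share'] + dist_toy['butane_share'])
# every row should sum to exactly 1
assert (abs(dist_toy.sum(axis=1) - 1) < 1e-9).all()
```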
# +
# merging the distribution dataframe with the original dataframe that includes weap outputs, the merging is based on 'Year'
souss_massa = pd.merge(df, dist, on='Year')
# -
# ## Step5: Calculating the monthly electricity demand for each technology
#
# Three main technologies are used for groundwater pumping: butane pumps, electric pumps taking electricity from the national grid, and solar pumps. Since we do not have detailed information on the exact location where each technology is being used, we assumed that in each area there is a mix of the three technologies and the total pumping demand is split between them based on the share of each technology (Grid: 70%, Butane: 20% and PV: 10%).
# In this step we calculate the monthly demand for each technology in each irrigated area.
# +
# Introducing technology characteristics: costs, emission factors, pump efficiencies, etc.
butane_req = 1/12.58 # LHV = 12.58 KWh/Kg (amount of butane (kg) required to produce 1 KWh)
butane_em = 6.67 #kgCO2/Gallon
gallon2liter = 3.78541 #l/Gallon
butane_density = 573 #kg/m3
butane_ef = butane_em / (gallon2liter/1000 * butane_density) #kgCO2/kg
butane_em_fac = butane_ef * 1000 #kgCO2/ton (computed: ~3075)
butane_em_fac = 3079.5 #override the computed value with a fixed factor, kgCO2/ton
bpump_eff = 0.2 #efficiency of butane pumps
epump_eff = 0.45 #assumption
conv_fac = 1000000000 # to convert emissions from kgCO2 to Million meteric tons of CO2 MtCO2
grid_em_fac = 0.7 #kgco2/kwh , This is based on data from IEA, Morocco 2019
grid_cost = 0.57 # MAD/KWh, Assumption and to be updated
# -
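# As a quick sanity check of the butane emission factor: converting the per-gallon figure to a per-tonne figure reproduces the hard-coded 3079.5 kgCO2/ton to within about 0.2% (the small gap presumably comes from slightly different density or heating-value assumptions behind the fixed value):

```python
butane_em = 6.67            # kg CO2 per gallon of butane, as above
gallon2liter = 3.78541      # litres per gallon
butane_density = 573.0      # kg/m3, i.e. 0.573 kg per litre
kg_per_gallon = gallon2liter / 1000 * butane_density   # ~2.169 kg of butane per gallon
em_factor = butane_em / kg_per_gallon * 1000           # kg CO2 per tonne of butane
print(round(em_factor, 1))  # ~3075.1, vs the fixed 3079.5 used above
```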
souss_massa['energy_int(KWh/m2)'] = souss_massa['energy_demand(KWh)'] / souss_massa['area_m2']
souss_massa['cap_per_area(KW/m2)'] = souss_massa['peak_load(KW)'] / souss_massa['area_m2']
souss_massa['pv_area(m2)'] = souss_massa['area_m2'] * souss_massa['pv_share']
souss_massa['pv_load(KW)'] = souss_massa['pv_area(m2)'] * souss_massa['cap_per_area(KW/m2)']
souss_massa['pv_demand(KWh)'] = souss_massa['pv_area(m2)'] * souss_massa['energy_int(KWh/m2)']
souss_massa['butane_area(m2)'] = souss_massa['area_m2'] * souss_massa['butane_share']
souss_massa['butane_demand(KWh)'] = souss_massa['butane_area(m2)'] * souss_massa['energy_int(KWh/m2)']
souss_massa['grid_area(m2)'] = souss_massa['area_m2'] * souss_massa['grid_share']
souss_massa['grid_demand(KWh)'] = souss_massa['grid_area(m2)'] * souss_massa['energy_int(KWh/m2)']
# +
souss_massa['pv_elec(KWh)'] = souss_massa['pv_demand(KWh)']/epump_eff
souss_massa['grid_elec(KWh)'] = souss_massa['grid_demand(KWh)']/epump_eff
souss_massa['butane_cons(kg)'] = (souss_massa['butane_demand(KWh)']*butane_req)/bpump_eff
souss_massa['butane_cons(tonnes)'] = souss_massa['butane_cons(kg)']/1000
# -
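# A back-of-the-envelope check of the butane line above: delivering 1 kWh of pumping energy at 20% pump efficiency requires roughly 0.4 kg of butane (LHV of 12.58 kWh/kg):

```python
butane_req = 1 / 12.58   # kg of butane per kWh of heat (LHV = 12.58 kWh/kg)
bpump_eff = 0.2          # butane pump efficiency, as above
kg_per_kwh_pumped = butane_req / bpump_eff
print(round(kg_per_kwh_pumped, 3))  # ~0.397 kg of butane per kWh of pumping
```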
# ## Step6: Calculating the costs and emissions for butane pumps and the grid
#
# After calculating the share of each technology, we move on to calculate the cost of using each one.
# For butane we calculate three types of costs: what farmers pay, the subsidy level and the total cost. The most important one is the second (the subsidy), which will be used in the next steps.
# In this step we also compute the emissions released due to the use of butane or the grid, knowing that the national grid is heavily dependent on fossil fuels.
# +
# Estimating butance costs and emissions:
souss_massa['butane_emissions(MtCO2)'] = souss_massa['butane_cons(tonnes)'] * butane_em_fac/conv_fac
souss_massa['butane_FARcost(mMAD)'] = (souss_massa['butane_cons(kg)']*(40/12))/1000000 #in million MAD, this is what farmers pay
souss_massa['butane_ACTcost(mMAD)'] = (souss_massa['butane_cons(kg)']*(120/12))/1000000 #in million MAD, this is the actual (unsubsidized) cost
souss_massa['butane_Subsidy(mMAD)'] = (souss_massa['butane_cons(kg)']*(80/12))/1000000 #in million MAD, this is the total subsidy cost
# Estimating grid costs and emissions:
souss_massa['grid_emissions(MtCO2)'] = souss_massa['grid_elec(KWh)']*grid_em_fac/conv_fac
souss_massa['grid_cost(mMAD)'] = (souss_massa['grid_elec(KWh)']*grid_cost)/1000000
souss_massa['Total_emissions(MtCO2)'] = (souss_massa['grid_emissions(MtCO2)'] + souss_massa['butane_emissions(MtCO2)'])
# -
# ## Step7: Calculate the required PV capacity for pumping (for each location in each month)
#
# Now let us move to the PV part of the calculations. We first use the solar radiation map to estimate the capacity factor at each location in each month. This will then be used to calculate the required capacity of PV in each month of the year.
# +
# Estimating the required monthly capacity of pv NOTE: 1 kWh = 3600 kJ
souss_massa['cf'] = souss_massa['srad'] / (24*60*60) #convert srad from kJ/(m2.day) to an average kW/m2 (1 kWh = 3600 kJ)
souss_massa['cap_m(MW)'] = souss_massa['pv_load(KW)'] / souss_massa['cf']/1000 #divide by 1000 to convert kW to MW
# -
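# A worked unit check for the conversion above, with hypothetical numbers (an `srad` of 18,000 kJ/(m2.day) and a 500 kW pumping load, both illustrative rather than taken from the dataset):

```python
srad = 18000.0                      # kJ/(m2.day), hypothetical solar radiation
cf = srad / (24 * 60 * 60)          # kJ/(m2.day) -> average kW/m2, since 1 kWh = 3600 kJ
peak_load_kw = 500.0                # hypothetical pumping peak load in kW
cap_mw = peak_load_kw / cf / 1000   # required PV capacity in MW, as in the cell above
print(round(cf, 4), round(cap_mw, 2))  # 0.2083 2.4
```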
# ## Step 8: Calculate the annual required PV capacity (new installation, reinvestment and total capacity)
#
# When we talk about investments we are usually interested in annual investment not monthly. Therefore we aggregate the dataframe to show the annual values for each type of inputs. The `groupby` method is used to group everything for each `demand point` and for each `Year` as shown here:
souss_massa1 = souss_massa.groupby(['Demand point','Year']).agg({'water_demand(m3)': 'sum','energy_demand(KWh)': 'sum',
'pv_elec(KWh)': 'sum', 'grid_elec(KWh)': 'sum', 'cap_m(MW)': 'max',
'butane_cons(tonnes)': 'sum', 'butane_FARcost(mMAD)': 'sum',
'butane_ACTcost(mMAD)': 'sum','butane_Subsidy(mMAD)': 'sum',
'butane_emissions(MtCO2)': 'sum','grid_emissions(MtCO2)': 'sum',
'Total_emissions(MtCO2)': 'sum','grid_cost(mMAD)': 'sum',
'pv_demand(KWh)': 'sum', 'butane_demand(KWh)': 'sum', 'grid_demand(KWh)': 'sum'})
# +
# Introducing additional attributes to the dataframe:
souss_massa1['GWdepth'] = (souss_massa.groupby(['Demand point','Year'])['wtd'].mean())
souss_massa1['srad'] = (souss_massa.groupby(['Demand point','Year'])['srad'].mean())
souss_massa1['cap_mean(MW)'] = (souss_massa.groupby(['Demand point','Year'])['cap_m(MW)'].mean())
souss_massa1['cap_max(MW)'] = (souss_massa.groupby(['Demand point','Year'])['cap_m(MW)'].max())
souss_massa1['pv_cc'] = (souss_massa.groupby(['Demand point','Year'])['pv_cc'].mean())
del souss_massa
# -
# Here we set the annual capacity of PV to be the maximum monthly capacity at each demand point.
pv_installed_cap = pd.Series(dtype=float) #initialize a pandas Series to be populated with the cumulative max of the max capacity of each demand-point group
for index, group in souss_massa1.reset_index().groupby('Demand point'): # loop through each demand point's set of data
    group_pv_cap = pd.Series(group['cap_m(MW)'].cummax().values, index=group.reset_index().set_index(['Demand point','Year']).index)
    pv_installed_cap = pd.concat([pv_installed_cap, group_pv_cap]) # compute the cummax() for the demand point and append it (Series.append was removed in pandas 2.0, so pd.concat is used)
# In this step we introduce the `new capacity`, which is the additional PV capacity required in each year compared to the previous year. We also introduce the `reinvest capacity`, which is the second investment required in certain locations after the lifetime of the panels.
#souss_massa1.reset_index(inplace=True)
souss_massa1['PV_installed_cap(MW)'] = pv_installed_cap
souss_massa1['PV_new_cap(MW)'] = souss_massa1['PV_installed_cap(MW)'] - souss_massa1['PV_installed_cap(MW)'].shift(1)
souss_massa1.reset_index(inplace=True)
souss_massa1.loc[souss_massa1['Year']==2020, 'PV_new_cap(MW)'] = 0
souss_massa1['reinv_cap(MW)'] = souss_massa1['PV_new_cap(MW)'].shift(pv_life).fillna(0)
souss_massa1.loc[souss_massa1['Year']<(2020+pv_life), 'reinv_cap(MW)'] = 0
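# The diff/shift bookkeeping above can be seen on a toy series. The installed-capacity numbers and the short 3-year lifetime here are purely illustrative:

```python
import pandas as pd

pv_life_toy = 3  # deliberately short panel lifetime, for illustration only
cap = pd.DataFrame({'Year': range(2020, 2028),
                    'installed(MW)': [0, 2, 2, 5, 5, 5, 8, 8]})
cap['new(MW)'] = cap['installed(MW)'].diff().fillna(0)          # capacity added each year
cap['reinv(MW)'] = cap['new(MW)'].shift(pv_life_toy).fillna(0)  # that capacity is replaced pv_life_toy years later
cap.loc[cap['Year'] < 2020 + pv_life_toy, 'reinv(MW)'] = 0      # nothing to replace before one full lifetime
```

# So the 2 MW installed in 2021 shows up again as reinvestment in 2024, and the 3 MW from 2023 reappears in 2026.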
# ## Step 9: Calculate PV CAPEX and OPEX
#
# After calculating the required PV capacity, it is time to calculate the PV capital cost (CAPEX), which is based on the annual capacity and the annual price of PV panels given the yearly drop due to the learning curve. The operating and maintenance cost of PV (OPEX) is simply a fraction of the annual CAPEX.
# +
# Calculating PV CAPEX and OPEX:
souss_massa1['PV_Capex(mMAD)']=(souss_massa1['PV_new_cap(MW)']+souss_massa1['reinv_cap(MW)'])*souss_massa1['pv_cc']
souss_massa1['PV_Opex(mMAD)']=(souss_massa1['PV_Capex(mMAD)']*om_cost)
# -
#Calculating the required area for PV installations:
souss_massa1['PV_area(ha)'] = souss_massa1['PV_installed_cap(MW)'] * 0.9 # roughly 1 ha per MW of PV (10 m2 per kW); a factor of 0.9 ha/MW is used here
# +
#NPV calculations:
souss_massa1['time'] = souss_massa1['Year']-2020
souss_massa1['PV_Capex_NPV(mMAD)'] = souss_massa1['PV_Capex(mMAD)']/((1+discount_rate)**(souss_massa1['time']))
souss_massa1['PV_Opex_NPV(mMAD)'] = souss_massa1['PV_Opex(mMAD)']/((1+discount_rate)**(souss_massa1['time']))
souss_massa1['butane_Subsidy_NPV(mMAD)'] = souss_massa1['butane_Subsidy(mMAD)']/((1+discount_rate)**(souss_massa1['time']))
souss_massa1['grid_cost_NPV(mMAD)'] = souss_massa1['grid_cost(mMAD)']/((1+discount_rate)**(souss_massa1['time']))
souss_massa1['PV_Total_NPV(mMAD)'] = souss_massa1['PV_Capex_NPV(mMAD)'] + souss_massa1['PV_Opex_NPV(mMAD)']
# -
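# The discounting applied above follows the standard NPV formula, cost / (1 + r)^t. For example, a hypothetical 10 mMAD spent in 2030 is worth about 6.14 mMAD in 2020 terms at a 5% discount rate:

```python
discount_rate = 0.05
cost_2030 = 10.0                 # mMAD, hypothetical future cash flow
t = 2030 - 2020                  # years from the 2020 base year
npv = cost_2030 / (1 + discount_rate) ** t
print(round(npv, 2))  # 6.14
```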
# ## Step 10: Summary dataframe
#
# Here we construct a summary dataframe that includes the final results that we would like to explore.
# +
souss_massa1_summary = souss_massa1.groupby(['Year'])[['water_demand(m3)','energy_demand(KWh)', 'cap_max(MW)', 'pv_elec(KWh)',
'grid_elec(KWh)','butane_cons(tonnes)', 'butane_FARcost(mMAD)',
'PV_new_cap(MW)','PV_installed_cap(MW)','PV_area(ha)','reinv_cap(MW)',
'butane_emissions(MtCO2)','grid_emissions(MtCO2)','Total_emissions(MtCO2)',
'butane_Subsidy(mMAD)','butane_Subsidy_NPV(mMAD)','grid_cost(mMAD)','grid_cost_NPV(mMAD)',
'PV_Capex(mMAD)','PV_Capex_NPV(mMAD)','PV_Opex(mMAD)','PV_Opex_NPV(mMAD)', 'PV_Total_NPV(mMAD)',
'pv_demand(KWh)', 'butane_demand(KWh)', 'grid_demand(KWh)']].sum()
souss_massa1_summary.reset_index(inplace=True)
# -
# ## Step 11: Save results files
#
# The results can then be saved into a defined output folder `butane_results_folder`:
# +
# Step: Saving Results
butane_results_folder = os.path.join('results', 'Desalination', 'Climate Change',
'Butane results', f'phaseout_{butane_phaseout}', f'{pv_level}_PV')
os.makedirs(butane_results_folder, exist_ok = True)
# souss_massa1.to_csv(os.path.join(output_folder, 'results.csv'), index=False)
souss_massa1_summary.to_csv(os.path.join(butane_results_folder, 'butane.gz'))
# -
# ## Visualizing results
# Using some functions imported from the visualization tool, we can plot some general results for the scenario:
# ### Installed PV capacity
pv_installed_capacity(souss_massa1_summary, {})
# ### Energy demand for agricultural pumping
energy_demand_ag(souss_massa1_summary, {})
# ### CO2 emissions from agricultural pumping
emissions_ag(souss_massa1_summary, {})
# ### Total annual costs for agricultural pumping system
costs_plot(souss_massa1_summary,'')
# # end
|
Morocco model/Butane calculations.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + jupyter={"outputs_hidden": false}
# # %matplotlib inline
import pandas as pd
# -
# ### Download the data and load it to Pandas.
#
# You can find them [here](https://drive.google.com/file/d/1NY6cmF9Shjw-dD7BD6bNmfcIVz-kQcFR/view?usp=sharing).
# + jupyter={"outputs_hidden": false}
titles = pd.read_csv('data/titles.csv')
titles.head()
# -
# ### How many movies are listed in the titles dataframe?
titles['title'].count()
# ### What are the earliest two films listed in the titles dataframe?
titles.sort_values(by=['year']).head(2)
# ### How many movies have the title "Hamlet"?
filter_hamlet = (titles['title'] == 'Hamlet')
titles[filter_hamlet]['title'].count()
# ### How many movies are titled "North by Northwest"?
filter_north = (titles['title'] == 'North by Northwest')
titles[filter_north]['title'].count()
# ### When was the first movie titled "Hamlet" made?
titles[filter_hamlet].sort_values(by='year').head(1)
# ### List all of the "Treasure Island" movies from earliest to most recent.
filter_treasureisland = (titles['title'] == 'Treasure Island')
titles[filter_treasureisland].sort_values(by='year')
# ### How many movies were made in the year 1950?
filter_1950 = (titles['year'] == 1950)
titles[filter_1950]['title'].count()
# ### How many movies were made in the year 1960?
filter_1960 = (titles['year'] == 1960)
titles[filter_1960]['title'].count()
# ### How many movies were made from 1950 through 1959?
filter_50to59 = (titles['year'] >= 1950) & (titles['year'] <= 1959)
titles[filter_50to59]['title'].count()
# ### In what years has a movie titled "Batman" been released?
filter_batman = titles['title'] == 'Batman'
titles[filter_batman]['year']
# ### How many roles were there in the movie "Inception"?
cast = pd.read_csv('data/cast.csv', sep=",")
cast.head()
filter_inception = cast['title'] == 'Inception'
cast[filter_inception]['type'].count()
# ### How many roles in the movie "Inception" are NOT ranked by an "n" value?
filter_inception = (cast['title'] == 'Inception') & (cast['n'].isnull())
cast[filter_inception]['type'].count()
# ### But how many roles in the movie "Inception" did receive an "n" value?
filter_inception = (cast['title'] == 'Inception') & (cast['n'] > 0)
cast[filter_inception]['type'].count()
# ### Display the cast of "North by Northwest" in their correct "n"-value order, ignoring roles that did not earn a numeric "n" value.
filter_north = (cast['title'] == 'North by Northwest') & (cast['n'] > 0)
cast[filter_north].sort_values(by='n')
# ### Display the entire cast, in "n"-order, of the 1972 film "Sleuth".
filter_sleuth = (cast['title'] == 'Sleuth') & (cast['year'] == 1972) & (cast['n'] > 0)
cast[filter_sleuth].sort_values(by='n')
# ### Now display the entire cast, in "n"-order, of the 2007 version of "Sleuth".
filter_sleuth = (cast['title'] == 'Sleuth') & (cast['year'] == 2007) & (cast['n'] > 0)
cast[filter_sleuth].sort_values(by='n')
# ### How many roles were credited in the silent 1921 version of Hamlet?
filter_hamlet = (cast['title'] == 'Hamlet') & (cast['year'] == 1921) & (cast['n'] > 0)
cast[filter_hamlet]['type'].count()
# ### How many roles were credited in Branagh’s 1996 Hamlet?
filter_hamlet = (cast['title'] == 'Hamlet') & (cast['year'] == 1996) & (cast['n'] > 0)
cast[filter_hamlet]['type'].count()
# ### How many "Hamlet" roles have been listed in all film credits through history?
filter_hamlet = (cast['title'] == 'Hamlet') & (cast['n'] > 0)
cast[filter_hamlet]['type'].count()
# ### How many people have played an "Ophelia"?
filter_ophelia = cast['character'] == 'Ophelia'
cast[filter_ophelia].count()
# ### How many people have played a role called "The Dude"?
filter_dude = cast['character'] == 'The Dude'
cast[filter_dude].count()
# ### How many people have played a role called "The Stranger"?
filter_stranger = cast['character'] == 'The Stranger'
cast[filter_stranger].count()
# ### How many roles has <NAME> played throughout his career?
filter_poitier = cast['name'] == '<NAME>'
cast[filter_poitier]['type'].count()
# ### How many roles has <NAME> played?
filter_dench = cast['name'] == '<NAME>'
cast[filter_dench]['type'].count()
# ### List the supporting roles (having n=2) played by <NAME> in the 1940s, in order by year.
filter_grant = (cast['name'] == '<NAME>') & (cast['year'] >= 1940) & (cast['year'] <= 1949) & (cast['n'] == 2)
cast[filter_grant].sort_values(by='year')
# ### List the leading roles that <NAME> played in the 1940s in order by year.
filter_grant = (cast['name'] == '<NAME>') & (cast['year'] >= 1940) & (cast['year'] <= 1949) & (cast['n'] == 1)
cast[filter_grant].sort_values(by='year')
# ### How many roles were available for actors in the 1950s?
filter_actors = (cast['type'] == 'actor') & (cast['year'] >= 1950) & (cast['year'] <= 1959)
cast[filter_actors].count()
# ### How many roles were available for actresses in the 1950s?
filter_actress = (cast['type'] == 'actress') & (cast['year'] >= 1950) & (cast['year'] <= 1959)
cast[filter_actress].count()
# ### How many leading roles (n=1) were available from the beginning of film history through 1980?
filter_lead = (cast['n'] == 1) & (cast['year'] <= 1980)
cast[filter_lead]['n'].count()
# ### How many non-leading roles were available from the beginning of film history through 1980?
filter_nonlead = (cast['n'] > 1) & (cast['year'] <= 1980)
cast[filter_nonlead]['n'].count()
# ### How many roles through 1980 were minor enough that they did not warrant a numeric "n" rank?
filter_norank = (cast['n'].isnull()) & (cast['year'] <= 1980)
cast[filter_norank]['character'].count()
|
Pandas/5.1 Exercises-1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# You are given a one-dimensional array that may contain both positive and negative integers; find the sum of the contiguous subarray of numbers which has the largest sum.
#
# For example, if the given array is `[-2, -5, 6, -2, -3, 1, 5, -6]`, then the maximum subarray sum is 7 (6,-2,-3,1,5).
#
# Using the divide-and-conquer approach, we can find the maximum subarray sum in O(n log n) time. Following is the divide-and-conquer algorithm.
#
# 1) Divide the given array in two halves
# 2) Return the maximum of following three
# a) Maximum subarray sum in left half (Make a recursive call)
# b) Maximum subarray sum in right half (Make a recursive call)
# c) Maximum subarray sum such that the subarray crosses the midpoint
# +
def crosSum(nums,l,m,r):
# Find the maximum sum of subarray which pass through the midpoint
max_right = -2**31
max_left = -2**31
sumval=0
for i in range(m+1,r+1):# From midpoint to the right
sumval+=nums[i]
max_right = max(max_right,sumval)
sumval=0
for j in range(m,l-1,-1): # From midpoint to the left, reverse the sequence!!!
sumval+=nums[j]
max_left = max(max_left,sumval)
return max_right+max_left
def maxSubarrSum(nums,l=0,r=-1):
if r==-1:
r = len(nums)-1
if l==r:
return nums[l]
else:
m = l+(r-l)//2
return max(maxSubarrSum(nums,l,m),maxSubarrSum(nums,m+1,r), crosSum(nums,l,m,r))
# -
maxSubarrSum([-2, -5, 6, -2, -3, 1, 5, -6])
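# As a sanity check on the result above, an O(n^2) brute force over all contiguous subarrays gives the same answer of 7 for this array:

```python
def brute_force_max_sub(nums):
    # check every contiguous subarray nums[i:j] and keep the largest sum
    return max(sum(nums[i:j])
               for i in range(len(nums))
               for j in range(i + 1, len(nums) + 1))

print(brute_force_max_sub([-2, -5, 6, -2, -3, 1, 5, -6]))  # 7
```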
|
max_subarray_conquer_divide.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Using some more complex (but still simple) operations, we can approximate the price of an ATMF straddle.
# $$ STRADDLE_{ATMF} \approx \frac{2}{\sqrt{2\pi}} S \times \sigma \sqrt{T} $$
vol = 0.2
time = 1.
2. * ((1 / (2*3.14) ** 0.5 ) * vol * time ** 0.5)
# This is a lot to type all the time, which is really annoying and error-prone. Let's define a function so that we can use it over and over.
def straddlePricer(vol, time):
return 2. * ((1. / (2*3.14) ** 0.5 ) * vol * time ** 0.5)
# Notice this doesn't immediately return anything to the output area. Rest assured the function is defined and we can begin using it.
print(straddlePricer(0.2, 1.0))
print(2. * ((1. / (2*3.14) ** 0.5 ) * vol * time ** 0.5))
# Input order doesn't matter as long as we pass the arguments by name (i.e. as keyword arguments)
straddlePricer(time=1.0, vol=0.2)
# This is nice, but what if we want to default to certain inputs? By setting default values below we implicitly make each of these arguments optional.
def straddlePricer(vol=0.2, time=1.0):
return 2. * ((1 / (2*3.14) ** 0.5 ) * vol * time ** 0.5)
# In other words, we don't need to pass these arguments to call the function. It will use 0.2 for vol and 1.0 for time.
straddlePricer()
straddlePricer(0.22)
# There's π in the denominator, but the value we used above is only an approximation. Is there a more precise definition? Yes: we can use a library called `numpy`. Let's import it first below.
import numpy
# You can access functions of numpy by entering `numpy.xxxxx`, where `xxxxx` is the function you would like to use. `numpy`'s implementation of `pi` is simply `numpy.pi`.
numpy.pi
# Typing `numpy` over and over again can get pretty tedious. Let's make it easier for ourselves by abbreviating the name. Python convention for `numpy` abbreviation is `np`.
import numpy as np
import pandas as pd
import datetime as dt
np.pi
# `numpy` also has a handy square root function (`np.sqrt`)
np.sqrt(4)
# Let's incorporate `np.pi` and `np.sqrt` into our simple straddle pricer to make things a little more precise and easier to read.
# +
def straddlePricer(vol=0.2, time=1.0):
return 2. * ((1/np.sqrt(2*np.pi)) * vol * np.sqrt(time))
straddlePricer()
# -
straddlePricer() - 2. * ((1 / (2*3.14) ** 0.5 ) * vol * time ** 0.5)
# In this case, the difference in precision and readability isn't huge, but that difference can be valuable at times. In addition to these, `numpy` can do a lot of other things. For instance, we can generate some random numbers.
np.random.rand()
# Is there a way to see what functions are available? Yes, just tab after `np.`
print(dir(np)[:10]) # in a notebook, typing np. and pressing Tab shows the full list; dir(np) is a scriptable alternative
# Continuing with the prior example of pricing our straddle, we can also price the straddle using the Monte Carlo method. We need to generate a normally distributed set of random numbers to simulate the asset's movement through time.
# +
def straddlePricerMC(vol=0.2, time=1.0, mcPaths=100):
dailyVol = vol / np.sqrt(252.)
resultSum = 0
for p in range(mcPaths):
resultSum += np.abs(np.prod((np.random.normal(0, dailyVol, int(round(time*252))) + 1)) - 1)
return resultSum / mcPaths
straddlePricerMC()
# -
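# As a quick check that the simulation is in the right ballpark, a self-contained version with many more paths should land close to the closed-form approximation from the start of this notebook, 2/sqrt(2*pi) * vol * sqrt(T) ≈ 0.16. The path count and seed here are arbitrary choices:

```python
import numpy as np

def straddle_closed_form(vol=0.2, time=1.0):
    # ATMF straddle approximation on a unit spot
    return 2.0 * vol * np.sqrt(time) / np.sqrt(2.0 * np.pi)

def straddle_mc(vol=0.2, time=1.0, paths=20000, seed=0):
    rng = np.random.default_rng(seed)           # seeded for reproducibility
    daily = vol / np.sqrt(252.0)
    rets = rng.normal(0.0, daily, size=(paths, int(round(time * 252))))
    terminal = np.prod(1.0 + rets, axis=1)      # terminal level of each path, starting at 1
    return float(np.mean(np.abs(terminal - 1.0)))

cf = straddle_closed_form()  # ~0.1596
mc = straddle_mc()           # should agree to within about 1%
```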
# There's a lot of new things going on in `straddlePricerMC`. Let's unpack it one line at a time.
# We know the variance scales linearly with time, so we can either
#
# 1. divide the variance by time and take the square root to get a vol, or
# 2. take the square root of variance and divide by the root of time
#
# Generally the latter is clearer and simpler to understand since we typically think in vol terms, but you are free to use whichever method you want.
np.sqrt(vol**2 / 252)
vol = 0.2
var = vol ** 2
sqrtVarOverTime = np.sqrt(var/252)
volOverSqrtTime = vol / np.sqrt(252)
valuesEqual = np.isclose(sqrtVarOverTime, volOverSqrtTime)
print(f'sqrtVarOverTime = {sqrtVarOverTime}\nvolOverSqrtTime={volOverSqrtTime}\nAre they close? {valuesEqual}')
# The next line isn't super exciting: we set the default value of our cumulative sum to be 0. So we're just defining `resultSum` and setting it equal to 0.
resultSum = 0
# Next we have a loop. There are different types of loops we can use. Here we use a for loop, which says "iterate over each element in `range(mcPaths)`". But wait...what's `range(mcPaths)`? `range` is a built-in Python function that returns a sequence of ints starting at 0 and going up to x-1.
range10 = range(10)
lst = list(range10)
print(lst)
print(len(lst))
# In our case, we don't really want to do anything with `p`, so it is good practice to set it to `_`. We just want to iterate through the loop `mcPaths` times. In the default case, the function runs through the loop 100 times.
# +
def straddlePricerMC(vol=0.2, time=1.0, mcPaths=100):
dailyVol = vol / np.sqrt(252.)
resultSum = 0
    for _ in range(mcPaths):
resultSum += np.abs(np.prod((np.random.normal(0, dailyVol, int(round(time*252))) + 1)) - 1)
return resultSum / mcPaths
straddlePricerMC()
# -
# To understand what the function does at each iteration of the loop, let's unpack it one step at a time, starting with the innermost function call and working outwards. Let's ask for help to see what the `np.random.normal` method actually does. Thankfully, there are two handy ways to see a function's documentation.
#
# 1. help
# 2. ?
# +
help(np.random.normal)
# # np.random.normal?
# -
# Ok, so we know from the help function that the `np.random.normal` method takes three optional inputs: mean, standard deviation, and size of the array to generate multiple random numbers. It defaults to a distribution with a mean of zero and a standard deviation of 1, returning only 1 random number.
np.random.normal()
# Below we're going to call this method with a mean of zero (no drift) and a standard deviation of our daily vol, so that we can generate multiple days of returns. Specifically, we ask to generate the number of days to maturity.
# +
time = 1
nDays = time * 252
dailyVol = vol / np.sqrt(252.)
print(nDays)
np.random.normal(0, dailyVol, nDays)
# -
# Now, given an asset return timeseries, how much is a straddle worth? We're interested in the terminal value of the asset, and because we assume the straddle is struck ATM, we can just take the absolute value of the asset's deviation from its initial value (in this case, 1).
# +
np.random.seed(42) # guarantee the same result from the two random series
returns = np.random.normal(0, dailyVol, time*252)
priceAtMaturity = np.prod(returns + 1)
changeAtMaturity = priceAtMaturity - 1
absChangeAtMaturity = np.abs(changeAtMaturity)
print(absChangeAtMaturity)
# all together in one line
np.random.seed(42)
print(np.abs(np.prod((np.random.normal(0, dailyVol, time * 252)+1))-1))
# -
# Let's take a closer look at what we did above. This time, we'll use two more libraries, pandas and perspective, to make our life a little easier.
# +
import pandas as pd
from perspective import psp
simulatedAsset = pd.DataFrame(np.random.normal(0, dailyVol, time*252) + 1, columns=['return'])
simulatedAsset['price'] = (1 * simulatedAsset['return']).cumprod()
psp(simulatedAsset)
# -
# The `for` loop just repeats the above `mcPaths` times, and we take the average across paths to find the expected value of the straddle.
mcPaths = 100
resultSum = 0.
for _ in range(mcPaths):
resultSum += np.abs(np.prod(np.random.normal(0., dailyVol, time*252)+1)-1)
print(resultSum/mcPaths)
# This price is pretty close to the price from our original pricer. More paths should help get us even closer.
straddlePricerMC(mcPaths=2000)
# 2000 paths is a lot, but it looks like we're still not converging to the original price. Adding more paths trades off against compute time. Luckily, Jupyter has made it really easy to see how fast our function is.
# %timeit straddlePricerMC(mcPaths=2000)
# That's pretty fast. We can do a lot more paths.
print(f"1 path: {straddlePricerMC(mcPaths=1)}")
print(f"2000 path: {straddlePricerMC(mcPaths=2000)}")
print(f"5000 path: {straddlePricerMC(mcPaths=5000)}")
print(f"10000 path: {straddlePricerMC(mcPaths=10000)}")
print(f"100000 path: {straddlePricerMC(mcPaths=100000)}")
print(f"Closed form approximation: {straddlePricer()}")
# Can we improve the above MC implementation? Of course! We can generate our random asset series in one go. Remember the `size` argument of the `np.random.normal` function?
# +
nDays = time * 252
size = (nDays, mcPaths)
simulatedAsset = pd.DataFrame(np.random.normal(0, dailyVol, size))
simulatedAsset = (1 + simulatedAsset).cumprod()
simulatedAsset.tail()
# -
# Cool! Let's visualize the paths by plotting them with matplotlib.
# +
# %matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
fig = plt.figure(figsize=(8,6))
ax = plt.axes()
_ = ax.plot(simulatedAsset)
# -
# So let's incorporate that into a `pandas` version of the MC pricer.
# +
def straddlePricerMCWithPD(vol=0.2, time=1, mcPaths=100000):
dailyVol = vol / (252 ** .5)
randomPaths = pd.DataFrame(np.random.normal(0, dailyVol, (time*252, mcPaths)))
price = ((1+randomPaths).prod() - 1).abs().sum() / mcPaths
return price
straddlePricerMCWithPD()
|
notebooks/2_straddle.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# +
f1 = 'WorkforceUtilizationReport-Apprentice.csv'
f2 = 'WorkforceUtilizationReport-Journeymen.csv'
df = pd.read_csv(f1)
temp_df = pd.read_csv(f2)
df = pd.concat([df, temp_df], join='inner')
# +
craft_levels = ['Journeymen', 'Apprentice']
levels = df.groupby(df.CRAFT_LEVEL)
summary = pd.DataFrame(columns=['LEVEL', 'CAUCASIAN', 'AFRICAN_AMERICAN', 'HISPANIC', 'ASIAN', 'NATIVE_AMERICAN',
'OTHER', 'FEMALE'])
for i in range(len(craft_levels)):
lvl = levels.get_group(craft_levels[i])
summary.loc[i] = [craft_levels[i], lvl['CAUCASIAN'].sum()/df['CAUCASIAN'].sum(),
lvl['AFRICAN_AMERICAN'].sum()/df['AFRICAN_AMERICAN'].sum(),
lvl['HISPANIC'].sum()/df['HISPANIC'].sum(), lvl['ASIAN'].sum()/df['ASIAN'].sum(),
lvl['NATIVE_AMERICAN'].sum()/df['NATIVE_AMERICAN'].sum(),
lvl['OTHER'].sum()/df['OTHER'].sum(), lvl['TOTAL_FEMALE'].sum()/df['TOTAL_FEMALE'].sum()]
# -
summary
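# The loop above works, but the same per-level shares can be computed in one shot with a single groupby. Here's a minimal sketch on a tiny made-up frame (the `demo` data below is illustrative; only the column names mirror the report):

```python
import pandas as pd

# Illustrative data only: column names mirror the report, values are made up
demo = pd.DataFrame({
    'CRAFT_LEVEL': ['Journeymen', 'Apprentice', 'Journeymen'],
    'CAUCASIAN': [10, 5, 5],
    'HISPANIC': [2, 2, 0],
})

# Sum hours per craft level, then divide by each column's grand total
shares = demo.groupby('CRAFT_LEVEL')[['CAUCASIAN', 'HISPANIC']].sum()
shares = shares / shares.sum()
print(shares)
```

# This avoids the manual `get_group`/`loc` bookkeeping and scales to any number of craft levels without changing the code.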
plt.bar(['CAUCASIAN', 'AFRICAN_AMERICAN', 'HISPANIC', 'ASIAN', 'NATIVE_AMERICAN',
'OTHER', 'FEMALE'], summary.loc[0][1:], color = 'blue')
plt.xticks(rotation=90)
plt.title('Percentage of Journeymen Per Minority Group')
plt.ylabel('Percentage')
plt.xlabel('Minority Groups')
plt.savefig('Journeymen_Per_MinorityGroup.png')
plt.bar(['CAUCASIAN', 'AFRICAN_AMERICAN', 'HISPANIC', 'ASIAN', 'NATIVE_AMERICAN',
'OTHER', 'FEMALE'], summary.loc[1][1:], color = 'blue')
plt.xticks(rotation=90)
plt.title('Percentage of Apprentices Per Minority Group')
plt.ylabel('Percentage')
plt.xlabel('Minority Groups')
plt.savefig('Apprentices_Per_MinorityGroup.png')
# + tags=[]
bar_width = 0.35
index = np.arange(7)
fig, ax = plt.subplots()
j = ax.bar(index, summary.loc[0][1:], bar_width,
label="Journeymen Hours per Minority Group")
a = ax.bar(index+bar_width, summary.loc[1][1:],
bar_width, label="Apprentice Hours Per Minority Group")
ax.set_xlabel('Group')
ax.set_ylabel('Percentage of Total Hours Worked')
ax.set_title('Journeymen vs. Apprentices Working Hours per Minority Group')
ax.set_xticks(index + bar_width / 2)
ax.set_xticklabels(['CAUCASIAN', 'AFRICAN AMERICAN', 'HISPANIC', 'ASIAN', 'NATIVE AMERICAN',
'OTHER', 'FEMALE'])
ax.legend(bbox_to_anchor=(1.7, 1), loc='upper right')
plt.xticks(rotation=90)
plt.show()
# -
|
WGBH-DCAMM/DCAMM/Hour_Analysis_By_Craft_Level.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# +
# %matplotlib inline
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
plt.style.use(['seaborn-ticks', 'seaborn-talk'])
import seaborn as sns
import cartopy.crs as ccrs
# -
# # Setup
# +
from dask.distributed import Client, progress
from dask_kubernetes import KubeCluster
cluster = KubeCluster(n_workers=4)
cluster.adapt(minimum=1, maximum=40)
cluster
# -
client = Client(cluster)
client
# Load data
# +
import intake
import util
if util.is_ncar_host():
print("On NCAR")
col = intake.open_esm_datastore("../catalogs/glade-cmip6.json")
else:
print("On GCP")
col = intake.open_esm_datastore("../catalogs/pangeo-cmip6.json")
# +
from collections import namedtuple
model_id = namedtuple('model_id', ['institution_id', 'source_id'])
models = [
model_id('MIROC', 'MIROC6'),
model_id('NCAR', 'CESM2-WACCM'),
model_id('CCCma', 'CanESM5'),
model_id('CAMS', 'CAMS-CSM1-0'),
model_id('MRI', 'MRI-ESM2-0'),
model_id('MOHC', 'UKESM1-0-LL'),
model_id('MIROC', 'MIROC-ES2L'),
model_id('NCAR', 'CESM2'),
model_id('BCC', 'BCC-CSM2-MR')
]
# +
def get_data(variable_id, model):
institution_id = model.institution_id
source_id = model.source_id
cat = col.search(
experiment_id=['piControl', 'historical', 'ssp585'],
variable_id=variable_id,
source_id=[source_id, ]
)
dset_dict = cat.to_dataset_dict(
zarr_kwargs={'consolidated': True,
'decode_times': True},
cdf_kwargs={},
)
pi = f'CMIP.{institution_id}.{source_id}.piControl.Amon.gn'
hist = f'CMIP.{institution_id}.{source_id}.historical.Amon.gn'
fut = f'ScenarioMIP.{institution_id}.{source_id}.ssp585.Amon.gn'
pi_ds = dset_dict[pi]
hist_ds = dset_dict[hist]
fut_ds = dset_dict[fut]
return pi_ds, hist_ds, fut_ds
def fit_hs(pi_ds, hist_ds, fut_ds, model):
model_id = f'{model.institution_id}.{model.source_id}'
hs = HawkinsSutton2012(
pi_ds, hist_ds, fut_ds, variable_id, model_id
)
# hs.fit()
return hs
# -
toe_dict = {}
for m in models:
print(m)
pi, hist, fut = get_data('tas', m)
hs = fit_hs(pi, hist, fut, m)
hs.fit()
toe = hs.time_of_emergence(2)
toe_dict[f'{m.institution_id}.{m.source_id}'] = toe
print("done")
len(toe_dict)
# +
from tools import shift_lons, shift_roll
nrows, ncols = 3, 3
size, aspect = 4., 1.8
width, height = size*aspect*ncols, size*nrows
figsize = (width, height)
fig, axs = plt.subplots(nrows, ncols, figsize=figsize,
subplot_kw=dict(
projection=ccrs.PlateCarree(),
aspect='auto'
))
cmap_kws = dict(
vmin=2005, vmax=2065, extend='max', levels=13,
cmap="RdYlBu"
)
for ax, m in zip(axs.flatten(), models):
model_name = f'{m.institution_id}.{m.source_id}'
print(model_name)
ax.set_title(f'{m.institution_id} - {m.source_id}', loc='left')
_t = toe_dict[model_name]['TOE'].copy()
# _t = shift_lons(_t)
# nlon = len(_t.lon)
# _t = _t.roll(lon=)
cf = _t.plot.pcolormesh('lon', 'lat',
ax=ax, transform=ccrs.PlateCarree(),
add_colorbar=False,
**cmap_kws)
ax.coastlines()
ax.set_aspect('auto', 'box')
# break
cb = plt.colorbar(cf, ax=axs, label='Year of Emergence')
# plt.tight_layout()
plt.savefig("toe.composite.png", dpi=200, transparent=True, bbox_inches='tight')
# -
# ---
|
notebooks/hawkins_automate_example.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Compare the positions, sizes, position angles, and aspect ratios of the 100 largest galaxies in the [WXSC](https://vislab.idia.ac.za/research) against the quantities in Hyperleda.
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from astropy.table import Table
from astrometry.libkd.spherematch import match_radec
import LSLGA.io
import LSLGA.misc
sns, _ = LSLGA.misc.plot_style()
# %matplotlib inline
tiles = LSLGA.io.read_desi_tiles(verbose=True)
leda = LSLGA.io.read_hyperleda(verbose=True)
leda['IN_DESI'] = LSLGA.misc.is_point_in_desi(tiles, leda['RA'], leda['DEC']).astype(bool)
# Read the WXSC catalog.
wxscfile = os.path.join(os.getenv('CATALOGS_DIR'), 'wxsc-largest-galaxies', 'Table_1.tbl')
names = ['wxsc_id', 'galaxy', 'ra', 'dec', 'ba', 'pa',
'W1_radius', 'W1_mag', 'W1_mag_err',
'W2_radius', 'W2_mag', 'W2_mag_err',
'W3_radius', 'W3_mag', 'W3_mag_err',
'W4_radius', 'W4_mag', 'W4_mag_err',
'W1_W2', 'W1_W2_err', 'W2_W3', 'W2_W3_err']
dtype = {'wxsc_id': str, 'galaxy': str, 'ra': np.float64, 'dec': np.float64,
'ba': np.float32, 'pa': np.float32,
'W1_radius': str, 'W1_mag': str, 'W1_mag_err': str,
'W2_radius': str, 'W2_mag': str, 'W2_mag_err': str,
'W3_radius': str, 'W3_mag': str, 'W3_mag_err': str,
'W4_radius': str, 'W4_mag': str, 'W4_mag_err': str,
'W1_W2': str, 'W1_W2_err': str, 'W2_W3': str, 'W2_W3_err': str}
data = pd.read_csv(wxscfile, delimiter='|', comment='#', na_values='null', skiprows=0,
names=names, dtype=dtype, skip_blank_lines=True,
warn_bad_lines=True, error_bad_lines=False)
data
m1, m2, d12 = match_radec(data['ra'].values, data['dec'].values, leda['RA'], leda['DEC'],
10.0/3600.0, nearest=False)
srt = np.argsort(m1)
mdata = data.iloc[m1[srt]]
mleda = leda[m2[srt]]
mdata
# Compare bulk properties.
# +
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
for thisax, label, xx, yy in zip(
ax, (r'$\log_{10}$(Diameter) (arcmin)', 'PA (deg)', 'b/a'),
(np.log10(mdata['W1_radius'].values.astype('f4')),
mdata['pa'].values.astype('f4'),
mdata['ba'].values.astype('f4')),
(np.log10(mleda['D25']), mleda['PA'], mleda['BA'])):
thisax.scatter(xx[mleda['IN_DESI']], yy[mleda['IN_DESI']],
label='In Footprint', alpha=0.7, edgecolor='k')
thisax.scatter(xx[~mleda['IN_DESI']], yy[~mleda['IN_DESI']], marker='s',
label='Outside Footprint', alpha=0.7)
thisax.set_xlabel('{} [WXSC-100]'.format(label))
thisax.set_ylabel('{} [LSLGA]'.format(label))
mx = np.nanmax((xx, yy))
thisax.plot([-0.1, mx], [-0.1, mx], color='k', lw=2, alpha=0.8)
ax[0].legend(fontsize=12)
plt.subplots_adjust(wspace=0.3)
# -
# Find the largest discrepancies.
# +
d_wxsc = np.log10(mdata['W1_radius'].values.astype('f4'))
d_lslga = np.log10(mleda['D25'])
ddist = d_wxsc/d_lslga-1
cut = ddist > 0.05
srt = np.argsort(ddist[cut])[::-1]
diff = Table()
diff['LSLGA_GALAXY'] = mleda['GALAXY'][srt]
diff['WXSC_GALAXY'] = mdata.iloc[srt]['galaxy']
diff['RA'] = mleda['RA'][srt]
diff['DEC'] = mleda['DEC'][srt]
diff['DIAM_WXSC'] = d_wxsc[srt]
diff['DIAM_LSLGA'] = d_lslga[srt]
diff['PA_WXSC'] = mdata['pa'].values[srt]
diff['PA_LSLGA'] = mleda['PA'][srt]
diff['BA_WXSC'] = mdata['ba'].values[srt]
diff['BA_LSLGA'] = mleda['BA'][srt]
diff['IN_DESI'] = mleda['IN_DESI'][srt]
# -
print('In DESI footprint')
diff[diff['IN_DESI']]
print('Outside DESI footprint')
diff[~diff['IN_DESI']]
# Which galaxies don't match.
missing = np.arange(len(data))
missing = np.delete(missing, m1)
print(missing)
data.iloc[missing]
|
doc/compare-wxsc-largest-galaxies.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3-azureml
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
# # Run Experiments
#
# You can use the Azure Machine Learning SDK to run code experiments that log metrics and generate outputs. This is at the core of most machine learning operations in Azure Machine Learning.
#
# ## Connect to your workspace
#
# All experiments and associated resources are managed within your Azure Machine Learning workspace. In most cases, you should store the workspace configuration in a JSON configuration file. This makes it easier to reconnect without needing to remember details like your Azure subscription ID. You can download the JSON configuration file from the blade for your workspace in the Azure portal, but if you're using a Compute Instance within your workspace, the configuration file has already been downloaded to the root folder.
#
# The code below uses the configuration file to connect to your workspace.
#
# > **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
# + gather={"logged": 1612254514078}
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
# -
# ## Run an experiment
#
# One of the most fundamental tasks that data scientists need to perform is to create and run experiments that process and analyze data. In this exercise, you'll learn how to use an Azure ML *experiment* to run Python code and record values extracted from data. In this case, you'll use a simple dataset that contains details of patients that have been tested for diabetes. You'll run an experiment to explore the data, extracting statistics, visualizations, and data samples. Most of the code you'll use is fairly generic Python, such as you might run in any data exploration process. However, with the addition of a few lines, the code uses an Azure ML *experiment* to log details of the run.
# + gather={"logged": 1612254727143}
from azureml.core import Experiment
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-diabetes")
# Start logging data from the experiment, obtaining a reference to the experiment run
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the data from a local file
data = pd.read_csv('data/diabetes.csv')
# Count the rows and log the result
row_count = (len(data))
run.log('observations', row_count)
print('Analyzing {} rows of data'.format(row_count))
# Plot and log the count of diabetic vs non-diabetic patients
diabetic_counts = data['Diabetic'].value_counts()
fig = plt.figure(figsize=(6,6))
ax = fig.gca()
diabetic_counts.plot.bar(ax = ax)
ax.set_title('Patients with Diabetes')
ax.set_xlabel('Diagnosis')
ax.set_ylabel('Patients')
plt.show()
run.log_image(name='label distribution', plot=fig)
# log distinct pregnancy counts
pregnancies = data.Pregnancies.unique()
run.log_list('pregnancy categories', pregnancies)
# Log summary statistics for numeric columns
med_columns = ['PlasmaGlucose', 'DiastolicBloodPressure', 'TricepsThickness', 'SerumInsulin', 'BMI']
summary_stats = data[med_columns].describe().to_dict()
for col in summary_stats:
keys = list(summary_stats[col].keys())
values = list(summary_stats[col].values())
for index in range(len(keys)):
run.log_row(col, stat=keys[index], value = values[index])
# Save a sample of the data and upload it to the experiment output
data.sample(100).to_csv('sample.csv', index=False, header=True)
run.upload_file(name='outputs/sample.csv', path_or_stream='./sample.csv')
# Complete the run
run.complete()
# -
# ## View run details
#
# In Jupyter Notebooks, you can use the **RunDetails** widget to see a visualization of the run details.
# + gather={"logged": 1612254816843}
from azureml.widgets import RunDetails
RunDetails(run).show()
# -
# ### View more details in Azure Machine Learning studio
#
# Note that the **RunDetails** widget includes a link to **view run details** in Azure Machine Learning studio. Click this to open a new browser tab with the run details (you can also just open [Azure Machine Learning studio](https://ml.azure.com) and find the run on the **Experiments** page). When viewing the run in Azure Machine Learning studio, note the following:
#
# - The **Details** tab contains the general properties of the experiment run.
# - The **Metrics** tab enables you to select logged metrics and view them as tables or charts.
# - The **Images** tab enables you to select and view any images or plots that were logged in the experiment (in this case, the *Label Distribution* plot)
# - The **Child Runs** tab lists any child runs (in this experiment there are none).
# - The **Outputs + Logs** tab shows the output or log files generated by the experiment.
# - The **Snapshot** tab contains all files in the folder where the experiment code was run (in this case, everything in the same folder as this notebook).
# - The **Explanations** tab is used to show model explanations generated by the experiment (in this case, there are none).
# - The **Fairness** tab is used to visualize predictive performance disparities that help you evaluate the fairness of machine learning models (in this case, there are none).
# ### Retrieve experiment details using the SDK
#
# The **run** variable in the code you ran previously is an instance of a **Run** object, which is a reference to an individual run of an experiment in Azure Machine Learning. You can use this reference to get information about the run and its outputs:
# + gather={"logged": 1612254931555}
import json
# Get logged metrics
print("Metrics:")
metrics = run.get_metrics()
for metric_name in metrics:
print(metric_name, ":", metrics[metric_name])
# Get output files
print("\nFiles:")
files = run.get_file_names()
for file in files:
print(file)
# -
# You can download the files produced by the experiment, either individually by using the **download_file** method, or by using the **download_files** method to retrieve multiple files. The following code downloads all of the files in the run's **output** folder:
# + gather={"logged": 1612255006843}
import os
download_folder = 'downloaded-files'
# Download files in the "outputs" folder
run.download_files(prefix='outputs', output_directory=download_folder)
# Verify the files have been downloaded
for root, directories, filenames in os.walk(download_folder):
for filename in filenames:
print (os.path.join(root,filename))
# -
# If you need to troubleshoot the experiment run, you can use the **get_details** method to retrieve basic details about the run, or you can use the **get_details_with_logs** method to retrieve the run details as well as the contents of log files generated during the run:
# + gather={"logged": 1612255038276}
run.get_details_with_logs()
# -
# Note that the details include information about the compute target on which the experiment was run, the date and time when it started and ended. Additionally, because the notebook containing the experiment code (this one) is in a cloned Git repository, details about the repo, branch, and status are recorded in the run history.
#
# In this case, note that the **logFiles** entry in the details indicates that no log files were generated. That's typical for an inline experiment like the one you ran, but things get more interesting when you run a script as an experiment; which is what we'll look at next.
# ## Run an experiment script
#
# In the previous example, you ran an experiment inline in this notebook. A more flexible solution is to create a separate script for the experiment, and store it in a folder along with any other files it needs, and then use Azure ML to run the experiment based on the script in the folder.
#
# First, let's create a folder for the experiment files, and copy the data into it:
# + gather={"logged": 1612255164830}
import os, shutil
# Create a folder for the experiment files
folder_name = 'diabetes-experiment-files'
experiment_folder = './' + folder_name
os.makedirs(folder_name, exist_ok=True)
# Copy the data file into the experiment folder
shutil.copy('data/diabetes.csv', os.path.join(folder_name, "diabetes.csv"))
# -
# Now we'll create a Python script containing the code for our experiment, and save it in the experiment folder.
#
# > **Note**: running the following cell just *creates* the script file - it doesn't run it!
# +
# %%writefile $folder_name/diabetes_experiment.py
from azureml.core import Run
import pandas as pd
import os
# Get the experiment run context
run = Run.get_context()
# load the diabetes dataset
data = pd.read_csv('diabetes.csv')
# Count the rows and log the result
row_count = (len(data))
run.log('observations', row_count)
print('Analyzing {} rows of data'.format(row_count))
# Count and log the label counts
diabetic_counts = data['Diabetic'].value_counts()
print(diabetic_counts)
for k, v in diabetic_counts.items():
run.log('Label:' + str(k), v)
# Save a sample of the data in the outputs folder (which gets uploaded automatically)
os.makedirs('outputs', exist_ok=True)
data.sample(100).to_csv("outputs/sample.csv", index=False, header=True)
# Complete the run
run.complete()
# -
# This code is a simplified version of the inline code used before. However, note the following:
# - It uses the `Run.get_context()` method to retrieve the experiment run context when the script is run.
# - It loads the diabetes data from the folder where the script is located.
# - It creates a folder named **outputs** and writes the sample file to it - this folder is automatically uploaded to the experiment run.
# Now you're almost ready to run the experiment. To run the script, you must create a **ScriptRunConfig** that identifies the Python script file to be run in the experiment, and then run an experiment based on it.
#
# > **Note**: The ScriptRunConfig also determines the compute target and Python environment. If you don't specify these, a default environment is created automatically on the local compute where the code is being run (in this case, where this notebook is being run).
#
# The following cell configures and submits the script-based experiment.
# + gather={"logged": 1612255590646}
import os
import sys
from azureml.core import Experiment, ScriptRunConfig
from azureml.widgets import RunDetails
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_experiment.py')
# submit the experiment
experiment = Experiment(workspace=ws, name='mslearn-diabetes')
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
# -
# As before, you can use the widget or the link to the experiment in [Azure Machine Learning studio](https://ml.azure.com) to view the outputs generated by the experiment, and you can also write code to retrieve the metrics and files it generated:
# + gather={"logged": 1612255641508}
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
# -
# Note that this time, the run generated some log files. You can view these in the widget, or you can use the **get_details_with_logs** method like we did before, only this time the output will include the log data.
# + gather={"logged": 1612255650085}
run.get_details_with_logs()
# -
# Although you can view the log details in the output above, it's usually easier to download the log files and view them in a text editor.
# + gather={"logged": 1612255663799}
import os
log_folder = 'downloaded-logs'
# Download all files
run.get_all_logs(destination=log_folder)
# Verify the files have been downloaded
for root, directories, filenames in os.walk(log_folder):
for filename in filenames:
print (os.path.join(root,filename))
# -
# ## View experiment run history
#
# Now that you've run the same experiment multiple times, you can view the history in [Azure Machine Learning studio](https://ml.azure.com) and explore each logged run. Or you can retrieve an experiment by name from the workspace and iterate through its runs using the SDK:
# +
from azureml.core import Experiment, Run
diabetes_experiment = ws.experiments['mslearn-diabetes']
for logged_run in diabetes_experiment.get_runs():
print('Run ID:', logged_run.id)
metrics = logged_run.get_metrics()
for key in metrics.keys():
print('-', key, metrics.get(key))
# -
# ## Use MLflow
#
# MLflow is an open source platform for managing machine learning processes. It's commonly (but not exclusively) used in Databricks environments to coordinate experiments and track metrics. In Azure Machine Learning experiments, you can use MLflow to track metrics as an alternative to the native log functionality.
#
# To take advantage of this capability, you'll need the **mlflow** and **azureml-mlflow** packages, so let's ensure they are installed.
# !pip show mlflow azureml-mlflow
# ### Use MLflow with an inline experiment
#
# To use MLflow to track metrics for an inline experiment, you must set the MLflow *tracking URI* to the workspace where the experiment is being run. This enables you to use **mlflow** tracking methods to log data to the experiment run.
# +
from azureml.core import Experiment
import pandas as pd
import mlflow
# Set the MLflow tracking URI to the workspace
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name='mslearn-diabetes-mlflow')
mlflow.set_experiment(experiment.name)
# start the MLflow experiment
with mlflow.start_run():
print("Starting experiment:", experiment.name)
# Load data
data = pd.read_csv('data/diabetes.csv')
# Count the rows and log the result
row_count = (len(data))
mlflow.log_metric('observations', row_count)
print("Run complete")
# -
# Now let's look at the metrics logged during the run.
# +
# Get the latest run of the experiment
run = list(experiment.get_runs())[0]
# Get logged metrics
print("\nMetrics:")
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
# Get a link to the experiment in Azure ML studio
experiment_url = experiment.get_portal_url()
print('See details at', experiment_url)
# -
# After running the code above, you can use the link that is displayed to view the experiment in Azure Machine Learning studio. Then select the latest run of the experiment and view its **Metrics** tab to see the logged metric.
#
# ### Use MLflow in an experiment script
#
# You can also use MLflow to track metrics in an experiment script.
#
# Run the following two cells to create a folder and a script for an experiment that uses MLflow.
# +
import os, shutil
# Create a folder for the experiment files
folder_name = 'mlflow-experiment-files'
experiment_folder = './' + folder_name
os.makedirs(folder_name, exist_ok=True)
# Copy the data file into the experiment folder
shutil.copy('data/diabetes.csv', os.path.join(folder_name, "diabetes.csv"))
# +
# %%writefile $folder_name/mlflow_diabetes.py
from azureml.core import Run
import pandas as pd
import mlflow
# start the MLflow experiment
with mlflow.start_run():
# Load data
data = pd.read_csv('diabetes.csv')
# Count the rows and log the result
row_count = (len(data))
print('observations:', row_count)
mlflow.log_metric('observations', row_count)
# -
# When you use MLflow tracking in an Azure ML experiment script, the MLflow tracking URI is set automatically when you start the experiment run. However, the environment in which the script is to be run must include the required **mlflow** packages.
# +
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Create a Python environment for the experiment
mlflow_env = Environment("mlflow-env")
# Ensure the required packages are installed
packages = CondaDependencies.create(conda_packages=['pandas','pip'],
pip_packages=['mlflow','azureml-mlflow'])
mlflow_env.python.conda_dependencies = packages
# Create a script config
script_mlflow = ScriptRunConfig(source_directory=experiment_folder,
script='mlflow_diabetes.py',
environment=mlflow_env)
# submit the experiment
experiment = Experiment(workspace=ws, name='mslearn-diabetes-mlflow')
run = experiment.submit(config=script_mlflow)
RunDetails(run).show()
run.wait_for_completion()
# -
# As usual, you can get the logged metrics from the experiment run when it's finished.
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
# > **More Information**: To find out more about running experiments, see [this topic](https://docs.microsoft.com/azure/machine-learning/how-to-manage-runs) in the Azure ML documentation. For details of how to log metrics in a run, see [this topic](https://docs.microsoft.com/azure/machine-learning/how-to-track-experiments). For more information about integrating Azure ML experiments with MLflow, see [this topic](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow).
|
04 - Run Experiments.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_amazonei_mxnet_p36
# language: python
# name: conda_amazonei_mxnet_p36
# ---
# # Mailout dataset: EDA & pre-processing steps
# # 00. Packages
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
# %matplotlib inline
### my utils
from eda_utils import *
# -
# # 01. Load data
# %%time
train = pd.read_csv('../arvato_data/Udacity_MAILOUT_052018_TRAIN.csv', sep=';', low_memory = False, index_col = 'LNR')
train
# %%time
test = pd.read_csv('../arvato_data/Udacity_MAILOUT_052018_TEST.csv', sep=';', low_memory = False, index_col = 'LNR')
test
# # 02. Features
#
# ## 02.01. Intro
# %%time
empty_cells_train = pd.isnull(train).sum()*100.0/train.shape[0]
empty_cells_train = empty_cells_train.sort_values(ascending=False)
empty_cells_train[:50].plot(figsize=(20,3),kind='bar',title='TRAIN dataset') # bar plot of first 50 most missing features
plt.hist(empty_cells_train,bins=15)
plt.plot()
# %%time
empty_cells_test = pd.isnull(test).sum()*100.0/test.shape[0]
empty_cells_test = empty_cells_test.sort_values(ascending=False)
empty_cells_test[:50].plot(figsize=(20,3),kind='bar',title='TEST dataset') # bar plot of first 50 most missing features
plt.hist(empty_cells_test,bins=15)
plt.plot()
missingness_threshold_percentage=30
print(np.sum(empty_cells_train>=missingness_threshold_percentage)
,np.sum(empty_cells_test>=missingness_threshold_percentage))
# ## 02.02. Undefined features
metadata_attributes = r"../arvato_data/DIAS Attributes - Values 2017.xlsx"
train,missing_metadata_annotations_train,not_present_features_train = \
unknown_to_nan(train, metadata_attributes , 'unknown|no transactions known')
print("{} TRAIN dataset features lack metadata information, while {} metadata features are not used by the provided dataset.".\
format( len(missing_metadata_annotations_train), len(not_present_features_train) ) )
### metadata_attributes = r"../arvato_data/DIAS Attributes - Values 2017.xlsx"
test,missing_metadata_annotations_test,not_present_features_test = \
unknown_to_nan(test, metadata_attributes , 'unknown|no transactions known')
print("{} TEST dataset features lack metadata information, while {} metadata features are not used by the provided dataset.".\
format( len(missing_metadata_annotations_test), len(not_present_features_test) ) )
# In "TRAIN" we have one additional column that lacks metadata information, but we know what it is: the `RESPONSE` column, which states whether or not a person became a customer of the company following the marketing campaign.
# ## 02.03. Missing features: correction
# +
metadata_attributes = r"../arvato_data/DIAS Attributes - Values 2017.xlsx"
train,missing_metadata_annotations_train,not_present_features_train = \
unknown_to_nan(train, metadata_attributes , 'unknown|no transactions known',
rename_columns='azdias_corrected_features.tsv')
# +
test,missing_metadata_annotations_test,not_present_features_test = \
unknown_to_nan(test, metadata_attributes , 'unknown|no transactions known',
rename_columns='azdias_corrected_features.tsv')
# +
print("{} TRAIN dataset features lack metadata information, while {} metadata features are not used by the provided dataset.".\
format( len(missing_metadata_annotations_train), len(not_present_features_train) ) )
# -
print("{} TEST dataset features lack metadata information, while {} metadata features are not used by the provided dataset.".\
format( len(missing_metadata_annotations_test), len(not_present_features_test) ) )
# ## 02.04. Feature missingness filtering
# +
# %%time
empty_cells_train = pd.isnull(train).sum()*100.0/train.shape[0]
empty_cells_train = empty_cells_train.sort_values(ascending=False)
empty_cells_test = pd.isnull(test).sum()*100.0/test.shape[0]
empty_cells_test = empty_cells_test.sort_values(ascending=False)
# +
# %%time
features_to_drop_train =list( (empty_cells_train[empty_cells_train>missingness_threshold_percentage]).index.values)
train.drop(labels=features_to_drop_train,inplace=True,axis='columns')
print("Dropped {} TRAIN columns".format(len(features_to_drop_train)))
# +
# %%time
empty_cells_train = pd.isnull(train).sum()*100.0/train.shape[0]
empty_cells_train = empty_cells_train.sort_values(ascending=False)
empty_cells_train[:50].plot(figsize=(20,3),kind='bar',title='TRAIN dataset') # bar plot of first 50 most missing features
# +
# %%time
features_to_drop_test =list( (empty_cells_test[empty_cells_test>missingness_threshold_percentage]).index.values)
test.drop(labels=features_to_drop_test,inplace=True,axis='columns')
print("Dropped {} TEST columns".format(len(features_to_drop_test)))
# -
# %%time
empty_cells_test = pd.isnull(test).sum()*100.0/test.shape[0]
empty_cells_test = empty_cells_test.sort_values(ascending=False)
empty_cells_test[:50].plot(figsize=(20,3),kind='bar',title='TEST dataset') # bar plot of first 50 most missing features
# +
assert len(features_to_drop_test) == len(features_to_drop_train)
assert len(set(features_to_drop_test).intersection((set(features_to_drop_train)))) == len(features_to_drop_test)
# -
# We see that for both train and test we **removed 31 columns**, and these columns are THE SAME for both datasets.
# ---
# # 03. Missing rows
# %%time
empty_rows_train = pd.isnull(train).sum(axis=1)*100.0/train.shape[1]
empty_rows_train = empty_rows_train.sort_values(ascending=False)
empty_rows_train[:50].plot(figsize=(20,3),kind='bar') # bar plot of first 50 most missing rows
# %%time
empty_rows_test = pd.isnull(test).sum(axis=1)*100.0/test.shape[1]
empty_rows_test = empty_rows_test.sort_values(ascending=False)
empty_rows_test[:50].plot(figsize=(20,3),kind='bar') # bar plot of first 50 most missing rows
row_missingness_threshold_percentage = 90
# What percentage of rows/entries/subjects would be removed?
sum(empty_rows_train>row_missingness_threshold_percentage)*100/train.shape[0]
# What percentage of rows/entries/subjects would be removed?
sum(empty_rows_test>row_missingness_threshold_percentage)*100/test.shape[0]
# We see that approximately 18% of subjects in the dataset have missingness above the defined threshold of `50%`. I might **re-visit** this threshold later, depending on the overall score of the subsequent classifier.
#
# **Re-visit!**: Because the training dataset loses a relatively big chunk of information due to missingness, and the subsequent model suffers from a loss of positive cases (which are imbalanced), I decided to increase the threshold to 90%, which in fact **doesn't filter out any rows**, as all samples have at least 10% of their columns present. The subsequent Bayesian imputation might address this problem.
#
# ## We cannot remove `test` cases, final predictions have to be made for ALL 42833 TEST CASES!!!
# +
fig,axs = plt.subplots(1,2,figsize=(15,5))
axs[0].hist(empty_rows_train,bins=15,label="TRAIN",log=True)
axs[0].set_ylim(1e0,5e4)
axs[0].legend()
axs[0].set_xlabel("% missingness of features per person")
axs[0].set_ylabel("Counts of samples")
axs[1].hist(empty_rows_test,bins=15,label="TEST",log=True)
axs[1].set_ylim(1e0,5e4)
axs[1].legend()
axs[1].set_xlabel("% missingness of features per person")
axs[1].set_ylabel("Counts of samples")
plt.plot()
# +
# %%time
samples_to_drop_train =list( (empty_rows_train[empty_rows_train>row_missingness_threshold_percentage]).index.values)
try:
train.drop(labels=samples_to_drop_train,inplace=True,axis='index')
except KeyError:
print("Already deleted")
print("{} TRAIN samples dropped".format(len(samples_to_drop_train)))
# -
assert test.shape[0]==42833
# # 04. Data types and & feature encoding
#
# I am going to apply the SAME logic as in the previous notebooks 01 and 02, except that here the TRAIN dataset has an additional column, `RESPONSE`, which is assumed to be a binary variable and is already encoded as 0/1 (see below)
# +
metadata_attributes = r"../arvato_data/DIAS Attributes - Values 2017.xlsx"
train,missing_metadata_annotations_train,not_present_features_train = \
unknown_to_nan(train, metadata_attributes , 'unknown|no transactions known',
rename_columns='azdias_corrected_features.tsv')
test,missing_metadata_annotations_test,not_present_features_test = \
unknown_to_nan(test, metadata_attributes , 'unknown|no transactions known',
rename_columns='azdias_corrected_features.tsv')
# +
print("{} TRAIN dataset features lack metadata information, while {} metadata features are not used by the provided dataset.".\
format( len(missing_metadata_annotations_train), len(not_present_features_train) ) )
print("{} TEST dataset features lack metadata information, while {} metadata features are not used by the provided dataset.".\
format( len(missing_metadata_annotations_test), len(not_present_features_test) ) )
# -
#
# ## a. concatenating train/test
# Our data still has lots of missing entries. Analogously to the notebook in `Unsupervised/Unsupervised_Customer_Segmentation.ipynb`, I am going to use an IterativeImputer to fill them in.
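# As a quick illustration of what `IterativeImputer` does (a sketch on toy data, not the project dataset): each feature with missing values is modelled as a function of the other features, and the model's predictions fill the gaps.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy matrix where the second column is roughly twice the first
X = np.array([[1.0, 2.0],
              [2.0, np.nan],
              [3.0, 6.0],
              [np.nan, 8.0]])

X_imputed = IterativeImputer(random_state=0).fit_transform(X)
print(X_imputed)  # no NaNs remain; missing cells are regression estimates
```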
# +
train['dataset']='train'
test['dataset']='test'
joint_dataset = pd.concat([train,test])
metadata_attributes = r"../arvato_data/DIAS Attributes - Values 2017.xlsx"
joint_dataset,missing_metadata_annotations_joint,not_present_features_joint = \
unknown_to_nan(joint_dataset, metadata_attributes , 'unknown|no transactions known',
rename_columns='azdias_corrected_features.tsv')
# +
print("{} JOINT dataset features lack metadata information, while {} metadata features are not used by the provided dataset.".\
format( len(missing_metadata_annotations_joint), len(not_present_features_joint) ) )
# +
missing_metadata_feature_info = {}
missing_metadata_feature_info['Attribute'] = []
missing_metadata_feature_info['unique_counts'] = []
missing_metadata_feature_info['unique_vals'] = []
for f in missing_metadata_annotations_joint:
unique_vals = np.unique(joint_dataset[f].dropna() )
missing_metadata_feature_info['Attribute'].append(f)
missing_metadata_feature_info['unique_counts'].append(len(unique_vals))
missing_metadata_feature_info['unique_vals'].append(unique_vals)
missing_metadata_feature_info = pd.DataFrame.from_dict(missing_metadata_feature_info)
missing_metadata_feature_info.sort_values('unique_counts',ascending=False, inplace=True)
missing_metadata_feature_info_joint = missing_metadata_feature_info
missing_metadata_feature_info_joint
# -
# As with the `AZDIAS` and `Customers` datasets, here we need to remove the `EINGEFUEGT_AM` column, encode `ANZ_STATISTISCHE_HAUSHALTE` as numerical, and encode the remaining features as categorical.
# +
missing_metadata_feature_info_joint['Type']= 'categorical' # categorical as default
##### dropping EINGEFUEGT_AM_idx
### from `missing_metadata_feature_info`
EINGEFUEGT_AM_idx = missing_metadata_feature_info_joint.loc[missing_metadata_feature_info_joint['Attribute']=='EINGEFUEGT_AM'].index.values
missing_metadata_feature_info_joint.drop(axis='index', inplace=True, index = EINGEFUEGT_AM_idx )
# Manually set specific features to numerical upon inspection
missing_metadata_feature_info_joint.loc[missing_metadata_feature_info_joint.Attribute=='ANZ_STATISTISCHE_HAUSHALTE', 'Type'] = 'numerical'
missing_metadata_feature_info_joint.loc[missing_metadata_feature_info_joint.Attribute=='VERDICHTUNGSRAUM', 'Type'] = 'numerical'
missing_metadata_feature_info_joint.loc[missing_metadata_feature_info_joint.Attribute=='EINGEZOGENAM_HH_JAHR', 'Type'] = 'numerical'
### from joint DATASET remove following columns: EINGEFUEGT_AM
try:
joint_dataset.drop(columns='EINGEFUEGT_AM',inplace=True)
except KeyError:
print("EINGEFUEGT_AM already deleted from train")
missing_metadata_feature_info_joint.head()
# +
missing_metadata_feature_info_joint = missing_metadata_feature_info_joint[['Attribute','Type']]
all_feature_types = pd.read_csv('metadata_feature_types.tsv',sep="\t").dropna()
all_feature_types = pd.concat([all_feature_types,missing_metadata_feature_info_joint])
# must be in joint dataset
all_feature_types.index = all_feature_types['Attribute']
# features that have filtered AZDIAS entry
all_feature_types = all_feature_types.loc[ set(all_feature_types['Attribute'] ).intersection( (set(joint_dataset.columns ))) ]
plt.hist(all_feature_types['Type'])
# -
# **NOTE**: below I am copying some code from `EDA/03_Concatenate_and_Impute.ipynb`
#
# It is essentially boilerplate that replaces values for categories that in fact encode the same thing
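# The repeated `replace` calls below can equivalently be expressed with a single mapping dict per column — a sketch of the same merging logic on a toy series (the class codes here mirror `LP_FAMILIE_GROB`):

```python
import numpy as np
import pandas as pd

# One dict per column: each key is merged into its value (0 becomes missing)
lp_familie_grob_map = {4: 3, 5: 3, 7: 6, 8: 6, 10: 9, 11: 9, 0: np.nan}

s = pd.Series([0, 1, 4, 5, 7, 10, 11])
merged = s.replace(lp_familie_grob_map)
print(sorted(merged.dropna().unique()))  # [1.0, 3.0, 6.0, 9.0]
```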
# ## b. standardizing selected features
# +
print("Before standardization we have classes: {}".format(np.unique(joint_dataset['LP_FAMILIE_GROB'].dropna())))
joint_dataset['LP_FAMILIE_GROB'].replace(4,3,inplace=True)
joint_dataset['LP_FAMILIE_GROB'].replace(5,3,inplace=True)
joint_dataset['LP_FAMILIE_GROB'].replace(7,6,inplace=True)
joint_dataset['LP_FAMILIE_GROB'].replace(8,6,inplace=True)
joint_dataset['LP_FAMILIE_GROB'].replace(10,9,inplace=True)
joint_dataset['LP_FAMILIE_GROB'].replace(11,9,inplace=True)
joint_dataset['LP_FAMILIE_GROB'].replace(0,np.nan,inplace=True)
print("After standardization we have classes: {}".format(np.unique(joint_dataset['LP_FAMILIE_GROB'].dropna())))
# +
print("Before standardization we have classes: {}".format(np.unique(joint_dataset['LP_STATUS_GROB'].dropna())))
joint_dataset['LP_STATUS_GROB'].replace(2,1,inplace=True)
joint_dataset['LP_STATUS_GROB'].replace(4,3,inplace=True)
joint_dataset['LP_STATUS_GROB'].replace(5,3,inplace=True)
joint_dataset['LP_STATUS_GROB'].replace(7,6,inplace=True)
joint_dataset['LP_STATUS_GROB'].replace(9,8,inplace=True)
print("After standardization we have classes: {}".format(np.unique(joint_dataset['LP_STATUS_GROB'].dropna())))
# +
joint_dataset['PRAEGENDE_JUGENDJAHRE_SIM']= joint_dataset['PRAEGENDE_JUGENDJAHRE']
print("Before standardization we have classes: {}".format(np.unique(joint_dataset['PRAEGENDE_JUGENDJAHRE'].dropna())))
joint_dataset['PRAEGENDE_JUGENDJAHRE_SIM'].replace(2,1,inplace=True)
joint_dataset['PRAEGENDE_JUGENDJAHRE_SIM'].replace(4,3,inplace=True)
joint_dataset['PRAEGENDE_JUGENDJAHRE_SIM'].replace(6,5,inplace=True)
joint_dataset['PRAEGENDE_JUGENDJAHRE_SIM'].replace(7,5,inplace=True)
joint_dataset['PRAEGENDE_JUGENDJAHRE_SIM'].replace(9,8,inplace=True)
joint_dataset['PRAEGENDE_JUGENDJAHRE_SIM'].replace(11,10,inplace=True)
joint_dataset['PRAEGENDE_JUGENDJAHRE_SIM'].replace(12,10,inplace=True)
joint_dataset['PRAEGENDE_JUGENDJAHRE_SIM'].replace(13,10,inplace=True)
joint_dataset['PRAEGENDE_JUGENDJAHRE_SIM'].replace(15,14,inplace=True)
print("After standardization we have classes: {}".format(np.unique(joint_dataset['PRAEGENDE_JUGENDJAHRE_SIM'].dropna())))
# +
joint_dataset['CAMEO_DEUINTL_2015_SIM'] = joint_dataset['CAMEO_DEUINTL_2015']
print("Before standardization we have classes: {}".format(np.unique(joint_dataset['CAMEO_DEUINTL_2015_SIM'].dropna())))
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('12','11',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('13','11',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('14','11',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('15','11',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('22','21',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('23','21',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('24','21',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('25','21',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('32','31',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('33','31',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('34','31',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('35','31',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('42','41',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('43','41',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('44','41',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('45','41',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('52','51',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('53','51',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('54','51',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('55','51',inplace=True)
joint_dataset['CAMEO_DEUINTL_2015_SIM'].replace('XX',np.nan,inplace=True)
print("After standardization we have classes: {}".format(np.unique(joint_dataset['CAMEO_DEUINTL_2015_SIM'].dropna())))
# -
# ## c. encoding categorical values
# +
categorical_features = list(all_feature_types[all_feature_types['Type']=='categorical']['Attribute'])
categorical_features.append('dataset')
len(categorical_features)
# +
categorical_features_count = {'feature': [], 'counts':[]}
for f in categorical_features:
categorical_features_count['feature'].append(f)
categorical_features_count['counts'].append(len(np.unique(joint_dataset[f].dropna()) ) )
categorical_features_count = pd.DataFrame(categorical_features_count)
categorical_features_to_one_hot = list( categorical_features_count[categorical_features_count['counts']>2]['feature'])
print(len(categorical_features_to_one_hot))
plt.hist(categorical_features_count['counts'])
# -
print(joint_dataset.shape)
joint_dataset = pd.get_dummies(joint_dataset,columns=categorical_features_to_one_hot)
print(joint_dataset.shape)
# ## d. encode non-numerical (categorical and ordinal) features
numerical_features = list(all_feature_types[all_feature_types['Type']=='numerical']['Attribute'])
not_numerical_features = [f for f in joint_dataset.columns if f not in numerical_features]
# +
from sklearn.preprocessing import LabelEncoder
LabelEncoders = { x: LabelEncoder() for x in not_numerical_features }
for idx,feature in enumerate(not_numerical_features):
print("{}/{}: {}".format(idx+1,len(not_numerical_features),feature ))
# NA mask - label encoder cannot handle missing values (i.e. NAs)
not_na_mask = joint_dataset[feature].notnull() # FALSE for NA, TRUE for some value
joint_dataset[feature][not_na_mask] = LabelEncoders[feature].fit_transform( joint_dataset[feature][not_na_mask] )
# -
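# A minimal sketch of the mask-then-encode pattern used above, on toy data (`LabelEncoder` cannot handle NaN directly, so we only transform the non-missing entries):

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder

s = pd.Series(['low', 'high', np.nan, 'low'], dtype=object)
not_na = s.notnull()                       # False for NaN entries
le = LabelEncoder()
s.loc[not_na] = le.fit_transform(s[not_na])
print(list(s))  # 'high' -> 0, 'low' -> 1, NaN left untouched
```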
# ## e. Impute features
#
# Here, the `RESPONSE` column is also going to be imputed for TEST dataset. It is going to be one possible solution to the supervised learning problem.
#
# I am going to evaluate such predictions, and compare them with other explicit models
# +
# %%time
import sklearn
from sklearn.experimental import enable_iterative_imputer
print('The scikit-learn version is {}.'.format(sklearn.__version__))
from sklearn.impute import IterativeImputer
ie = IterativeImputer(n_nearest_features=100) # 100 takes 2h 20min!, 75 is fine, takes around 30-45 minutes on my computer
joint_dataset_imputed = ie.fit_transform(joint_dataset) # ~ 30 min, returns numpy array
joint_dataset_imputed_df = pd.DataFrame(joint_dataset_imputed, columns= joint_dataset.columns, index = joint_dataset.index )
# -
for i,feature in enumerate(not_numerical_features):
if(feature=='RESPONSE'):
print("Omitting response rounding")
continue
#print("{}/{} {}".format(i+1,len(not_numerical_features), feature))
joint_dataset_imputed_df[feature] = joint_dataset_imputed_df[feature].round()
# +
imputed_features_counts = {}
imputed_features_counts['Attribute'] =[]
imputed_features_counts['Before_imputation'] =[]
imputed_features_counts['After_imputation'] =[]
for idx,f in enumerate(not_numerical_features):
if(f=='RESPONSE'):
print("Omitting response, as we want to keep probabilities")
continue
#print("{}/{}: {}".format(idx+1,len(not_numerical_features),f))
imputed_features_counts['Attribute'].append(f)
uniq_before = len(np.unique(joint_dataset[f].dropna() ))
imputed_features_counts['Before_imputation'].append(uniq_before)
uniq_after = len(np.unique(joint_dataset_imputed_df[f].dropna() ))
imputed_features_counts['After_imputation'].append(uniq_after)
imputed_features_counts = pd.DataFrame.from_dict(imputed_features_counts)
imputed_features_counts.sort_values('Before_imputation',ascending=False, inplace=True)
# show the different number of classes before and after imputation
imputed_features_counts[imputed_features_counts['Before_imputation']!=imputed_features_counts['After_imputation']]
# -
# replace seemingly new classes with NANs
for idx,feature in enumerate(not_numerical_features):
    if feature == 'RESPONSE':
print("Omitting response, as we want to keep probabilities")
continue
#print("{}/{}: {}".format(idx+1,len(not_numerical_features),feature))
imputed_clusters = np.unique(joint_dataset_imputed_df[feature].dropna())
original_clusters = np.unique(joint_dataset[feature].dropna())
unobserved_classes = set(imputed_clusters).difference( set(original_clusters) )
for uclass in unobserved_classes:
joint_dataset_imputed_df[feature].replace(uclass,np.nan, inplace=True)
# +
imputed_features_counts = {}
imputed_features_counts['Attribute'] =[]
imputed_features_counts['Before_imputation'] =[]
imputed_features_counts['After_imputation'] =[]
for idx,f in enumerate(not_numerical_features):
if(f=='RESPONSE'):
print("Omitting response, as we want to keep probabilities")
continue
#print("{}/{}: {}".format(idx+1,len(not_numerical_features),f))
imputed_features_counts['Attribute'].append(f)
uniq_before = len(np.unique(joint_dataset[f].dropna() ))
imputed_features_counts['Before_imputation'].append(uniq_before)
uniq_after = len(np.unique(joint_dataset_imputed_df[f].dropna() ))
imputed_features_counts['After_imputation'].append(uniq_after)
imputed_features_counts = pd.DataFrame.from_dict(imputed_features_counts)
imputed_features_counts.sort_values('Before_imputation',ascending=False, inplace=True)
imputed_features_counts[imputed_features_counts['Before_imputation']!=imputed_features_counts['After_imputation']]
# +
# %%time
empty_cells = pd.isnull(joint_dataset_imputed_df).sum()*100.0/joint_dataset_imputed_df.shape[0]
empty_cells = empty_cells.sort_values(ascending=False)
empty_cells[:50].plot(figsize=(20,3),kind='bar') # bar plot of first 50 most missing features
empty_cells = pd.DataFrame({'attribute':empty_cells.index, 'missingness':empty_cells})
#empty_cells = empty_cells[empty_cells['attribute']!='RESPONSE']
empty_cells
# -
# **A second round of imputation with a frequency imputer**
# +
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy= 'most_frequent')
original_responses = joint_dataset_imputed_df['RESPONSE']
joint_dataset_imputed_df = pd.DataFrame(imputer.fit_transform(joint_dataset_imputed_df),columns= joint_dataset_imputed_df.columns, index = joint_dataset_imputed_df.index)
joint_dataset_imputed_df['RESPONSE'] = original_responses
# -
# %%time
empty_cells = pd.isnull(joint_dataset_imputed_df).sum()*100.0/joint_dataset_imputed_df.shape[0]
empty_cells = empty_cells.sort_values(ascending=False)
empty_cells[:50].plot(figsize=(20,3),kind='bar') # bar plot of first 50 most missing features
joint_dataset_imputed_df.to_csv("../arvato_data_processed/MAILOUTimputed.csv")
# # RESPONSE variable
# Before attempting any supervised learning method it might be worthwhile to notice the distribution of the target binary variable `RESPONSE`. We see that the feature is imbalanced, and this will influence subsequent modelling approaches, that abandon `accuracy` and focus on `ROC-AUC` approaches.
plt.hist(joint_dataset_imputed_df['RESPONSE'],log=True)
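# A small sketch (on synthetic labels, not the real `RESPONSE` values) of why accuracy is misleading under this kind of imbalance while ROC-AUC is not:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic imbalance: 95 negatives, 5 positives
y_true = np.array([0] * 95 + [1] * 5)

# A useless classifier that always predicts the majority class
y_majority = np.zeros(100)
print(accuracy_score(y_true, y_majority))   # 0.95 -- looks great, learns nothing
print(roc_auc_score(y_true, y_majority))    # 0.5  -- no better than chance

# Scores that rank every positive above every negative get a perfect AUC
scores = np.concatenate([np.linspace(0.0, 0.4, 95), np.linspace(0.6, 1.0, 5)])
print(roc_auc_score(y_true, scores))        # 1.0
```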
|
EDA/04_MAILOUT.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ##### An example of using foresee from a Jupyter notebook
import warnings
warnings.filterwarnings("ignore")
# +
import pandas as pd
import numpy as np
from io import StringIO
import importlib_resources
# import main from foresee.scripts
from foresee.scripts.main import prepare_fit_report
# +
# load the sample time-series dataframe
ts_df_txt = importlib_resources.files('foresee.data').joinpath('basic_time_series_data.csv').read_text()
ts_df = pd.read_csv(StringIO(ts_df_txt))
ts_df.head()
# +
# available forecasting models
model_list = ['ewm_model', 'fft', 'holt_winters', 'prophet', 'sarimax']
result, _ = prepare_fit_report(ts_df.copy(), model_list=model_list)
|
foresee/examples/basic tutorial [ one time series as a single column dataframe ].ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from numpy import *
import numpy as np
from numpy.core.umath import multiply
import matplotlib.pyplot as plt
# +
# Helper utilities
import random
# Load the data
def loadDataSet(filename):
dataMat = []
labelMat = []
file = open(filename)
for line in file.readlines():
lineArr = line.strip().split('\t')
        # Gotcha! Use float() to convert the members of dataMat and labelMat to floats.
        # Otherwise the default dtype is <U9, a fixed-width unicode string type, which cannot
        # take part in the later multiply() computation and raises the error:
        # ufunc 'multiply' did not contain a loop with signature matching types dtype('<U32')
        # dtype('<U32') dtype('<U32')
dataMat.append([float(lineArr[0]), float(lineArr[1])])
labelMat.append(float(lineArr[2]))
return dataMat, labelMat
# Randomly return a j that is not equal to i
def randomSelectJ(i, m):
j = i
while j == i:
        # Gotcha! random.uniform() returns a float, which cannot be used for the slicing later on, so convert it to int
j = int(random.uniform(0, m))
return j
# Clip aj into the range [L, H]
def clipAlpha(aj, H, L):
if aj > H:
aj = H
if L > aj:
aj = L
return aj
# -
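# A quick sanity check of the clipping helper: clipped values must always land inside `[L, H]`. The cell below redefines `clipAlpha` so it is self-contained.

```python
# Self-contained copy of the helper above: clip aj into [L, H]
def clipAlpha(aj, H, L):
    if aj > H:
        aj = H
    if L > aj:
        aj = L
    return aj

print(clipAlpha(1.5, H=1.0, L=0.0))   # 1.0 -- clipped down to the upper bound
print(clipAlpha(-0.2, H=1.0, L=0.0))  # 0.0 -- clipped up to the lower bound
print(clipAlpha(0.3, H=1.0, L=0.0))   # 0.3 -- already inside [L, H]
```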
dataMat, labelMat = loadDataSet('MLiA/SVM/testSet.txt')
dataArray = np.array(dataMat)
plt.scatter(dataArray[:, 0], dataArray[:, 1])
# # Simplified SMO implementation
def simpleSMO(dataMatIn, classLabels, C, toler, maxIter):
    # Convert the input features into a matrix (shape 100*2); convert the labels into a matrix and transpose (shape 100*1)
dataMat = mat(dataMatIn)
labelMat = mat(classLabels).transpose()
    # Initialize b, the sample count m and the feature count n
b = 0
m, n = shape(dataMat)
    # Initialize the parameters alpha as an all-zero m*1 vector, i.e. 100*1
alphas = mat(zeros((m, 1)))
    # Initialize the pass counter iter to 0; the loop ends when iter == maxIter
iter = 0
while iter < maxIter:
alphaPairsChanged = 0
for i in range(m):
# f(xi) = w.T * x + b
fxi = float(multiply(alphas, labelMat).T * (dataMat * dataMat[i, :].T)) + b
            # The error Ei = f(xi) - yi, used in later computations
Ei = fxi - float(labelMat[i])
            # todo (open question) check whether the KKT conditions are violated
if ((labelMat[i] * Ei < -toler) and (alphas[i] < C)) or ((labelMat[i] * Ei > toler) and (alphas[i] > 0)):
                # Now handle the second parameter alphaJ; since this is the simplified SMO, just pick j at random
j = randomSelectJ(i, m)
                # Compute f(xj) and Ej, the same way as for i
fxj = float(multiply(alphas, labelMat).T * (dataMat * dataMat[j, :].T)) + b
Ej = fxj - float(labelMat[j])
                # Copy ai and aj into fresh memory so the old values are not overwritten
alphaIOld = alphas[i].copy()
alphaJOld = alphas[j].copy()
                # Set the upper bound H and lower bound L for alpha, used for clipping so it stays within (0, C)
if labelMat[i] != labelMat[j]:
L = max(0, alphas[j] - alphas[i])
H = min(C, C + alphas[j] - alphas[i])
else:
L = max(0, alphas[j] + alphas[i] - C)
H = min(C, alphas[j] + alphas[i])
if L == H:
print("L==H")
continue
                # Compute eta = K11 + K22 - 2*K12; here we take its negation
eta = 2 * dataMat[i, :] * dataMat[j, :].T - dataMat[i, :] * dataMat[i, :].T \
- dataMat[j, :] * dataMat[j, :].T
if eta >= 0:
print("eta >= 0")
continue
                # Update aj. For the full derivation see "Statistical Learning Methods", Section 7.4.1, Theorem 7.6
alphas[j] -= labelMat[j] * (Ei - Ej) / eta
alphas[j] = clipAlpha(alphas[j], H, L)
                # Check whether the change is too small
if abs(alphas[j] - alphaJOld) < 0.00001:
print("j not moving enough")
continue
                # Update ai using aj
alphas[i] += labelMat[j] * labelMat[i] * (alphaJOld - alphas[j]) # update i by the same amount as j
                # todo (open question) update b. If ai and aj lie within (0, C), then b1 == b2, so b = b1 = b2
                # todo If ai or aj equals 0 or C, any value between bi and bj satisfies KKT; in that case use the midpoint of bi and bj as b
                # todo For the full derivation see "Statistical Learning Methods", Section 7.4.2, part 3
bi = b - Ei - labelMat[i] * dataMat[i, :] * dataMat[i, :].T * (alphas[i] - alphaIOld) - labelMat[j] * \
dataMat[j, :] * dataMat[i, :].T * (alphas[j] - alphaJOld)
bj = b - Ej - labelMat[i] * dataMat[i, :] * dataMat[j, :].T * (alphas[i] - alphaIOld) - labelMat[j] * \
dataMat[j, :] * dataMat[j, :].T * (alphas[j] - alphaJOld)
if (0 < alphas[i]) and (C > alphas[i]):
b = bi
elif (0 < alphas[j]) and (C > alphas[j]):
b = bj
else:
b = (bi + bj) / 2.0
                # Increment the count of changed alpha pairs, used to control the loop
alphaPairsChanged += 1
print("iter: %d i:%d, pairs changed %d" % (iter, i, alphaPairsChanged))
if alphaPairsChanged == 0:
iter += 1
else:
iter = 0
print("iteration number: %d" % iter)
return b, alphas
b, alphas = simpleSMO(dataMat,labelMat, 0.6, 0.001, 40)
labelMat1 = mat(labelMat).transpose()
y = multiply(alphas, labelMat1).T * multiply(dataArray , dataArray) + b
yArray = np.array(y)
yArray
plt.plot(dataArray[:, 0], dataArray[:, 1], '.')
plt.scatter(yArray[:, 0], yArray[:, 1])
for i in range(100):
if alphas[i] > 0:
print(dataArray[i],labelMat[i])
if labelMat[i] >0:
plt.plot(dataArray[i,0],dataArray[i,1],'r>')
else:
plt.plot(dataArray[i,0],dataArray[i,1],'ko')
plt.show()
|
MLiA/SVM/simpleSMO.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# 
# Back to the main [Index](../Index.ipynb)
# # Community Modeling
# Community Modeling examples with modelseedpy models.
# ## Examples
# * [notebook1](notebook.ipynb) topic
|
examples/Community Modeling/Index.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''.venv'': venv)'
# name: python3
# ---
# # Model Profiler API Examples
#
# This demonstrates how to use the [profile_model](https://siliconlabs.github.io/mltk/docs/python_api/operations.html#profile_model) API.
#
# Refer to the [Model Profiler](https://siliconlabs.github.io/mltk/docs/guides/model_profiler.html) guide for more details.
#
# __NOTES:__
# - Click here: [](https://colab.research.google.com/github/siliconlabs/mltk/blob/master/mltk/examples/profile_model.ipynb) to run this example interactively in your browser
# - Refer to the [Notebook Examples Guide](https://siliconlabs.github.io/mltk/docs/guides/notebook_examples_guide.html) for how to run this example locally in VSCode
# ## Install MLTK Python Package
# Install the MLTK Python package (if necessary)
# !pip install --upgrade silabs-mltk
# ## Import Python Packages
# +
# Import the standard Python packages used by the examples
import os
import sys
import urllib
import shutil
import tempfile
# Import the necessary MLTK APIs
from mltk.core import profile_model
from mltk.utils.commander import query_platform
# -
# ## Download .tflite model file
#
# A `.tflite` model file is required to run these examples.
# The following code downloads a model.
#
# __NOTE:__ Update `TFLITE_MODEL_URL` or `tflite_path` to point to your model if necessary
# +
# Use the .tflite model found here:
# https://github.com/siliconlabs/mltk/tree/master/mltk/utils/test_helper/data/
# NOTE: Update this URL to point to your model if necessary
TFLITE_MODEL_URL = 'https://github.com/siliconlabs/mltk/raw/master/mltk/utils/test_helper/data/image_example1.tflite'
# Download the .tflite file and save to the temp dir
tflite_path = os.path.normpath(f'{tempfile.gettempdir()}/image_example1.tflite')
with open(tflite_path, 'wb') as dst:
with urllib.request.urlopen(TFLITE_MODEL_URL) as src:
shutil.copyfileobj(src, dst)
# -
# ## Example 1: Profile .tflite file in basic simulator
#
# This example profiles the `.tflite` model file in the "basic simulator" of the model profiler.
# +
# Profile the tflite model using the "basic simulator"
# NOTE: Update tflite_path to point to your model if necessary
profiling_results = profile_model(tflite_path)
# Print the profiling results
print(profiling_results)
# -
# ## Example 2: Profile .tflite file in MVP hardware simulator
#
# This example profiles the `.tflite` model file in the MVP hardware accelerator simulator of the model profiler.
# +
# Profile the tflite model using the MVP hardware accelerator simulator
# NOTE: Update tflite_path to point to your model if necessary
profiling_results = profile_model(tflite_path, accelerator='MVP')
# Print the profiling results
print(profiling_results)
# -
# ## Example 3: Profile .tflite file on physical device
#
# This example profiles the `.tflite` model file on a physically connected embedded device.
#
# __NOTE:__ A supported development board must be connected and properly enumerated for this example to work.
# +
# Determine the currently connected device
# Just print an error and exit if no device is connected
try:
platform_name = query_platform()
except Exception as e:
print(f'Failed to determine connected device, err:\n{e}')
sys.exit(0)
print(f'Connected device platform: {platform_name}')
accelerator = None
if platform_name in ('brd2601a', 'brd4186b'):
# Use the MVP hardware accelerator if the platform supports it
accelerator = 'MVP'
# Profile the tflite model on the physical device
profiling_results = profile_model(
tflite_path,
accelerator=accelerator,
use_device=True
)
# Print the profiling results
print(profiling_results)
# -
# ## Example 4: Profile model before training
#
# Training a model can be very time-consuming, and it is useful to know how efficiently a
# model will execute on an embedded device before investing time and energy into training it.
# For this reason, the MLTK [profile_model](https://siliconlabs.github.io/mltk/docs/python_api/operations.html#profile_model) API features a `build` argument to build a model and profile it _before_ the model is fully trained.
#
# In this example, the [image_example1](https://siliconlabs.github.io/mltk/docs/python_api/models/examples/image_example1.html) model is built
# at command-execution-time and profiled in the MVP hardware simulator.
# Note that _only_ the [model specification](https://siliconlabs.github.io/mltk/docs/guides/model_specification.html) script is required,
# it does _not_ need to be trained first.
# +
# Build the image_example1 model then profile it using the MVP hardware accelerator simulator
# NOTE: Since build=True, the model does NOT need to be trained first
profiling_results = profile_model('image_example1', accelerator='MVP', build=True)
# Print the profiling results
print(profiling_results)
|
mltk/examples/profile_model.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ralsouza/python_fundamentos/blob/master/src/06_Modulos_Analise_de_Dados/08_Pandas_Dataframes.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="LFrKL1OH570U" colab_type="code" colab={}
# Import DataFrame from Pandas
from pandas import DataFrame
# + id="DySSr2_UqqV6" colab_type="code" colab={}
data = {'State': ['Santa Catarina','Paraná','Goiás','Bahia','Minas Gerais'],
'Year': [2002,2003,2004,2005,2006],
'Population': [1.5,1.7,3.6,2.4,2.9]}
# + id="kEE-fxh6rj_H" colab_type="code" colab={}
frame = DataFrame(data)
# + id="UpgCw-Qzrp8B" colab_type="code" outputId="73c3e738-a799-41f5-b354-71574dea43de" colab={"base_uri": "https://localhost:8080/", "height": 204}
frame
# + id="_6aAbQchtJuQ" colab_type="code" outputId="21e7f24a-ff62-43ca-c244-791367c14562" colab={"base_uri": "https://localhost:8080/", "height": 35}
type(frame)
# + id="C_s1Yi2ktP71" colab_type="code" outputId="df1b72a9-9cc0-4015-bce6-8ece9110c760" colab={"base_uri": "https://localhost:8080/", "height": 204}
# Setting the column names
DataFrame(data, columns=['Year','State','Population'])
# + id="dSkQkWd2x9pR" colab_type="code" outputId="5c3c7733-68d4-4ed1-a28a-8e41ce6faa76" colab={"base_uri": "https://localhost:8080/", "height": 204}
# Creating another dataframe with the same previous data, but adding another column and defining the names of the indexes
# Note that the Debit column will be filled with NaN values, because we did not pass these values
# In the next file, 09_Pandas_Dataframes_Pandas_NumPy.ipynb, we will fill the Debit column
frame2 = DataFrame(data, columns=['Year','State','Population','Debit'],
index=['one','two','three','four','five'])
frame2
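# As a quick preview of filling the `Debit` column (covered fully in the next notebook, as noted above): assigning a scalar broadcasts it to every row, replacing the NaN values. A self-contained sketch using the same data:

```python
from pandas import DataFrame

data = {'State': ['Santa Catarina', 'Paraná', 'Goiás', 'Bahia', 'Minas Gerais'],
        'Year': [2002, 2003, 2004, 2005, 2006],
        'Population': [1.5, 1.7, 3.6, 2.4, 2.9]}
frame2 = DataFrame(data, columns=['Year', 'State', 'Population', 'Debit'],
                   index=['one', 'two', 'three', 'four', 'five'])

frame2['Debit'] = 16.5           # scalar broadcast to every row
print(frame2['Debit'].tolist())  # [16.5, 16.5, 16.5, 16.5, 16.5]
```

# Assigning a list or NumPy array of length 5 would work the same way, giving each row its own value.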
# + id="tWbKLkl_zG07" colab_type="code" outputId="b523513c-fbbf-40db-d473-49266f65ed15" colab={"base_uri": "https://localhost:8080/", "height": 125}
# Select a column by name using bracket notation
frame2['State']
# + id="hH3MIhAO2V4m" colab_type="code" outputId="f438517e-2861-435a-c477-3aa8e1c16752" colab={"base_uri": "https://localhost:8080/", "height": 35}
# Print the indexes
frame2.index
# + id="2RNeS9Ic2hoj" colab_type="code" outputId="c6fb96e2-179b-44e0-fa05-08e73ebf9c5b" colab={"base_uri": "https://localhost:8080/", "height": 35}
# Print the column names
frame2.columns
# + id="zZQp2jqf2pvA" colab_type="code" outputId="5296ae4e-1663-4ba0-8035-3f8b358f9538" colab={"base_uri": "https://localhost:8080/", "height": 107}
# Print the values
frame2.values
# + id="4Tu5k8JA2u1v" colab_type="code" outputId="4253d56f-bbfd-4a54-b248-25f1c30061b3" colab={"base_uri": "https://localhost:8080/", "height": 107}
# Show the data types
frame2.dtypes
# + id="wNPBIow3282Z" colab_type="code" outputId="8806fb2c-5999-4681-82fb-5c7ac1dc9de0" colab={"base_uri": "https://localhost:8080/", "height": 125}
# Select a column using attribute notation
frame2.Year
# + id="Y4YLJnts3hS3" colab_type="code" outputId="7488eb45-31a0-407b-f0df-d23b6cb21872" colab={"base_uri": "https://localhost:8080/", "height": 111}
# Slicing rows with bracket notation
# Note that the end index 2 is exclusive
frame2[:2]
# + id="1UilXbLg3uiD" colab_type="code" colab={}
|
src/06_Modulos_Analise_de_Dados/08_Pandas_Dataframes.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Artificial Intelligence Nanodegree
#
# ## Convolutional Neural Networks
#
# ---
#
# In your upcoming project, you will download pre-computed bottleneck features. In this notebook, we'll show you how to calculate VGG-16 bottleneck features on a toy dataset. Note that unless you have a powerful GPU, computing the bottleneck features takes a significant amount of time.
#
# ### 1. Load and Preprocess Sample Images
#
# Before supplying an image to a pre-trained network in Keras, there are some required preprocessing steps. You will learn more about this in the project; for now, we have implemented this functionality for you in the first code cell of the notebook. We have imported a very small dataset of 8 images and stored the preprocessed image input as `img_input`. Note that the dimensionality of this array is `(8, 224, 224, 3)`. In this case, each of the 8 images is a 3D tensor, with shape `(224, 224, 3)`.
# +
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing import image
import numpy as np
import glob
img_paths = glob.glob("images/*.jpg")
def path_to_tensor(img_path):
# loads RGB image as PIL.Image.Image type
img = image.load_img(img_path, target_size=(224, 224))
# convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
x = image.img_to_array(img)
# convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
return np.expand_dims(x, axis=0)
def paths_to_tensor(img_paths):
list_of_tensors = [path_to_tensor(img_path) for img_path in img_paths]
return np.vstack(list_of_tensors)
# calculate the image input. you will learn more about how this works the project!
img_input = preprocess_input(paths_to_tensor(img_paths))
print(img_input.shape)
# -
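# For intuition, VGG-16's `preprocess_input` (Keras' 'caffe' mode) roughly amounts to flipping RGB to BGR and subtracting per-channel ImageNet means. Below is a NumPy sketch; the mean values are an assumption based on the commonly cited Keras defaults, and the real `preprocess_input` should always be used in practice:

```python
import numpy as np

# Commonly cited ImageNet channel means, in BGR order (assumed values)
IMAGENET_MEANS_BGR = np.array([103.939, 116.779, 123.68])

def preprocess_sketch(batch_rgb):
    """Rough stand-in for VGG-16 preprocessing: RGB -> BGR, subtract channel means."""
    batch_bgr = np.asarray(batch_rgb, dtype='float64')[..., ::-1]  # flip channel axis
    return batch_bgr - IMAGENET_MEANS_BGR

batch = np.zeros((2, 224, 224, 3))
print(preprocess_sketch(batch).shape)  # (2, 224, 224, 3)
```

# This is why the preprocessed values are centered around zero rather than lying in `[0, 255]`.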
# ### 2. Recap How to Import VGG-16
#
# Recall how we import the VGG-16 network (including the final classification layer) that has been pre-trained on ImageNet.
#
# 
from keras.applications.vgg16 import VGG16
model = VGG16()
model.summary()
# For this network, `model.predict` returns a 1000-dimensional probability vector containing the predicted probability that an image returns each of the 1000 ImageNet categories. The dimensionality of the obtained output from passing `img_input` through the model is `(8, 1000)`. The first value of `8` merely denotes that 8 images were passed through the network.
model.predict(img_input).shape
# ### 3. Import the VGG-16 Model, with the Final Fully-Connected Layers Removed
#
# When performing transfer learning, we need to remove the final layers of the network, as they are too specific to the ImageNet database. This is accomplished in the code cell below.
#
# 
from keras.applications.vgg16 import VGG16
model = VGG16(include_top=False)
model.summary()
# ### 4. Extract Output of Final Max Pooling Layer
#
# Now, the network stored in `model` is a truncated version of the VGG-16 network, where the final three fully-connected layers have been removed. In this case, `model.predict` returns a 3D array (with dimensions $7\times 7\times 512$) corresponding to the final max pooling layer of VGG-16. The dimensionality of the obtained output from passing `img_input` through the model is `(8, 7, 7, 512)`. The first value of `8` merely denotes that 8 images were passed through the network.
print(model.predict(img_input).shape)
# This is exactly how we calculate the bottleneck features for your project!
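# One common way to feed these `(7, 7, 512)` bottleneck features into a small classifier is to collapse the spatial axes with global average pooling, giving a single 512-dimensional vector per image. A NumPy sketch using randomly generated stand-in features:

```python
import numpy as np

# Stand-in bottleneck features with the shape reported above: (8, 7, 7, 512)
features = np.random.rand(8, 7, 7, 512)

# Global average pooling: average over the two spatial axes
pooled = features.mean(axis=(1, 2))
print(pooled.shape)  # (8, 512)
```

# In Keras this corresponds to adding a `GlobalAveragePooling2D` layer, or passing `pooling='avg'` to `VGG16`.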
|
transfer-learning-Keras/bottleneck_features.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:apache_beam]
# language: python
# name: conda-env-apache_beam-py
# ---
# ## Summary
#
# Prepare the training / validation / test datasets using Apache Beam.
#
# ----
# ## Imports
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.runners.interactive.caching import CacheManager
# ## Parameters
options = PipelineOptions(
project="deep-protein-gen", temp_location="gs://strokach-dataflow-temp", streaming=False,
)
options.display_data()
# ## Workflow
# +
p = beam.Pipeline(options=options)
pcoll = (
    p
    | beam.io.ReadFromParquet("gs://deep-protein-gen/training_data/database_id=G3DSA%253A1.10.10.180")
    # Placeholder transform -- the actual training/validation/test preparation steps go here
    | beam.Map(print)
)
# -
|
notebooks/02_prepare_dataset_beam.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# +
# %matplotlib inline
import pywt
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
from pandas import set_option
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
filename = 'data/facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data
# -
training_data.describe()
len(np.unique(training_data['Well Name']))
# +
sns.set_context("talk")
nrows=2
fig, axs = plt.subplots(figsize=(15,20),
nrows=nrows, ncols=5, # fix as above
#gridspec_kw=dict(hspace=0.4),
sharey='row',sharex=False) # Much control of gridspec
grouped = training_data.groupby(['Well Name'])
targets = zip(grouped.groups.keys(), axs.flatten())
for i in range(nrows):
axs[i,0].invert_yaxis()
for i, (key, ax) in enumerate(targets):
ax.plot(grouped.get_group(key).PE,grouped.get_group(key).Depth)
ax.set_title('%s'%key)
ax.set_xlim(0,10)
# -
print(pywt.wavelist())
pywt.Modes.modes
w = pywt.Wavelet('db2')
w.dec_len
# +
cA=[]
cD=[]
grouped = training_data.groupby(['Well Name'])
for key in grouped.groups.keys():
tempA, tempD = pywt.dwt(grouped.get_group(key).GR,w,mode='symmetric')
cA.append(tempA)
cD.append(tempD)
print(len(grouped.get_group(key).GR),len(tempA),len(tempD))
if (key == 'SHANKLE'):
fig, ax = plt.subplots()
ax.plot(tempD,range(len(tempD)))
ax.plot(tempA,range(len(tempA)))
ax.plot(grouped.get_group(key).GR,np.linspace(0,227,449))
sum([len(cA[i]) for i in range(len(cA))])
# -
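# The `db2`/`db3` wavelets used here generalize the simplest case, the Haar wavelet. A single-level Haar DWT can be sketched in plain NumPy (even-length input assumed), which makes the halving of signal length seen in the printed coefficient counts explicit:

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar DWT of an even-length 1-D signal (sketch)."""
    x = np.asarray(x, dtype=float)
    s = np.sqrt(2.0)
    approx = (x[0::2] + x[1::2]) / s  # approximation: pairwise scaled sums
    detail = (x[0::2] - x[1::2]) / s  # detail: pairwise scaled differences
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: perfect reconstruction."""
    s = np.sqrt(2.0)
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / s
    x[1::2] = (approx - detail) / s
    return x

sig = np.array([4.0, 6.0, 10.0, 12.0, 14.0, 14.0])
approx, detail = haar_dwt(sig)
```

# `pywt.dwt(sig, 'haar')` produces the same coefficients (up to boundary handling), and each further level halves the length again.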
for key in grouped.groups.keys():
print(key)
db3 = pywt.Wavelet('db3')
data = grouped.get_group('SHANKLE').GR
depth = grouped.get_group('SHANKLE').Depth
pywt.dwt_max_level(len(data), db3)
# +
n_levels = 6
fig, axs = plt.subplots(figsize=(20,20),
nrows=2, ncols=3,
#gridspec_kw=dict(hspace=0.4),
sharey='row',sharex='col') # Much control of gridspec
for i in range(nrows):
axs[i,0].invert_yaxis()
temp_data = data
for i,ax in zip(range(1,n_levels+1),axs.flatten()):
cA,cD = pywt.dwt(temp_data,db3,mode='symmetric')
ax.plot(cA,np.linspace(min(depth),max(depth),len(cA)))
ax.plot(cD,np.linspace(min(depth),max(depth),len(cD)))
ax.plot(data,depth)
ax.set_title('SHANKLE GR DECOMP. LEVEL %d' % (i))
ax.set_ylabel('Depth')
ax.legend(['cA','cD','GR'])
temp_data = cA
# -
data2 = np.concatenate((data, data[-1:]))  # repeat the last sample to pad the signal to an even length
len(data2)
[cA6, cD6, cD5, cD4, cD3, cD2, cD1]= pywt.wavedec(data2, db3, level=6)
fig,ax = plt.subplots()
ax.invert_yaxis()
ax.plot(cA6,np.linspace(min(depth),max(depth),len(cA6)))
ax.plot(cD6,np.linspace(min(depth),max(depth),len(cD6)))
ax.plot(cD5,np.linspace(min(depth),max(depth),len(cD5)))
ax.plot(cD4,np.linspace(min(depth),max(depth),len(cD4)))
ax.plot(cD3,np.linspace(min(depth),max(depth),len(cD3)))
ax.plot(cD2,np.linspace(min(depth),max(depth),len(cD2)))
ax.plot(cD1,np.linspace(min(depth),max(depth),len(cD1)));
pywt.central_frequency(db3)
from scipy.interpolate import interp1d
# +
n_levels = 6
fig, axs = plt.subplots(figsize=(20,20),
nrows=2, ncols=3,
#gridspec_kw=dict(hspace=0.4),
sharey='row',sharex='col') # Much control of gridspec
for i in range(nrows):
axs[i,0].invert_yaxis()
temp_data = data
for i,ax in zip(range(1,n_levels+1),axs.flatten()):
cA,cD = pywt.dwt(temp_data,db3,mode='symmetric')
new_depth = np.linspace(min(depth),max(depth),len(cD))
f1 = interp1d(new_depth,cD,kind='linear')
f2 = interp1d(new_depth,cD,kind='nearest')
f3 = interp1d(new_depth,cA,kind='linear')
cD1 = f1(depth)
cD2 = f2(depth)
cA1 = f3(depth)
ax.plot(cA1,depth)
ax.plot(cD1,depth)
ax.plot(cD2,depth)
ax.plot(data,depth)
ax.set_title('SHANKLE GR DECOMP. LEVEL %d' % (i))
ax.set_ylabel('Depth')
ax.legend(['cA resampled','cD resampled linear','cD resampled nearest','GR'])
temp_data = cA
# -
|
wavelet_test.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# # Get the data-based distance errors by using Haversine distance instead of reported travel distance
# Survey data from Sweden and the Netherlands.
# + pycharm={"name": "#%%\n"}
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# + pycharm={"name": "#%%\n"}
import os
import subprocess
import sys
import yaml
import pandas as pd
def get_repo_root():
"""Get the root directory of the repo."""
dir_in_repo = os.path.dirname(os.path.abspath('__file__')) # os.getcwd()
return subprocess.check_output('git rev-parse --show-toplevel'.split(),
cwd=dir_in_repo,
universal_newlines=True).rstrip()
sys.path.append(get_repo_root())
ROOT_dir = get_repo_root()
import lib.sweden_sv as sweden_sv
import lib.netherlands as netherlands
import lib.genericvalidation as genericvalidation
with open(ROOT_dir + '/lib/regions.yaml') as f:
region_manager = yaml.load(f, Loader=yaml.FullLoader)
region_path = {
'sweden': {
'home_locations_path': ROOT_dir + "/dbs/sweden/homelocations.csv",
'tweets_calibration': ROOT_dir + "/dbs/sweden/geotweets_c.csv",
'tweets_validation': ROOT_dir + "/dbs/sweden/geotweets_v.csv",
'gt': sweden_sv.GroundTruthLoader()
},
'netherlands': {
'home_locations_path': ROOT_dir + "/dbs/netherlands/homelocations.csv",
'tweets_calibration': ROOT_dir + "/dbs/netherlands/geotweets_c.csv",
'tweets_validation': ROOT_dir + "/dbs/netherlands/geotweets_v.csv",
'gt': netherlands.GroundTruthLoader()
}
}
# + [markdown] pycharm={"name": "#%% md\n"}
# ## 1. Load trip survey data
# Sweden, the Netherlands
# + [markdown] pycharm={"name": "#%% md\n"}
# ### 1.1 Sweden
# + pycharm={"name": "#%%\n"}
region='sweden'
df_se = pd.read_csv(ROOT_dir + f"/dbs/{region}/survey_deso/day_trips.csv")
df_se = df_se.loc[:, ["sub_id", 'trip_id', 'trip_main_id', 'distance_main',
'date', "origin_main_deso", "desti_main_deso", 'trip_weight']]
df_se = df_se.drop_duplicates(subset=["sub_id", 'trip_id', 'trip_main_id'])
df_se["T"] = df_se["date"].apply(lambda x: pd.to_datetime(x))
df_se = df_se.loc[~df_se["T"].apply(lambda x: x.weekday()).isin([5, 6]), :]
df_se.dropna(axis=0, how='any', inplace=True)
df_se = df_se.loc[:, ["origin_main_deso",
"desti_main_deso",
'distance_main']].rename(columns={"origin_main_deso": 'origin',
"desti_main_deso": 'destination',
'distance_main': 'distance_network'})
# + pycharm={"name": "#%%\n"}
# calculate ODM-based distances
ground_truth = region_path[region]['gt']
# load zones
ground_truth.load_zones()
# + pycharm={"name": "#%%\n"}
distances_se = pd.DataFrame(genericvalidation.zone_distances(ground_truth.zones))
idx = distances_se.index.rename(['origin', 'destination'])
distances_se.set_index(idx, inplace=True)
distances_se = distances_se.reset_index()
distances_se.columns = ["origin", "destination", 'distance']
# + pycharm={"name": "#%%\n"}
distances_se.head()
# + pycharm={"name": "#%%\n"}
df_se = pd.merge(df_se, distances_se, on=["origin", "destination"])
df_se.head()
# -
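# For reference, the Haversine distance mentioned in the title — presumably what `genericvalidation.zone_distances` computes between zone centroids (an assumption) — can be sketched as:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, earth_radius_km=6371.0):
    """Great-circle distance in km between two (lat, lon) points given in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(a))

# One degree of latitude is roughly 111 km
print(round(haversine_km(0.0, 0.0, 1.0, 0.0), 1))
```

# The gap between this straight-line distance and `distance_network` is exactly the data-based error studied here.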
# ### 1.2 the Netherlands
# + pycharm={"name": "#%%\n"}
def trip_row(df):
row = df.iloc[0]
row['dest_zip'] = df.iloc[-1]['dest_zip']
row['dest_time'] = df.iloc[-1]['dest_time']
return row
region='netherlands'
sheet1 = pd.read_excel(ROOT_dir + f"/dbs/{region}/mobility_data/OViN2017_Databestand.xlsx")
trips = sheet1[
['OPID', 'AfstV', 'Wogem', 'Jaar', 'Maand', 'Dag', 'VerplID',
'VertUur', 'VertPC', 'AankUur', 'AankPC', 'FactorV']]
trips = trips.rename(columns={
'Wogem': 'home_city',
'Jaar': 'year',
'Maand': 'month',
'Dag': 'day',
'VerplID': 'trip_id',
'VertUur': 'origin_time',
'VertPC': 'origin_zip',
'AankUur': 'dest_time',
'AankPC': 'dest_zip',
'FactorV': 'weight_trip',
'AfstV': 'distance'
})
# Process
trips = trips.dropna(subset=['trip_id'])
trips = trips.dropna(subset=['distance'])
trips.loc[:, 'distance'] = trips.loc[:, 'distance'] / 10 # hectometer to km
trips = trips.groupby(['OPID', 'trip_id']).apply(trip_row)
trips['origin_zip'] = trips['origin_zip'].astype('int64')
trips['dest_zip'] = trips['dest_zip'].astype('int64')
df_nt = trips.loc[:, ['origin_zip',
'dest_zip',
'distance']].rename(columns={'origin_zip': 'origin',
'dest_zip': 'destination',
'distance': 'distance_network'})
# + pycharm={"name": "#%%\n"}
# calculate ODM-based distances
ground_truth = region_path[region]['gt']
# load zones
ground_truth.load_zones()
# + pycharm={"name": "#%%\n"}
distances_nt = pd.DataFrame(genericvalidation.zone_distances(ground_truth.zones))
idx = distances_nt.index.rename(['origin', 'destination'])
distances_nt.set_index(idx, inplace=True)
distances_nt = distances_nt.reset_index()
distances_nt.columns = ["origin", "destination", 'distance']
# + pycharm={"name": "#%%\n"}
distances_nt.head()
# + pycharm={"name": "#%%\n"}
df_nt = pd.merge(df_nt, distances_nt, on=["origin", "destination"])
df_nt.head()
# + [markdown] pycharm={"name": "#%% md\n"}
# ## 2. Merge two areas' data and save it
# + pycharm={"name": "#%%\n"}
df_se.loc[:, 'region'] = 'sweden'
df_nt.loc[:, 'region'] = 'netherlands'
df_se = df_se.loc[df_se.distance > 0.1, :]
df_nt = df_nt.loc[df_nt.distance > 0.1, :]
df = pd.concat([df_se, df_nt])
# + pycharm={"name": "#%%\n"}
df.to_csv(ROOT_dir + '/dbs/distance_error_data.csv', index=False)
|
src/py/7-distance-error-data-based.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import re
df=pd.read_csv("loan2.csv")
df
type(df['emp_length'])
# +
pattern=r'\d+'
emp_length2=[]
for value in df['emp_length']:
# print (value)
k=re.findall(pattern,value)
if (k):
# print (k[0])
emp_length2.append(k[0])
else:
emp_length2.append(0)
len(emp_length2)
# -
df['emp_length']=emp_length2
df.to_csv("loan3.csv")
|
Cleaning emp_length.ipynb
|
# ---
# jupyter:
# jupytext:
# formats: ipynb,../imodels/tests/notebooks//py:percent
# text_representation:
# extension: .py
# format_name: percent
# format_version: '1.3'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# %% pycharm={"is_executing": false}
# %load_ext autoreload
# %autoreload 2
import os
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(13)
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.tree import plot_tree, DecisionTreeClassifier
from sklearn import metrics
from scipy.io.arff import loadarff
# installable with: `pip install imodels`
from imodels import SLIMRegressor, BayesianRuleListClassifier, RuleFitRegressor, GreedyRuleListClassifier
from imodels import SLIMClassifier, OneRClassifier, BoostedRulesClassifier
# change working directory to project root
if os.getcwd().split('/')[-1] != 'imodels':
os.chdir('..')
def get_ames_data():
housing = fetch_openml(name="house_prices", as_frame=True)
housing_target = housing['target'].values
housing_data_numeric = housing['data'].select_dtypes('number').drop(columns=['Id']).dropna(axis=1)
feature_names = housing_data_numeric.columns.values
X_train_reg, X_test_reg, y_train_reg, y_test_reg = train_test_split(
housing_data_numeric.values, housing_target, test_size=0.75)
return X_train_reg, X_test_reg, y_train_reg, y_test_reg, feature_names
def get_diabetes_data():
'''load (classification) data on diabetes
'''
data = loadarff("tests/test_data/diabetes.arff")
data_np = np.array(list(map(lambda x: np.array(list(x)), data[0])))
X = data_np[:, :-1].astype('float32')
y_text = data_np[:, -1].astype('str')
y = (y_text == 'tested_positive').astype(int) # labels 0-1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.75) # split
feature_names = ["#Pregnant", "Glucose concentration test", "Blood pressure(mmHg)",
"Triceps skin fold thickness(mm)",
"2-Hour serum insulin (mu U/ml)", "Body mass index", "Diabetes pedigree function", "Age (years)"]
return X_train, X_test, y_train, y_test, feature_names
X_train_reg, X_test_reg, y_train_reg, y_test_reg, feat_names_reg = get_ames_data()
X_train, X_test, y_train, y_test, feat_names = get_diabetes_data()
def viz_classification_preds(probs, y_test):
'''look at prediction breakdown
'''
plt.subplot(121)
plt.hist(probs[:, 1][y_test == 0], label='Class 0')
plt.hist(probs[:, 1][y_test == 1], label='Class 1', alpha=0.8)
plt.ylabel('Count')
plt.xlabel('Predicted probability of class 1')
plt.legend()
    plt.subplot(122)
    plt.title('ROC curve')
    # use predicted probabilities rather than hard labels so the curve has more than one threshold
    fpr, tpr, thresholds = metrics.roc_curve(y_test, probs[:, 1])
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.plot(fpr, tpr)
plt.tight_layout()
plt.show()
# load some data
print('regression data', X_train_reg.shape, 'classification data', X_train.shape)
# %% [markdown]
# # rule sets
# Rule sets are models that create a set of (potentially overlapping) rules.
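# As intuition, a toy rule set can be sketched in plain Python: each rule is a predicate with a weight, and the prediction sums the weights of the rules that fire. This mirrors RuleFit's linear-model-over-rules idea, not imodels' actual implementation; the feature names and weights below are made up:

```python
# Each rule: (description, predicate over a feature dict, weight)
rules = [
    ('glucose > 140', lambda x: x['glucose'] > 140, 2.0),
    ('bmi > 30',      lambda x: x['bmi'] > 30,      1.0),
    ('age > 50',      lambda x: x['age'] > 50,      0.5),
]

def rule_set_score(x, rules, intercept=-1.5):
    """Sum the weights of all rules that fire (rules may overlap)."""
    return intercept + sum(w for _, pred, w in rules if pred(x))

patient = {'glucose': 150, 'bmi': 28, 'age': 55}
print(rule_set_score(patient, rules))  # -1.5 + 2.0 + 0.5 = 1.0
```

# The fitted models below learn both the rules and the weights from data.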
# %% [markdown]
# ### rulefit
# %% pycharm={"is_executing": false}
# fit a rulefit model
rulefit = RuleFitRegressor(max_rules=10)
rulefit.fit(X_train_reg, y_train_reg, feature_names=feat_names_reg)
# get test performance
preds = rulefit.predict(X_test_reg)
print(f'test r2: {metrics.r2_score(y_test_reg, preds):0.2f}')
# inspect and print the rules
rules = rulefit.get_rules()
rules = rules[rules.coef != 0].sort_values("support", ascending=False)
# 'rule' is how the feature is constructed
# 'coef' is its weight in the final linear model
# 'support' is the fraction of points it applies to
rules[['rule', 'coef', 'support']].style.background_gradient(cmap='viridis')
# %% [markdown]
# ## boosted stumps
# %%
# fit boosted stumps
brc = BoostedRulesClassifier(n_estimators=10)
brc.fit(X_train, y_train, feature_names=feat_names)
print(brc)
# look at performance
probs = brc.predict_proba(X_test)
viz_classification_preds(probs, y_test)
# %% [markdown]
# # rule lists
# %% [markdown]
# ### greedy rule lists
# **like a decision tree that only ever splits going left**
# %% pycharm={"is_executing": false}
# fit a greedy rule list
m = GreedyRuleListClassifier()
m.fit(X_train, y=y_train, feature_names=feat_names) # stores into m.rules_
probs = m.predict_proba(X_test)
# print the list
print(m)
# look at prediction breakdown
viz_classification_preds(probs, y_test)
# %% [markdown]
# ### oneR
# **fits a rule list restricted to use only one feature**
# %%
# fit a oneR model
m = OneRClassifier()
m.fit(X_train, y=y_train, feature_names=feat_names) # stores into m.rules_
probs = m.predict_proba(X_test)
# print the rule list
print(m)
# look at prediction breakdown
viz_classification_preds(probs, y_test)
# %% [markdown]
# ### scalable bayesian rule lists
# %%
# train classifier (allow more iterations for better accuracy; use BigDataRuleListClassifier for large datasets)
print('training...')
m = BayesianRuleListClassifier(max_iter=3000, class1label="diabetes", verbose=False)
m.fit(X_train, y_train)
probs = m.predict_proba(X_test)
print("learned model:\n", m)
viz_classification_preds(probs, y_test)
# %% [markdown]
# # rule trees
# %% [markdown]
# ### short decision tree
# %% pycharm={"is_executing": false}
# specify a decision tree with a maximum depth
dt = DecisionTreeClassifier(max_depth=3)
dt.fit(X_train, y_train)
# calculate mse on the training data
probs = dt.predict_proba(X_test)
# print(f'test mse: {np.mean(np.square(preds-y)):0.2f}')
plot_tree(dt)
# plt.savefig('tree.pdf')
plt.show()
viz_classification_preds(probs, y_test)
# %% [markdown]
# ### optimal classification tree
# - docs [here](https://github.com/csinva/interpretability-workshop/tree/master/imodels/optimal_classification_tree)
# - note: this implementation is still somewhat unstable, and can be made faster by installing either `cplex` or `gurobi`
# %%
# sys.path.append('../imodels/optimal_classification_tree/pyoptree')
# sys.path.append('../imodels/optimal_classification_tree/')
# %%
# from optree import OptimalTreeModel
# feature_names = np.array(["x1", "x2"])
# X = np.array([[1, 2, 2, 2, 3], [1, 2, 1, 0, 1]]).T
# y = np.array([1, 1, 0, 0, 0]).reshape(-1, 1)
# X_test = np.array([[1, 1, 2, 2, 2, 3, 3], [1, 2, 2, 1, 0, 1, 0]]).T
# y_test = np.array([1, 1, 1, 0, 0, 0, 0])
# np.random.seed(13)
# model = OptimalTreeModel(tree_depth=3, N_min=1, alpha=0.1) #, solver_name='baron'
# model.fit(X_test, y_test) # this method is currently using the fast, but not optimal solver
# preds = model.predict(X_test)
# # fit on the bigger diabetes dset from above
# # model.fit(Xtrain, ytrain) # this method is currently using the fast, but not optimal solver
# # preds = model.predict(Xtest)
# print('acc', np.mean(preds == y_test))
# %%
# model.print_tree(feature_names)
# %% [markdown]
# # algebraic models
# %% [markdown]
# ### integer linear models
# %% pycharm={"is_executing": false}
np.random.seed(123)
# generate X and y
n, p = 500, 10
X_sim = np.random.randn(n, p)
y_sim = 1 * X_sim[:, 0] + 2 * X_sim[:, 1] - 1 * X_sim[:, 2] + np.random.randn(n)
# fit linear models with different regularization parameters
print('groundtruth weights should be 1, 2, -1...')
model = SLIMRegressor()
for lambda_reg in [1e-3, 1e-2, 5e-2, 1e-1, 1, 2, 5, 10]:
model.fit(X_sim, y_sim, lambda_reg)
mse = np.mean(np.square(y_sim - model.predict(X_sim)))
print(f'lambda: {lambda_reg}\tmse: {mse: 0.2f}\tweights: {model.model_.coef_}')
# %%
y_sim = 1 / (1 + np.exp(-y_sim))
y_sim = np.round(y_sim)
# fit linear models with different regularization parameters
print('groundtruth weights should be 1, 2, -1...')
model = SLIMClassifier()
for lambda_reg in [1e-3, 1e-2, 5e-2, 1e-1, 1, 2, 5, 10]:
model.fit(X_sim, y_sim, lambda_reg)
mll = np.mean(metrics.log_loss(y_sim, model.predict(X_sim)))
print(f'lambda: {lambda_reg}\tmlogloss: {mll: 0.2f}\tweights: {model.model_.coef_}')
# %%
|
notebooks/imodels_demo.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Session I: Introduction to Python and Programming Concepts
# ---
# ### What is Programming?
# writing a set of instructions for the computer to carry out, using a language the computer can understand
#
# **writing a set of instructions**
# ---
# ### What is Python?
# [Here](https://www.python.org/doc/essays/blurb/) is a link to the full, 50¢ word description of Python. The big take-away keywords to know are:
# 1. *Scripted Language*: supports running scripts
# 2. *Interpreted*: an 'interpreter' translates the code as it comes to it, instead of compiling it beforehand
# 3. *Object-oriented*: everything is an 'object', or something that contains data (*attributes*) and/or functions that it can do (*methods*)
# ### Some of the 'Pros' of Python:
# 1. Designed to be easily readable
# 2. Very quick development time (e.g. easier to write)
# 3. Works on almost every platform
# 4. Easily scalable (e.g. can be run on distributed computing environments)
# 5. Efficiently 'glues' other languages and components together
# 6. Very strong Open-source community
# Zen of Python
import this
# ### Some of the 'Cons' of Python:
# 1. Difficult to parallelize
# 2. Generally slower compared to other languages
# 3. Has a little of a learning curve when specializing in niche areas
# 4. Very strong Open-source community
# ---
# ### How important is readability?
# 
# Can you visually tell the separation of thoughts in both of these examples?
# Everything in Python is grouped by whitespace. Like the 'Reverse Indentation format' example above, blocks of thought/instructions are grouped by indenting.
# +
# An example
a_list = list(range(10))
for each_item in a_list:
if each_item % 2 == 0: # this is called a modulo: gives the remainder after division
print(each_item, 'is Even')
else:
        print(each_item, 'is Odd')
else:
print('List Processed')
# -
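# Zooming in on the modulo operator from the example above — `a % b` evaluates to the remainder left after dividing `a` by `b`:

```python
print(7 % 3)   # 1, since 7 = 2 * 3 + 1
print(10 % 5)  # 0, since 5 divides 10 evenly

# Parity check, as used in the loop above
evens = [n for n in range(10) if n % 2 == 0]
print(evens)  # [0, 2, 4, 6, 8]
```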
# ### What does bad code look like?
# Readability and brevity matter in programming. [This](./datasets/prog.c) example shows clearly why readability is important. Can you, even as a non-coder, tell which components of this C program belong together?
# Brevity matters to the reader of your code too. Here is another [example](./datasets/csv_manip.c) of a C program that manipulates a CSV file.
# ### What about good-ish code?
# Here is an [example](./datasets/csc_manip.py) of a Python script that does the same thing the C script does. Both brevity and readability are somewhat present here.
# Think of a script like a paper you would write for a journal:
#
# | Paper Component | Coding Counterpart |
# | :-- | :-- |
# | Nouns | Variables |
# | Verbs | Operators |
# | Sentences | Lines |
# | Paragraphs | Functions |
# | Sections | Modules |
# ### In Summary
# Python is a great language for all skill levels. Not only do its readability and ease of use help beginners learn, but there is also a *very* active job market for Python developers in everything from biomedical research to Web Development to Netflix & Disney.
# It is, however, important to say that Python is just a tool...and not the only one. A programming language should be used when it answers a programmatic question you are trying to solve.
#
# * For Prototyping and quickly exploring data, R is an easy choice
# * For high-speed, high-throughput, parallel computing; C/C++ is a great option
# * For web development or internet facing applications, it is hard to beat JavaScript
#
# While Python can do all these things, these others are often better at it.
# ---
# ### Noodles
# Write down all the steps it would take to get noodles into your mouth from the starting point of not having any noodles.
#
# ---
# ### Challenge
# This is meant to be a challenge, and if you don't get it, don't worry. The purpose is to make you think about how you would solve it.
# Go to [basic_script.py](./basic_script.py), and write your first program.
# Now run your program by running the cell below
# !python basic_script.py
# This same command can be run from the console/terminal (without the exclamation mark)
|
session1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import time
import cv2
import os, errno
def video_to_frames(input_loc, output_loc):
"""Function to extract frames from input video file
and save them as separate frames in an output directory.
Args:
input_loc: Input video file.
output_loc: Output directory to save the frames.
Returns:
None
"""
try:
os.makedirs(output_loc)
except OSError as e:
if e.errno != errno.EEXIST:
raise
# Log the time
time_start = time.time()
# Start capturing the feed
cap = cv2.VideoCapture(input_loc)
# Find the number of frames
video_length = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) - 1
print ("Number of frames: ", video_length)
    count = 0
    print ("Converting video...\n")
    # Start converting the video
    while cap.isOpened():
        # Extract the frame
        ret, frame = cap.read()
        # Stop if the frame could not be read (end of stream)
        if not ret:
            break
        # Write the results back to output location.
        cv2.imwrite(output_loc + "/%#010d.png" % count, frame)
        count += 1
        # If there are no more frames left
        if count > (video_length - 1):
            break
    # Log the time again
    time_end = time.time()
    # Release the feed
    cap.release()
    # Print stats
    print ("Done extracting frames.\n%d frames extracted" % count)
    print ("It took %d seconds for conversion." % (time_end - time_start))
def main():
sourcePath = r"../drone_videos/cumvideos01/Trimmed"
outPath = r"../drone_videos/Dataset/Trimmed"
for file in os.listdir(sourcePath):
if file.endswith(".avi"):
vPath=os.path.join(sourcePath, file)
tempOutPath = "2011_09_28_drive_" + file[:file.find(".")] + "_sync/image_02/data"
oPath=os.path.join(outPath, tempOutPath)
video_to_frames(vPath, oPath)
return True
if __name__=="__main__":
main()
# -
|
Auxiliaries/vidToFrames.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DAT210x - Programming with Python for DS
# ## Module3 - Lab2
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
# +
# Look pretty...
# matplotlib.style.use('ggplot')
plt.style.use('ggplot')
# -
# Load up the wheat seeds dataset into a dataframe. We've stored a copy in the Datasets directory.
df = pd.read_csv('./Datasets/wheat.data')
# Create a 2d scatter plot that graphs the `area` and `perimeter` features:
df.plot.scatter(x='area', y='perimeter')
plt.suptitle('Perimeter vs. Area')
plt.xlabel('Area')
plt.ylabel('Perimeter')
plt.show()
# Create a 2d scatter plot that graphs the `groove` and `asymmetry` features:
df.plot.scatter(x='asymmetry', y='groove')
plt.suptitle('Groove vs. Asymmetry')
plt.xlabel('Asymmetry')
plt.ylabel('Groove')
plt.show()
# Create a 2d scatter plot that graphs the `compactness` and `width` features:
df.plot.scatter(x='width', y='compactness')
plt.suptitle('Compactness vs. Width')
plt.xlabel('Width')
plt.ylabel('Compactness')
plt.show()
# ### BONUS
#
# After completing the above, go ahead and run your program. Check out the results, and see what happens when you add in the optional display parameter `marker` with values of either `'^'`, `'.'`, or `'o'`:
df.plot.scatter(x='width', y='compactness', marker='^')
plt.suptitle('Compactness vs. Width')
plt.xlabel('Width')
plt.ylabel('Compactness')
# Display the graphs:
plt.show()
|
Module3/Module3 - Lab2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="YWziSLFon0VZ"
import numpy as np
def random_predict(number: int = 1) -> int:
    """Guess randomly, narrowing the search range in the right direction.

    Args:
        number (int, optional): The number to guess. Defaults to 1.

    Returns:
        int: Number of attempts
    """
    count = 0
    min_value = 1
    max_value = 101
    while True:
        count += 1
        predict_number = np.random.randint(min_value, max_value)  # candidate guess
        if number == predict_number:
            break  # guessed it, exit the loop
        elif number > predict_number:
            min_value = predict_number + 1  # exclude the wrong guess from the range
        elif number < predict_number:
            max_value = predict_number
    return count
def score_game(random_predict) -> int:
    """Average number of attempts our algorithm needs over 1000 runs.

    Args:
        random_predict: the guessing function

    Returns:
        int: average number of attempts
    """
    count_ls = []
    np.random.seed(1)
    random_array = np.random.randint(1, 101, size=1000)
    for number in random_array:
        count_ls.append(random_predict(number))
    score = int(np.mean(count_ls))
    print(f'On average, the algorithm guesses the number in {score} attempts')
    return score
# + [markdown] id="wQ8oVYgMn6hq"
# Guess a single number
# + id="Bi2kz8hooBrp"
number = np.random.randint(1,101)
print('Guessed the number {} in {} attempts'.format(number, random_predict(number)))
# + [markdown] id="A-drIWwmon7q"
# Compute guessing statistics over 1000 runs
# + id="bx0bCiruouV1"
score_game(random_predict)
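# For comparison, a deterministic midpoint (binary-search) strategy, which is not part of the original notebook, needs at most 7 guesses for any number between 1 and 100:

```python
def midpoint_predict(number: int = 1) -> int:
    """Guess by always probing the midpoint of the remaining range."""
    count = 0
    low, high = 1, 100  # inclusive bounds of the remaining search range
    while True:
        count += 1
        guess = (low + high) // 2
        if guess == number:
            return count
        elif guess < number:
            low = guess + 1   # answer is above the guess
        else:
            high = guess - 1  # answer is below the guess

# Worst case over every possible answer 1..100:
print(max(midpoint_predict(n) for n in range(1, 101)))  # prints: 7
```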
|
game.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="t1EUEhVoPpFv"
# ### Wikiart: Metric Analysis of different GANs
# + id="QGUWsXJVPwuW" executionInfo={"status": "ok", "timestamp": 1624385970848, "user_tz": -120, "elapsed": 25, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09777395832350774991"}}
import numpy as np
import matplotlib.pyplot as plt
from google.colab import drive
from numpy import genfromtxt
# + [markdown] id="oboOsqixPWVg"
# ### Mount Google Drive
# + colab={"base_uri": "https://localhost:8080/"} id="HWeFr6JELGAS" executionInfo={"status": "ok", "timestamp": 1624385970850, "user_tz": -120, "elapsed": 23, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09777395832350774991"}} outputId="492aebde-7c88-48c2-f871-a7f14ca492de"
# mount drive
drive.mount('/content/drive')
# + [markdown] id="QuT6OhtPQCA5"
# ### Import results
# + id="sKaAUzJfQAI1" executionInfo={"status": "ok", "timestamp": 1624393613881, "user_tz": -120, "elapsed": 292, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09777395832350774991"}}
# regular dc gan
is_dcgan_style = genfromtxt('/content/drive/MyDrive/neural_networks/metric_results/dcgan/is_dcgan.txt',
delimiter=',')
fid_dcgan_style = genfromtxt('/content/drive/MyDrive/neural_networks/metric_results/dcgan/fid_dcgan.txt',
delimiter=',')
# regular cgan
is_cgan_style = genfromtxt('/content/drive/MyDrive/neural_networks/metric_results/cgan_style/is_cgan.txt',
delimiter=',')
fid_cgan_style = genfromtxt('/content/drive/MyDrive/neural_networks/metric_results/cgan_style/fid_cgan.txt',
delimiter=',')
# emotion cgan
is_cgan_emo = genfromtxt('/content/drive/MyDrive/neural_networks/metric_results/cgan_emotion/is_cgan_emo.txt',
delimiter=',')
fid_cgan_emo = genfromtxt('/content/drive/MyDrive/neural_networks/metric_results/cgan_emotion/fid_cgan_emo.csv',
delimiter=',', usecols = 1, skip_header=1)
# regular can
is_can_style = genfromtxt('/content/drive/MyDrive/neural_networks/metric_results/can_style/is_can.txt',
delimiter=',')
fid_can_style = genfromtxt('/content/drive/MyDrive/neural_networks/metric_results/can_style/fid_can.txt',
delimiter=',')
# emotion can
is_can_emo = genfromtxt('/content/drive/MyDrive/neural_networks/metric_results/can_emotion/is_can_emo.txt',
delimiter=',')
fid_can_emo = genfromtxt('/content/drive/MyDrive/neural_networks/metric_results/can_emotion/fid_can_emo.csv',
delimiter=',', usecols = 1, skip_header=1)
# Nevar Metric
nevar = genfromtxt('/content/drive/MyDrive/neural_networks/metric_results/dc_gan_nevar.txt',
delimiter=',')
# IC
ic = genfromtxt('/content/drive/MyDrive/neural_networks/metric_results/dc_gan_ic.txt',
delimiter=',')
# PC
pc = genfromtxt('/content/drive/MyDrive/neural_networks/metric_results/dc_gan_pc.txt',
delimiter=',')
# + colab={"base_uri": "https://localhost:8080/", "height": 426} id="Ax5ce5bqRx4J" executionInfo={"status": "ok", "timestamp": 1624386430461, "user_tz": -120, "elapsed": 465, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09777395832350774991"}} outputId="d58cb44e-4a66-41d4-e8f1-58f10dec59fa"
plt.figure(figsize=(8, 6))
plt.rc('xtick', labelsize=17)
plt.rc('ytick', labelsize=17)
plt.plot(is_dcgan_style, linewidth = 2.5)
plt.plot(is_cgan_style, linewidth = 2.5)
plt.plot(is_cgan_emo, linewidth = 2.5)
plt.plot(is_can_style, linewidth = 2.5)
plt.plot(is_can_emo, linewidth = 2.5)
plt.title('Average inception score per epoch and model', fontsize = 19)
plt.xlabel('Number of epochs', fontsize = 19)
plt.ylabel('Average inception score', fontsize = 19)
plt.legend(['dcgan', 'cgan-style','cgan-emo','can-style','can-emo'],
prop={'size': 19}, frameon=False)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 426} id="Ah-Vdg5X4uKM" executionInfo={"status": "ok", "timestamp": 1624386592356, "user_tz": -120, "elapsed": 586, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09777395832350774991"}} outputId="4d319010-fd15-42b2-e074-43b33660388f"
plt.figure(figsize=(8, 6))
plt.rc('xtick', labelsize=17)
plt.rc('ytick', labelsize=17)
plt.plot(fid_dcgan_style, linewidth = 2)
plt.plot(fid_cgan_style, linewidth = 2)
plt.plot(fid_cgan_emo, linewidth = 2)
plt.plot(fid_can_style, linewidth = 2)
plt.plot(fid_can_emo, linewidth = 2)
plt.title('Average Fréchet inception distance (FID) per epoch and model', fontsize = 19)
plt.xlabel('Number of epochs', fontsize = 19)
plt.ylabel('Average FID', fontsize = 19)
plt.legend(['dcgan', 'cgan-style','cgan-emo','can-style','can-emo'],
prop={'size': 19}, frameon=False, loc = "upper right",
bbox_to_anchor=(0.9,1.0))
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 310} id="OWUS4u1TZY_4" executionInfo={"status": "ok", "timestamp": 1624395640051, "user_tz": -120, "elapsed": 536, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09777395832350774991"}} outputId="21751a39-8523-4698-e018-8c96fa5386c2"
import seaborn as sns
sns.set_style("whitegrid", {
"ytick.major.size": 0.1,
"ytick.minor.size": 0.05,
'grid.linestyle': '--'
})
plt.figure(figsize=(14, 4))
plt.rc('xtick', labelsize=17)
plt.rc('ytick', labelsize=17)
plt.plot(nevar, linewidth = 1.5)
plt.title('Average NEvAr score per epoch', fontsize = 16)
plt.xlabel('Number of epochs', fontsize = 15)
plt.ylabel('Average NEvAr score', fontsize = 15)
plt.legend(['dcgan'],
prop={'size': 19}, frameon=False)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 310} id="fiMYsK9NZhLg" executionInfo={"status": "ok", "timestamp": 1624395709469, "user_tz": -120, "elapsed": 803, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09777395832350774991"}} outputId="491e4669-1566-4235-9e83-04841009d108"
plt.figure(figsize=(14, 4))
plt.rc('xtick', labelsize=17)
plt.rc('ytick', labelsize=17)
plt.plot(ic, linewidth = 1.5)
plt.title('Average image complexity across epochs', fontsize = 16)
plt.xlabel('Number of epochs', fontsize = 15)
plt.ylabel('Average image complexity', fontsize = 15)
plt.legend(['IC'],
prop={'size': 15}, frameon=False)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 310} id="up-udlMyads2" executionInfo={"status": "ok", "timestamp": 1624395742646, "user_tz": -120, "elapsed": 955, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "09777395832350774991"}} outputId="02deb713-16af-44ae-a2bd-ca0e0433b56f"
plt.figure(figsize=(14, 4))
plt.rc('xtick', labelsize=17)
plt.rc('ytick', labelsize=17)
plt.plot(pc, linewidth = 1.5)
plt.title('Average processing complexity across epochs', fontsize = 16)
plt.xlabel('Number of epochs', fontsize = 15)
plt.ylabel('Average processing complexity', fontsize = 15)
plt.legend(['PC'],
prop={'size': 19}, frameon=False)
plt.show()
|
gan_models/metric_anaylsis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using the Pokemon Data
import pandas as pd
pokemon_df = pd.read_csv("data/pokemon.csv", index_col=0, dtype={"Generation": 'object'})
pokemon_df = pokemon_df.rename(columns={"Type 1":"Type"})
pokemon_df.head()
pokemon_df.describe()
# # Do some plots with Bokeh and explore the interactivity
# !pip install bokeh
# +
from bokeh.models import ColumnDataSource
groupby_df = pokemon_df.groupby("Generation").mean(numeric_only=True)  # numeric_only avoids errors on string columns in newer pandas
source = ColumnDataSource(groupby_df)
# -
groupby_df = groupby_df.reset_index()
# +
from bokeh.layouts import column
from bokeh.plotting import figure, output_notebook, show
from bokeh.palettes import Spectral6
from bokeh.transform import factor_cmap
from bokeh.models import Slider
# output to jupyter notebook
output_notebook()
# create a new plot with a title and axis labels
p = figure(title="Average Attack vs Defence in Different Pokemon Generations", x_axis_label='Attack', y_axis_label='Defense',
plot_width=400, plot_height=400)
# color mapper
mapper = factor_cmap(field_name='Generation', palette=Spectral6,
factors=groupby_df['Generation'])
# add a line renderer with legend and line thickness
dots = p.circle(x='Attack', y='Defense', size=10, source=source, color=mapper, legend_field="Generation")
p.legend.orientation = "horizontal"
p.legend.location = "top_center"
slider = Slider(start=10, end=30, step=1, value=15)
slider.js_link('value', dots.glyph, 'size')
# show the results
show(column(p, slider))
# +
group = pokemon_df[["Generation", "Type", "HP"]].groupby(["Generation", "Type"])
index_cmap = factor_cmap('Generation_Type', palette=Spectral6, factors=sorted(pokemon_df.Generation.unique()), end=1)
p = figure(plot_width=800, plot_height=300,
title="Pokemon HP across Generations",
x_range=group, toolbar_location=None,
tooltips=[("HP", "@HP_mean")])
p.vbar(x='Generation_Type', top='HP_mean', width=1, source=group,
line_color="white", fill_color=index_cmap)
p.y_range.start = 0
p.x_range.range_padding = 0.05
p.xgrid.grid_line_color = None
p.xaxis.axis_label = "Type of pokemon group by Generation"
p.xaxis.major_label_orientation = 1.2
p.outline_line_color = None
show(p)
# -
# # Explore Folium
#
# Sorry we didn't have time to cover that. You can try out some tutorials here: https://python-visualization.github.io/folium/quickstart.html
# !pip install folium
|
viz4.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# cd /Users/Will/Documents/GITHUB/class_project/class_project/HW7
from data_cleaning_utils import *
import matplotlib.pyplot as plt
# %matplotlib inline
# This is a quick notebook designed to show the functions within `data_cleaning_utils.py`. There are 3 main functions:
#
# * data import tool
# * data smoothing tool
# * resampling tool
#
# Right now, each function is written to run individually, but they can be modified so that they aren't re-run each time
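# One common way to avoid re-running an expensive step each time is to memoize it. A minimal sketch with a hypothetical loader (`load_data` is made up here, not one of the project's functions):

```python
from functools import lru_cache

calls = {"n": 0}  # track how often the expensive work actually runs

@lru_cache(maxsize=None)
def load_data(path):
    """Hypothetical expensive import step; cached per unique path."""
    calls["n"] += 1
    return "data from " + path  # stand-in for a parsed DataFrame

load_data("raw.csv")
load_data("raw.csv")  # second call is served from the cache
print(calls["n"])     # prints: 1
```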
# cd /Users/Will/Documents/GITHUB/class_project/class_project/Data/Amazon
raw = import_data()
raw.head()
smooth = smooth_data("pH")
reduced = reducer()
reduced.head()
|
Analysis/Data Cleaning Utils.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# Example source: https://developers.google.com/earth-engine/Earth_Engine_asset_from_cloud_geotiff
# This has details about the Earth Engine Python Authenticator client.
from ee import oauth
from google_auth_oauthlib.flow import Flow
import json
# -
# Build the `client_secrets.json` file by borrowing the
# Earth Engine python authenticator.
client_secrets = {
'web': {
'client_id': oauth.CLIENT_ID,
'client_secret': oauth.CLIENT_SECRET,
'redirect_uris': [oauth.REDIRECT_URI],
'auth_uri': 'https://accounts.google.com/o/oauth2/auth',
'token_uri': 'https://accounts.google.com/o/oauth2/token'
}
}
# Write to a json file.
client_secrets_file = 'client_secrets.json'
with open(client_secrets_file, 'w') as f:
json.dump(client_secrets, f, indent=2)
# Start the flow using the client_secrets.json file.
flow = Flow.from_client_secrets_file(client_secrets_file,
scopes=oauth.SCOPES,
redirect_uri=oauth.REDIRECT_URI)
# Get the authorization URL from the flow.
auth_url, _ = flow.authorization_url(prompt='consent')
# +
# Print instructions to go to the authorization URL.
oauth._display_auth_instructions_with_print(auth_url)
print('\n')
print('\n')
print("after entering key hit enter to store")
# The user will get an authorization code.
# This code is used to get the access token.
code = input('Enter the authorization code: \n')
flow.fetch_token(code=code)
# Get an authorized session from the flow.
session = flow.authorized_session()
# +
# Request body
# The request body is an instance of an EarthEngineAsset. This is where the path
# to the COG is specified, along with other useful properties. Note that the image
# is a small area exported from the composite made in this example script. See
# this doc for details on exporting a COG.
# Earth Engine will determine the bands, geometry, and other relevant information
# from the metadata of the TIFF. The only other fields that are accepted when
# creating a COG-backed asset are properties, start_time, and end_time.
# Request body as a dictionary.
request = {
'type': 'IMAGE',
'gcs_location': {
'uris': ['gs://bnl_uas_data/NGEEArctic_UAS_Kougarok_20180725_Flight6_CHM_cog.tif']
},
'properties': {
'source': 'https://osf.io/erv4m/download'
},
'startTime': '2018-07-25T00:00:00.000000000Z',
'endTime': '2018-07-26T00:00:00.000000000Z',
}
from pprint import pprint
pprint(json.dumps(request))
# +
# Send the request
# Make the POST request to the Earth Engine CreateAsset endpoint.
# Where Earth Engine assets are kept.
project_folder = 'earthengine-legacy'
# Your user folder name and new asset name.
asset_id = 'users/serbinsh/uas_data/NGEEArctic_UAS_Kougarok_20180725_Flight6_CHM_cog'
url = 'https://earthengine.googleapis.com/v1alpha/projects/{}/assets?assetId={}'
response = session.post(
url = url.format(project_folder, asset_id),
data = json.dumps(request)
)
pprint(json.loads(response.content))
# -
|
Python/add_cogs_as_asset_BNL_UAS_KG_CHM.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
#
# Pandas makes it easy to read CSV files.
#
# # What is a CSV file?
#
# CSV is one of the most universal file formats: it can be imported into virtually any spreadsheet or database. You can export one directly from Excel; give it a try.
#
# The familiar XLS format, by contrast, is Excel's proprietary format. Each line of a CSV file is one row of the table, with fields separated by commas, hence the full name: Comma-Separated Values. Try opening a CSV and an XLS file in a plain-text editor and compare them.
#
#
# # Reading
#
# Reading a CSV with Pandas is very simple:
#
# 1. Import the pandas package
# 2. One line of code, `pd.read_csv()` plus the file path, assigned to a variable conventionally named `df`
# 3. Print it to inspect the result
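# The three steps above, sketched against an in-memory file so it runs anywhere (`io.StringIO` stands in for a real file path here):

```python
import io
import pandas as pd  # step 1: import pandas

csv_text = "name,price\napple,3\npear,5\n"
df = pd.read_csv(io.StringIO(csv_text))  # step 2: read and assign to df
print(df)                                # step 3: inspect the result
```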
# +
# How to get the absolute path
# import os
# os.getcwd( ) # get the current working directory
# %pwd
# +
import pandas as pd
# Read a CSV. This stock_basic.csv file holds basic stock data downloaded straight from tushare; let's read it in
## How do we locate the file?
### Absolute path: /Users/davidfnck/Downloads/CSVFiles/stock_basic.csv
### Relative path: CSVFiles/stock_basic.csv # relative to this script's folder; ./ is the same level, ../ is one level up
df = pd.read_csv('CSVFiles/stock_basic.csv')
print(df)
# -
print(df.info()) # We can also take a look at the DataFrame's basic attributes
# # Reading the output
#
# What issues do we see in the printout above?
#
# 1. pandas detected that the first row of the CSV contains column names and used them automatically
#
# 2. pandas also assigned a zero-based integer index to the DataFrame, because we didn't tell it what the index should be
#
# 3. The first column (not counting the index) is titled `Unnamed: 0`. What is that? Opening the source file, we find it is the index that was written out along with the DataFrame when the file was created
#
# 4. In the third column, `symbol`, the original `000001` became `1`; the leading zeros were all dropped
#
# 5. The last column, `list_date`, represents a date, but was read as a number
#
# Let's fix these issues one by one.
#
# To use another column as the DataFrame's index, add the `index_col` optional parameter
df = pd.read_csv('CSVFiles/stock_basic.csv',index_col=0)
print(df)
# Next, notice that the `symbol` field has dtype `int`. We want it read from the file verbatim as a string, so the leading zeros are preserved.
df = pd.read_csv('CSVFiles/stock_basic.csv',index_col=0,dtype={'symbol':'str'})
print(df)
# Next, let's fix the dtype of the `list_date` field. The optional `parse_dates` parameter forces pandas to read data as dates; it takes the list of column names to treat as dates:
df = pd.read_csv('CSVFiles/stock_basic.csv',index_col=0,dtype={'symbol':'str'},parse_dates=['list_date'])
print(df)
# If the first row of the CSV contains no column names, you can supply a list of names with the optional `names` parameter. You can also use it to override the names given in the first row; in that case, you must also pass `header=0` to tell `pandas.read_csv()` to ignore the existing column names:
df = pd.read_csv('CSVFiles/stock_basic.csv',
index_col=0,
dtype={'股票代码':'str'},
parse_dates=['上市时间'],
header=0,
names=['市场编号', '股票代码','股票名称', '所在城市','行业','上市时间'])
print(df)
# Note that because the column names have changed, the columns specified in the `dtype` and `parse_dates` optional parameters must be updated accordingly.
# # Code walkthrough
#
# Of course, the most fundamental way to resolve these issues is to go back to the `pd.read_csv` call itself, which is also a good excuse to read through its signature.
#
# To inspect the parameters: type `pandas.read_csv` and press `shift+tab`
#
# Reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
# Glossary: https://docs.python.org/zh-cn/3.7/glossary.html
#
# ```python
# pandas.read_csv(
#     filepath_or_buffer: Union[str, pathlib.Path, IO[~AnyStr]], # str, pathlib.Path, py._path.local.LocalPath or any object with a read() method (such as a file handle or StringIO)
#     # Can also be a URL; supported schemes include http, ftp, s3 and file. Local file example: file://localhost/path/to/table.csv
#     # The parameter name says it all:
#     # + it accepts a file name or a URL,
#     # + or an already-open file handle,
#     # + or any other object that provides a read method,
#     # + for example: if a URL serves CSV but requires a password, read_csv cannot fetch it directly, but you can send an authenticated Request via urlopen and pass the returned Response object to read_csv, which then loads the data through the Response object's read method;
#
#     sep=',', # str, default ','
#     # Field delimiter. If not given, comma is tried. (A delimiter longer than one character that is not '\s+' forces the Python parser, and commas inside the data are ignored. Regex example: '\r\t'. Curious readers should brush up on regular expressions.)
#
#     delimiter=None, # str, default None
#     # Alternative delimiter (if given, the sep parameter is ignored)
#     # `delimiter` is an alias for `sep`; setting it to `\t` (tab) reproduces the default behaviour of `read_table`; regular expressions are supported for matching some non-standard CSV files;
#
#     # Column and Index Locations and Names
#     header='infer', # int or list of ints, default 'infer'
#     # Row number(s) to use as column names, and where the data begins. Defaults to 0 if the file has no explicit column names, otherwise None. An explicit header=0 replaces any existing column names. header may be a list, e.g. [0,1,3]: those rows become (multi-level) column headers and the rows in between are skipped (here row 2 is dropped; rows 0, 1 and 3 form a multi-level header and the data starts after the last header row). Note: with skip_blank_lines=True, header ignores comment and blank lines, so header=0 means the first data row, not the first physical line of the file.
#
#     names=None, # array-like, default None
#     # List of column names to use. If the file has no header row, also pass header=None. The list may not contain duplicates unless mangle_dupe_cols=True.
#
#     index_col=None, # int or sequence or False, default None
#     # Column number(s) or name(s) to use as the row index; a sequence produces a MultiIndex. If the file is ragged with trailing delimiters on each line, set index_col=False so pandas does not use the first column as the index.
#
#     usecols=None, # array-like, default None
#     # Return a subset of the columns; values must map to positions in the file (numbers map to specific columns) or be strings matching column names. Valid examples: [0,1,2] or ['foo', 'bar', 'baz']. Using this parameter speeds up loading and lowers memory use.
#
#     squeeze=False, # boolean, default False
#     # If the file contains only one column, return a Series
#
#     prefix=None, # str, default None
#     # Prefix to add to column numbers when there is no header, e.g. 'X' -> X0, X1, ...
#
#     mangle_dupe_cols=True, # boolean, default True
#     # Duplicate columns 'X'...'X' are renamed 'X.0'...'X.N'. If False, duplicated names overwrite one another.
#
#     # General Parsing Configuration
#     dtype=None, # Type name or dict of column -> type, default None
#     # Data type for each column, e.g. {'a': np.float64, 'b': np.int32}
#
#     engine=None, # {'c', 'python'}, optional
#     # Parser engine to use. C is faster, while the Python engine is more feature-complete.
#
#     converters=None, # dict, default None
#     # Dict of column conversion functions, keyed by column name or position.
#
#     true_values=None, # list, default None
#     # Values to consider as True
#
#     false_values=None, # list, default None
#     # Values to consider as False
#
#     skipinitialspace=False, # boolean, default False
#     # Skip whitespace after the delimiter (default False, i.e. keep it)
#
#     skiprows=None, # list-like or integer, default None
#     # Number of lines to skip at the start of the file, or a list of line numbers to skip (0-based).
#
#     skipfooter=0, # int, default 0
#     # Number of lines to skip at the bottom of the file. (Not supported by the C engine.)
#
#     nrows=None, # int, default None
#     # Number of rows to read, counted from the start of the file.
#
#     # NA and Missing Data Handling
#     na_values=None, # scalar, str, list-like, or dict, default None
#     # Additional values to recognize as NA/NaN; pass a dict to specify per-column values. Defaults include '1.#IND', '1.#QNAN', 'N/A', 'NA', 'NULL', 'NaN', 'nan'.
#
#     keep_default_na=True, # bool, default True
#     # If na_values is given and keep_default_na=False, the default NaN values are replaced; otherwise na_values is appended to them.
#
#     na_filter=True, # boolean, default True
#     # Whether to check for missing values (empty strings or NA values). For large files with no missing data, na_filter=False speeds up reading.
#
#     verbose=False, # boolean, default False
#     # Whether to print parser output such as "number of NA values in non-numeric columns".
#
#     skip_blank_lines=True, # boolean, default True
#     # If True, skip blank lines; otherwise record them as NaN.
#
#     # Datetime Handling
#     parse_dates=False, # boolean or list of ints or names or list of lists or dict, default False. Controls how date columns in the CSV are handled:
#     # boolean: True -> try to parse the index; the default False loads everything as-is without date parsing;
#     # list of ints or names, e.g. [1, 2, 3] -> parse columns 1, 2 and 3 as separate date columns;
#     # list of lists, e.g. [[1, 3]] -> combine the columns in each sublist into a single datetime column, here merging columns 1 and 3 into one date column;
#     # dict, e.g. {'foo' : [1, 3]} -> combine the listed columns into one datetime column named after the key; here columns 1 and 3 are merged into a column named "foo"
#
#
#     infer_datetime_format=False, # boolean, default False
#     # If True and parse_dates is enabled, pandas tries to infer the datetime format and, when possible, switches to a faster parsing method, sometimes 5-10x faster.
#
#     keep_date_col=False, # boolean, default False
#     # When combining multiple columns into a date, whether to keep the original columns. Default False.
#
#     date_parser=None, # function, default None
#     # Function for parsing dates; defaults to dateutil.parser.parser. Pandas tries three different calling conventions, advancing to the next on failure:
#     # 1. pass one or more arrays (as specified by parse_dates) as arguments;
#     # 2. concatenate the specified columns into a single string column and pass that;
#     # 3. call date_parser once per row with one or more strings (as specified by parse_dates) as arguments.
#
#     dayfirst=False, # boolean, default False
#     # DD/MM-format dates
#
#     cache_dates=True,
#
#
#     # Iteration
#     iterator=False, # boolean, default False
#     # Return a TextFileReader object for processing the file chunk by chunk.
#
#     chunksize=None, # int, default None
#     # Size of the file chunks
#
#     # Quoting, Compression, and File Format
#     compression='infer', # {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
#     # Use a compressed file on disk directly. With 'infer', gzip, bz2, zip or xz decompression is chosen from the '.gz', '.bz2', '.zip' or '.xz' file suffix; otherwise no decompression. A ZIP archive must contain exactly one file. None disables decompression. zip and xz support arrived in version 0.18.1.
#
#     thousands=None, # str, default None
#     # Thousands separator, e.g. ',' or '.'
#
#     decimal=b'.', # str, default '.'
#     # Decimal point character (e.g. ',' for European data).
#
#     lineterminator=None, # str (length 1), default None
#     # Line terminator; only used with the C parser.
#
#     quotechar='"', # str (length 1), optional
#     # Quote character marking the start and end of a quoted item; delimiters inside quotes are ignored.
#
#     quoting=0, # int or csv.QUOTE_* instance, default 0
#     # Quoting behaviour per the csv module constants: QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3)
#
#     doublequote=True, # boolean, default True
#     # When quotechar is set and quoting is not QUOTE_NONE, two consecutive quote characters inside a field are interpreted as a single quote character.
#
#     escapechar=None, # str (length 1), default None
#     # With quoting=QUOTE_NONE, a character used to escape the delimiter.
#
#     comment=None, # str, default None
#     # Marks the rest of a line as not to be parsed; if the character appears at the start of a line, the whole line is ignored. Must be a single character. Like blank lines (with skip_blank_lines=True), comment lines are ignored by header and skiprows. For example, with comment='#', parsing '#empty\na,b,c\n1,2,3' with header=0 yields 'a,b,c' as the header.
#
#     encoding=None, # str, default None
#     # Character encoding, commonly 'utf-8'.
#
#     dialect=None, # str or csv.Dialect instance, default None
#     # CSV dialect; if not specified and sep is longer than one character, it is ignored. See the csv.Dialect documentation for details
#
#     # Error Handling
#     error_bad_lines=True, # boolean, default True
#     # A line with too many fields raises by default and no DataFrame is returned; if set to False, such "bad lines" are dropped (C parser only).
#
#     warn_bad_lines=True, # boolean, default True
#     # If error_bad_lines=False and warn_bad_lines=True, every "bad line" is reported (C parser only).
#
#     # Internal
#     delim_whitespace=False,
#     low_memory=True, # boolean, default True
#     # Process the file in chunks internally to lower memory use while parsing, at the risk of mixed-type inference. To guarantee consistent types, set it to False or specify types with the dtype parameter. Note that with chunksize or iterator the whole chunk is still read into a DataFrame regardless of types (C parser only)
#     memory_map=False, # boolean, default False
#     # If the file is already in memory, map it directly to avoid another round of I/O.
#     float_precision=None # string, default None
#     # Specifies which converter the C engine should use for floating-point values. The options are None for the ordinary converter, high for the high-precision converter, and round_trip for the round-trip converter.
# )
# ```
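# As a worked example of the `chunksize` parameter above, again using an in-memory CSV so the snippet is self-contained:

```python
import io
import pandas as pd

csv_text = "a,b\n1,2\n3,4\n5,6\n7,8\n"

total = 0
for chunk in pd.read_csv(io.StringIO(csv_text), chunksize=2):
    # each chunk is a DataFrame with at most 2 rows
    total += chunk["a"].sum()

print(total)  # prints: 16
```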
# # Worked examples
# +
# Demonstrating the delimiter parameters
## Without the right delimiter, the raw read comes back as many rows in one unsplit column
# df_sep = pd.read_csv('CSVFiles/read_csv_header.csv')
## '#' characters were sprinkled throughout the file as an extra separator
df_sep = pd.read_csv('CSVFiles/read_csv_sep.csv',sep='#',delimiter=',')
print(df_sep)
print(df_sep.shape)
print(df_sep.info())
# +
# Demonstrating column and index locations and names
## Without the right options, the raw read comes back as many rows in one unsplit column
# df_header = pd.read_csv('CSVFiles/read_csv_header.csv',header=[1,4],skip_blank_lines=False)
## Blank lines are skipped by default, so the header row is the first data row
df_header = pd.read_csv('CSVFiles/read_csv_header.csv',names=['one','two','three','twoa','threea'],skip_blank_lines=False)
print(df_header)
print(df_header.shape)
print(df_header.info())
# +
# If the file contains only one column, `squeeze=True` returns it as a Series
df_squeeze = pd.read_csv('CSVFiles/read_csv_squeeze.csv',squeeze=False)
print(df_squeeze)
print(type(df_squeeze))
print(df_squeeze.shape)
print(df_squeeze.info())
# -
# # Writing
#
# Writing a CSV with Pandas is just as simple.
# +
import pandas as pd
df = pd.read_csv('CSVFiles/stock_basic.csv',
index_col=0,
dtype={'股票代码':'str'},
parse_dates=['上市时间'],
header=0,
names=['市场编号', '股票代码','股票名称', '所在城市','行业','上市时间'])
print(df)
# Write it out
df.to_csv('CSVFiles/stock_basic_new.csv',header=0,index=0)
# -
# # Code walkthrough
#
# ```python
# DataFrame.to_csv(self,
#     path_or_buf=None, # str or file handle, default None
#     # String or file handle. Path or object; if not provided, the result is returned as a string.
#
#     sep=',', # str, default ','
#     # Field delimiter; a string of length 1.
#
#     na_rep='', # str, default ''
#     # Representation for missing values.
#
#     float_format=None, # str, default None
#     # Format string for floats, e.g. float_format='%.2f' keeps two decimal places
#
#     columns=None, # sequence, optional
#     # Columns to write to the file, e.g. columns=['name']
#
#     header=True, # bool or list of str, default True
#     # Whether to write out the column names. A list of strings is taken as column aliases. Changed in version 0.24.0: previously defaulted to False for Series. header=0 does not write the column names, equivalent to False
#
#     index=True, # boolean, default True
#     # Whether to write the row index
#
#     index_label=None,
#     # Column label(s) for the index column(s), if desired. If not given, and header and index are True, the index names are used. If the object uses a MultiIndex, a sequence should be given. If False, the index name fields are not printed.
#
#     mode='w', # str
#     # Python write mode, default 'w'
#
#     encoding=None, # string, optional
#     # A string representing the encoding to use in the output file
#
#     compression='infer', # str, default 'infer'
#     # A string representing the compression to use in the output file; allowed values are 'gzip', 'bz2' and 'xz', only used when the first argument is a filename.
#
#     quoting=None, # optional constant from csv module
#     # Defaults to csv.QUOTE_MINIMAL. If float_format is set, floats are converted to strings, so csv.QUOTE_NONNUMERIC treats them as non-numeric.
#
#     quotechar='"', # string (length 1), default '"'
#     # Character used to quote fields
#
#     line_terminator=None, # string, default '\n'
#     # Newline character or character sequence to use in the output file
#
#     chunksize=None, # int or None
#     # Rows to write at a time
#
#     date_format=None, # string, default None
#     # Format string for datetime objects
#
#     doublequote=True, # bool, default True
#     # Control quoting of quotechar inside a field.
#
#     escapechar=None, # str, default None
#     # String of length 1. Character used to escape sep and quotechar when appropriate.
#
#     decimal='.' # string, default '.'
#     # Character recognized as decimal separator. E.g. use ',' for European data.
# )
#
# ```
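# A small self-contained illustration of a few of the parameters above (`columns`, `float_format`, `na_rep`), writing to a string instead of a file:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"name": ["a", "b"],
                   "price": [1.2345, np.nan],
                   "skip": [0, 1]})

out = df.to_csv(columns=["name", "price"],  # write only these columns
                float_format="%.2f",        # two decimal places
                na_rep="NA",                # representation for missing values
                index=False)                # drop the row index
print(out)
```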
|
003-学习 Pandas/读写CSV文件.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Developing NGBoost
# + tags=["remove_cell"]
import sys
sys.path.append('/Users/c242587/Desktop/projects/git/ngboost')
# -
# As you work with NGBoost, you may want to experiment with distributions or scores that are not yet supported. Here we will walk through the process of implementing a new distribution or score.
# ## Adding Distributions
# The first order of business is to write the class for your new distribution. The distribution class must subclass the appropriate distribution type (either `RegressionDistn` or `ClassificationDistn`) and must implement methods for `fit()` and `sample()`. The scores compatible with the distribution should be stored in a class attribute called `scores`, and the number of parameters in a class attribute called `n_params`. The class must also store the (internal) distributional parameters in a `_params` instance attribute. Additionally, regression distributions must implement a `mean()` method to support point prediction.
#
# We'll use the Laplace distribution as an example. The Laplace distribution has PDF $\frac{1}{2b} e^{-\frac{|x-\mu|}{b}}$ with user-facing parameters $\mu \in \mathbb{R}$ and $b > 0$, which we will call `loc` and `scale` to conform to the [`scipy.stats` implementation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.laplace.html).
#
# In NGBoost, all parameters must be represented internally in $\mathbb R$, so we need to reparametrize $(\mu, b)$ to, for instance, $(\mu, \log(b))$. The latter are the parameters we need to work with when we initialize a `Laplace` object and when implement the score.
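# The round trip between the user-facing and internal parametrizations can be sketched with scipy's fitter (a toy check; the sample size and seed are arbitrary):

```python
import numpy as np
from scipy.stats import laplace

# simulate data from a known Laplace distribution
Y = laplace.rvs(loc=2.0, scale=0.5, size=10_000, random_state=0)

# user-facing parameters (mu, b), as scipy fits them
mu, b = laplace.fit(Y)

# internal NGBoost parameters live in R^2: (mu, log b)
internal = np.array([mu, np.log(b)])

# mapping back always yields a strictly positive scale
recovered_scale = np.exp(internal[1])
print(mu, b, recovered_scale)
```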
# +
from scipy.stats import laplace as dist
import numpy as np
from ngboost.distns.distn import RegressionDistn
from ngboost.scores import LogScore
class LaplaceLogScore(LogScore): # will implement this later
pass
class Laplace(RegressionDistn):
n_params = 2
scores = [LaplaceLogScore] # will implement this later
def __init__(self, params):
# save the parameters
self._params = params
# create other objects that will be useful later
self.loc = params[0]
self.logscale = params[1]
self.scale = np.exp(params[1]) # since params[1] is log(scale)
self.dist = dist(loc=self.loc, scale=self.scale)
def fit(Y):
m, s = dist.fit(Y) # use scipy's implementation
return np.array([m, np.log(s)])
def sample(self, m):
return np.array([self.dist.rvs() for i in range(m)])
def __getattr__(self, name): # gives us access to Laplace.mean() required for RegressionDist.predict()
if name in dir(self.dist):
return getattr(self.dist, name)
return None
@property
def params(self):
return {'loc':self.loc, 'scale':self.scale}
# -
# The `fit()` method is a class method that takes a vector of observations and fits a marginal distribution. Meanwhile, `sample()` should return $m$ samples from $P(Y|X=x)$, each of which is a vector of length `len(Y)`.
#
# Here we're taking advantage of the fact that `scipy.stats` already has the Laplace distribution implemented so we can steal its `fit()` method and put a thin wrapper around `rvs()` to get samples. We also use `__getattr__()` on the internal `scipy.stats` object to get access to its `mean()` method.
#
# Lastly, we write a convenience property `params` that, when accessed, returns the distributional parameters as the user expects to see them, i.e. $(\mu, b)$, not $(\mu, \log b)$.
#
# ### Implementing a Score for our Distribution
#
# Now we turn our attention to implementing a score that we can use with this distribution. We'll use the log score as an example.
#
# All implemented scores should subclass the appropriate score and implement three methods:
#
# * `score()` : the value of the score at the current parameters, given the data `Y`
# * `d_score()` : the derivative of the score at the current parameters, given the data `Y`
# * `metric()` : the value of the Riemannian metric at the current parameters
#
class LaplaceLogScore(LogScore):
def score(self, Y):
return -self.dist.logpdf(Y)
def d_score(self, Y):
D = np.zeros((len(Y), 2)) # first col is dS/d𝜇, second col is dS/d(log(b))
D[:, 0] = np.sign(self.loc - Y)/self.scale
D[:, 1] = 1 - np.abs(self.loc - Y)/self.scale
return D
# Notice that the attributes of an instance of `Laplace` are referenced using the `self.attr` notation even though we haven't said these will be attributes of the `LaplaceLogScore` class. When a user asks NGBoost to use the `Laplace` distribution with the `LogScore`, NGBoost will first find the implementation of the log score that is compatible with `Laplace`, i.e. `LaplaceLogScore`, and dynamically create a new class that has both the attributes of the distribution and the appropriate implementation of the score. For this to work, the distribution class `Laplace` must have a `scores` class attribute that includes the implementation `LaplaceLogScore`, and `LaplaceLogScore` must subclass `LogScore`. As long as those conditions are satisfied, NGBoost can take care of the rest.
# The derivatives with respect to [$\log b$](https://www.wolframalpha.com/input/?i=d%2Fdb+-log%281%2F%282e%5Eb%29+e%5E%28-%7Cx-a%7C%2Fe%5Eb%29%29) and [$\mu$](https://www.wolframalpha.com/input/?i=d%2Fda+-log%281%2F%282e%5Eb%29+e%5E%28-%7Cx-a%7C%2Fe%5Eb%29%29) are easily derived using, for instance, WolframAlpha.
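# The gradient formulas above can be sanity-checked numerically (a sketch using central finite differences on the Laplace negative log-likelihood; the test point and tolerances are arbitrary):

```python
import numpy as np

def laplace_nll(loc, logscale, y):
    """Negative log-likelihood of a Laplace in the internal (loc, log scale) parametrization."""
    scale = np.exp(logscale)
    return np.log(2 * scale) + np.abs(y - loc) / scale

loc, logscale, y = 1.0, -0.3, 2.5
eps = 1e-6

# analytic gradients from the formulas above
scale = np.exp(logscale)
d_loc = np.sign(loc - y) / scale
d_logscale = 1 - np.abs(loc - y) / scale

# central finite-difference approximations
num_d_loc = (laplace_nll(loc + eps, logscale, y) - laplace_nll(loc - eps, logscale, y)) / (2 * eps)
num_d_logscale = (laplace_nll(loc, logscale + eps, y) - laplace_nll(loc, logscale - eps, y)) / (2 * eps)

print(d_loc, num_d_loc)
print(d_logscale, num_d_logscale)
```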
# In this example we won't bother implementing `metric()`, which would return the current Fisher information. The reason is that the NGBoost implementation of `LogScore` has a default `metric()` method that uses a Monte Carlo method to approximate the Fisher information using the `gradient()` method and the distribution's `sample()` method (that's why we needed to implement `sample()`). By inheriting from `LogScore`, not only can NGBoost find our implementation for the Laplace distribution, it can also fall back on the default `metric()` method. More on that later.
# Putting it all together:
# +
class LaplaceLogScore(LogScore):
def score(self, Y):
return -self.dist.logpdf(Y)
def d_score(self, Y):
D = np.zeros((len(Y), 2)) # first col is dS/d𝜇, second col is dS/d(log(b))
D[:, 0] = np.sign(self.loc - Y)/self.scale
D[:, 1] = 1 - np.abs(self.loc - Y)/self.scale
return D
class Laplace(RegressionDistn):
n_params = 2
scores = [LaplaceLogScore]
def __init__(self, params):
# save the parameters
self._params = params
# create other objects that will be useful later
self.loc = params[0]
self.logscale = params[1]
self.scale = np.exp(params[1]) # since params[1] is log(scale)
self.dist = dist(loc=self.loc, scale=self.scale)
def fit(Y):
m, s = dist.fit(Y) # use scipy's implementation
return np.array([m, np.log(s)])
def sample(self, m):
return np.array([self.dist.rvs() for i in range(m)])
def __getattr__(self, name): # gives us access to Laplace.mean() required for RegressionDist.predict()
if name in dir(self.dist):
return getattr(self.dist, name)
return None
@property
def params(self):
return {'loc':self.loc, 'scale':self.scale}
# -
# And we can test our method:
# +
from ngboost import NGBRegressor
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X, Y = load_boston(return_X_y=True)
X_reg_train, X_reg_test, Y_reg_train, Y_reg_test = train_test_split(X, Y, test_size=0.2)
ngb = NGBRegressor(Dist=Laplace, Score=LogScore).fit(X_reg_train, Y_reg_train)
Y_preds = ngb.predict(X_reg_test)
Y_dists = ngb.pred_dist(X_reg_test)
# test Mean Squared Error
test_MSE = mean_squared_error(Y_preds, Y_reg_test)
print('Test MSE', test_MSE)
# test Negative Log Likelihood
test_NLL = -Y_dists.logpdf(Y_reg_test).mean()
print('Test NLL', test_NLL)
# -
# Dig into the source of `ngboost.distns` to find more examples. If you write and test your own distribution, please contribute it to NGBoost by making a pull request!
# ### Censored Scores
# You can make your distribution suitable for use in survival analysis if you implement a censored version of the score. The signature for the `score()`, `d_score()` and `metric()` methods should be the same, but they should expect `Y` to be indexable into two arrays like `E, T = Y["Event"], Y["Time"]`. Furthermore, any censored scores should be linked to the distribution class definition via a class attribute called `censored_scores` instead of `scores`.
#
# Since censored scores are more general than their standard counterparts (fully observed data is a specific case of censored data), if you implement a censored score in NGBoost, it will automatically become available as a usable score for standard regression analysis. There is no need to implement the regression score separately or register it in the `scores` class attribute.
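# A sketch of what such an indexable `Y` might look like, assuming a NumPy structured array with `Event` and `Time` fields (the values are made up):

```python
import numpy as np

# Event = 1 means the event was observed; Event = 0 means right-censored
Y = np.array(
    [(1, 4.5), (0, 7.2), (1, 1.3)],
    dtype=[("Event", "float64"), ("Time", "float64")],
)

# a censored score can unpack the two arrays exactly as described above
E, T = Y["Event"], Y["Time"]
print(E)
print(T)
```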
# ### Metrics
# As we saw, using the log score, the easiest thing to do as a developer is to lean on the default ngboost method that calculates the log score metric.
#
# However, the distribution-agnostic default method is slow because it must sample from the distribution many times to build up an approximation of the metric. If you want to make it faster, then you must derive and implement the distribution-specific Riemannian metric, which for the log score is the Fisher information matrix of that distribution. You have to derive the Fisher with respect to the internal ngboost parametrization (if that is different to the user-facing parametrization, e.g. $\log(\sigma)$, not $\sigma$). Deriving a Fisher is not necessarily easy since you have to compute an expectation analytically, but there are many examples online of deriving Fisher matrices that you can look through.
#
# If you don't want to use the log score (say you want the CRP score, for example), then ngboost does not (yet?) have a default method for calculating the metric and you *must* derive and implement it yourself. This is harder than deriving a Fisher because there are not many worked examples. The most general derivation process should follow the outline [here](https://wiseodd.github.io/techblog/2018/03/14/natural-gradient/), replacing the KL divergence (which is induced by the log score) with whichever divergence is induced by the scoring rule you want to use (e.g. L2 for CRPS), again taking care to derive with respect to the internal ngboost parametrization, not the user-facing one. For any particular score, there may be a specific closed-form expression that you can use to calculate the metric across distributions (the expression for the Fisher information serves this purpose for the log score), or there may not be; I actually don't know the answer to this question! But if there were, that could suggest some kind of default implementation for that score's `metric()` method.
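# The idea behind the default Monte Carlo metric can be sketched for the Laplace log score: sample from the current distribution and average the outer products of the gradients (this is only an illustration of the principle, not the actual ngboost implementation; parameters and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
loc, logscale = 0.0, 0.0  # scale = exp(0) = 1
scale = np.exp(logscale)

# draw samples from the current distribution
samples = rng.laplace(loc, scale, size=100_000)

# gradients of the negative log-likelihood w.r.t. (loc, log scale), from the formulas above
grads = np.stack(
    [np.sign(loc - samples) / scale,
     1 - np.abs(loc - samples) / scale],
    axis=1,
)

# Monte Carlo Fisher information: E[grad grad^T]
fisher_mc = grads.T @ grads / len(samples)
print(fisher_mc)
```

For the Laplace in this parametrization the exact Fisher information is diag(1/b², 1), so with b = 1 the estimate should be close to the identity matrix.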
# ## Adding Scores
# We've seen how to *implement* an existing score for a new distribution, but making a new score altogether in NGBoost is also easy: just make a new class that subclasses `Score`:
# +
from ngboost.scores import Score
class SphericalScore(Score):
pass
# -
# That's it. Distribution-specific implementations of this score (e.g. `LaplaceSphericalScore`) should subclass `SphericalScore`. The implementations of `LogScore` and `CRPScore` are in `ngboost.scores` for reference.
|
examples/user-guide/content/5-dev.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/MUYang99/NOx-Time-Series-Prediction-Based-on-Deep-Learning/blob/main/KNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="bVGqdDGGET1W"
from math import sqrt
from numpy import concatenate
import numpy as np
import pandas as pd
from pandas import read_csv
from pandas import DataFrame
from pandas import concat
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.neighbors import KNeighborsRegressor
from matplotlib import pyplot
# + id="APZjqdp0EfnB"
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
n_vars = 1 if type(data) is list else data.shape[1]
df = DataFrame(data)
cols, names = list(), list()
# input sequence (t-n, ... t-1)
for i in range(n_in, 0, -1):
cols.append(df.shift(i))
names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
# forecast sequence (t, t+1, ... t+n)
for i in range(0, n_out):
cols.append(df.shift(-i))
if i == 0:
names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
else:
names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
# put it all together
agg = concat(cols, axis=1)
agg.columns = names
# drop rows with NaN values
if dropnan:
agg.dropna(inplace=True)
return agg
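# The sliding-window framing used above can be checked on a toy series (a sketch that reimplements the core shift logic directly; the values are arbitrary):

```python
import pandas as pd

# tiny univariate series: frame t-1 as the input and t as the target
s = pd.DataFrame({"var1": [10, 20, 30, 40]})
framed = pd.concat([s.shift(1), s], axis=1)
framed.columns = ["var1(t-1)", "var1(t)"]
framed = framed.dropna()  # the first row has no t-1 value
print(framed)
```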
# + colab={"base_uri": "https://localhost:8080/"} id="v4RMRXMkEjY6" outputId="ee969fa3-a0f8-4b54-fa6b-f4a17c997b56"
# load dataset
dataset = pd.read_excel('d1.xlsx')
dataset.columns = ['Date', 'NOx']
dataset = dataset.dropna(subset=['NOx'])
dataset = dataset[dataset.Date < '20151231']
# normalise features
scaler = MinMaxScaler(feature_range=(0,1))
nox = dataset.NOx.values
nox = nox.reshape(len(nox), 1)
scaled = scaler.fit_transform(nox)
# frame as supervised learning
n_mins = 5
n_features = 1
reframed = series_to_supervised(scaled, n_mins, 1)
# drop columns we don't want to predict
print(reframed.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="-4qgVVfTEmd5" outputId="ab1ea934-2f7d-493b-92ff-0cd8dd916c50"
# split into train and test sets
values = reframed.values
n_val_quarters = 144
train = values[:-n_val_quarters, :]
test = values[-n_val_quarters:, :]
# split into input and outputs
n_obs = n_mins * n_features
train_X, train_y = train[:, :n_obs], train[:, -1]
test_X, test_y =test[:, :n_obs], test[:,-1]
print(train_y,test_y)
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="mRipDSV4Ep_z" outputId="3116f2f8-644c-495c-8a26-54051c555b58"
# design knn model
clf = KNeighborsRegressor(n_neighbors=5)
# fit model
clf.fit(train_X,train_y)
# make a prediction
predict_y = clf.predict(test_X)
yhat = predict_y.reshape(predict_y.shape[0],1)
test_X = test_X.reshape((test_X.shape[0],n_mins*n_features))
print(yhat.shape,test_X.shape)
# + id="MrQunc-iEw1q"
# invert scaling for forecast
inv_yhat = concatenate((test_X[:, -24:-1], yhat), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat = inv_yhat[:,-1]
# invert scaling for actual
test_y = test_y.reshape((len(test_y), 1))
inv_y = concatenate((test_X[:, -24:-1], test_y), axis=1)
inv_y = scaler.inverse_transform(inv_y)
inv_y = inv_y[:,-1]
# + colab={"base_uri": "https://localhost:8080/", "height": 461} id="wz4QgpR6Eztx" outputId="eda871d6-3360-46d7-afce-16039480eb7c"
# plot
fig, ax = pyplot.subplots(figsize=(15,5), dpi = 300)
ax.plot([x for x in range(1, inv_yhat.shape[0]+1)], inv_yhat, linewidth=2.0, label = "predict")
ax.plot([x for x in range(1, inv_y.shape[0]+1)], inv_y, linewidth=2.0, label = "true")
ax.legend(loc=2);
pyplot.grid(linestyle='-.')
pyplot.show()
# + colab={"base_uri": "https://localhost:8080/"} id="oqdwbuXCE2ly" outputId="ab0c01d7-12ff-4c40-f180-525cab0e33f3"
# calculate RMSE
rmse = sqrt(mean_squared_error(inv_y, inv_yhat))
print('Test RMSE: %.3f' % rmse)
# calculate MAE
MAE = mean_absolute_error(inv_y, inv_yhat)
print('Test MAE: %.3f' %MAE)
|
UnivariateModel/KNN.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Sample notebook in R for accessing the Agrimetrics GraphQL API to obtain geospatial data from multiple measures within a bounded region
# Agrimetrics' GraphQL API provides a convenient interface through which data hosted in the Agrimetrics Data Platform can be queried. Since some of these datasets are geospatially indexed (geospatialMeasures), querying the API with a region of interest, whether a polygon or point, will yield data that are implicitly geospatially related. Since all geospatialMeasures data are queried in the same manner - for a given region, return all selected data for the specified datasets - to reduce some of the boiler-plate code that would otherwise be needed, some helper functions have been written and included in this repository. This notebook aims to demonstrate how to query for and visualise data using these helper functions.
#
# In this notebook, you will find a few GraphQL queries that have been copied directly from the Query Builder [webapp](https://app.agrimetrics.co.uk/graph-explorer). The queries used here have been crafted within the webapp and pasted into the notebook to demonstrate how you might generate quick visualisations of data of interest. Should there be an alternate set of data of interest, replace the queries below as needed.
# ## Requirements
#
# You will need a subscription key to run this example. Your subscription key is available through the Agrimetrics developer portal:
#
# * Log in at https://developer.agrimetrics.co.uk/.
# * Open your profile through the menu on the top right of the page with your name on it.
# * Copy one of your subscription keys from the "Field Explorer Subscription" section.
# ## Setup
#
# In the following cell `utils`, `transformData`, and `graphqlClient` helper functions have been imported. These files are included in the [Github repository](https://github.com/agrimetrics/api-examples) along with this notebook. If you want to tweak the notebook in any way and host it locally, be sure to copy the notebook along with these helper function source files.
# + tags=[]
# treat strings as plain character data rather than converting them to factors
options("stringsAsFactors" = FALSE)
# disable warnings
options(warn=-1)
# source the helper functions used to simplify access to graphQL data
# utils.R - some core utilities
source("utils.R")
# transformData.R - simplifies the transformation of JSON response objects served by Agrimetrics' graphQL Query API
# into more useful dataframes
source("transformData.R")
# graphqlClient.R - a thin wrapper around the [ghql](https://cloud.r-project.org/web/packages/ghql/index.html) library
# providing a connection with which the queries in this notebook are executed.
source("graphqlClient.R")
# installPackages is a helper function optimising time needed to install packages. It does so by making use of
# all your machine's CPU cores.
packagesRequired <- c('stringr', 'dplyr', 'jsonlite', 'geojsonsf', 'sf', 'tmap', 'ggplot2')
installPackages(packagesRequired)
library(geojsonsf)
library(sf)
library(ggplot2)
library(tmap)
# -
# ## Get a connection to Agrimetrics Query API
#
# The following configures the GraphQL client used for all queries within this notebook. Once Jupyter has been installed along with the R kernel, the only other requirement is your API key.
#
# You can provide your API key in one of two ways:
#
# 1. If you are running the Jupyter server locally, add it to your environment variables when you run it (preferred option):
#
# ```bash
# $ API_KEY=abcdefghijklmnopqrstuvwxyz jupyter notebook
# ```
#
# 2. or set the apiKey variable in the notebook:
#
# ```R
# apiKey <- "<KEY>"
# ```
#
# *Important*: Note that using the second method above will embed your API key in the evaluated notebook, so it should not be distributed or published afterwards.
apiKey <- Sys.getenv("API_KEY", "UNSET")
# use the helper function to acquire a connection to the Agrimetrics Query API. This connection is
# used throughout the notebook for all subsequent queries.
connection <- getConnection(apiKey)
# ## GraphQL query
# GraphQL queries have their own query language. The query selects which elements of the schema are to be returned.
#
# For a more comprehensive description of the query language, see the [GraphQL introduction](https://graphql.org/learn/). For interactive online documentation and exploration of the GraphQL schema, visit the [Graph Explorer tool](https://app.agrimetrics.co.uk/#/graph-explorer).
#
# ## Search Area
#
# For demonstration purposes, the queries throughout will be performed for a region covering Rothamsted Farm, one of our Founding Partners, for which we make most of our data free of charge.
# +
# For all the following queries, we will be requesting data for an area covering Rothamsted Farm
polygon <- c(
"type"="Polygon",
"coordinates"=list(list(list(
c(-0.401073, 51.80076),
c(-0.356222, 51.80076),
c(-0.356222, 51.819771),
c(-0.401073, 51.819771),
c(-0.401073, 51.80076)
)))
)
# -
# ## Query for soil and temperature data
#
# The [Query Builder](https://app.agrimetrics.co.uk/graph-explorer) serves as a great interactive tool to aid in crafting queries. The following query was produced in this way, retrieving data for soilPh, Abundance of Invertebrates and monthly temperature values. The query has been tailored to demonstrate some of the basic concepts available when writing queries; [variables](https://graphql.org/learn/queries/#variables) and [fragments](https://graphql.org/learn/queries/#fragments) feature in the query to both reduce some code duplication and provide a means of parameterisation of the query.
# +
testQuery <- '
fragment monthlyWeatherMeasure on GeospatialTimeSeriesMeasure {
datapoints {
date
value
}
}
fragment location on GeospatialMeasure {
location {
shape
}
}
query getGridsIntersectingPolygon ($polygon: LocationFilter!, $startDate: Date!, $endDate: Date!) {
geospatialMeasures(geoFilter: {location: $polygon, operation: INTERSECTS}) {
soilPH {
unit
value
...location
}
soilTotalAbundanceOfInvertebrates {
unit
value
...location
}
temperatureMeanMonthly (where: {
datapoints: {
date: {
GE: $startDate,
LE: $endDate
}
}
}) {
...monthlyWeatherMeasure
...location
}
temperatureMaxMonthly (where: {
datapoints: {
date: {
GE: $startDate,
LE: $endDate
}
}
}) {
...monthlyWeatherMeasure
...location
}
temperatureMinMonthly (where: {
datapoints: {
date: {
GE: $startDate,
LE: $endDate
}
}
}) {
...monthlyWeatherMeasure
...location
}
}
}'
variables <- list(polygon=polygon, startDate="2017-01-01", endDate="2018-12-01")
# use library function to get data from Query API / GraphQL API
response <- getData(connection, testQuery, variables)
# convert the data into a dataFrame
data <- geospatialMeasuresToDataFrame(response)
print(attributes(data))
print(str(data))
# -
# ## Prepare data for visualisation
#
# The helper function `griddedTemporalMeasuresToDataFrame` can be used to transform the data into a geospatial dataframe. The source for this function can be found within `transformData.R` within this folder of the repository.
# +
# transform to a dataframe
griddedMonthlyWeatherData <- griddedTemporalMeasuresToDataFrame(
response, c("temperatureMeanMonthly", "temperatureMinMonthly", "temperatureMaxMonthly")
)
print(attributes(griddedMonthlyWeatherData))
# use the geojson_sf library to generate geometries for each of the 5km grids returned
weatherGridShapeGeoms <- griddedMonthlyWeatherData$location %>% geojson_sf
# create a spatial features collection
monthlyTemperature <- st_sf(
cbind(
"date"=griddedMonthlyWeatherData$Date,
"temperatureMeanMonthly"=griddedMonthlyWeatherData$temperatureMeanMonthly,
"temperatureMinMonthly"=griddedMonthlyWeatherData$temperatureMinMonthly,
"temperatureMaxMonthly"=griddedMonthlyWeatherData$temperatureMaxMonthly,
weatherGridShapeGeoms
)
)
print(head(monthlyTemperature))
# Since this data is timeseries, to simplify the plots we will only display min, mean and max for January 2018
filteredDataByDate <- filter(monthlyTemperature, as.Date(date) == as.Date("2018-01-01"))
# Generate an interactive leaflet map of all grids
tmap_mode("plot")
tmap_leaflet(
tm_shape(filteredDataByDate, name="Max Temp") +
tm_polygons(
col="temperatureMaxMonthly",
palette="YlOrRd"
) +
tm_shape(filteredDataByDate, name="Min Temp") +
tm_polygons(
col="temperatureMinMonthly",
palette="YlGnBu"
) +
tm_shape(filteredDataByDate, name="Mean Temp") +
tm_polygons(
col="temperatureMeanMonthly",
palette="Oranges"
) +
tm_scale_bar()
)
# -
# 
# ## Output to shapefile
#
# Shapefiles impose some limitations including the string length of the attributes' names. See [here](http://resources.arcgis.com/en/help/main/10.1/index.html#//005600000013000000) for more details.
# +
# Duplicate the geospatial dataframe as it will be tweaked for output as a shapefile
tempDf <- monthlyTemperature
# shapefiles cannot have attributes with names longer than 10 characters so the columns are renamed here.
names(tempDf) <- c('DATE', 'MEAN', 'MIN', 'MAX', 'geometry')
# the dataframe is written to weatherData.shp
st_write(tempDf, dsn=paste0("weatherData.shp"), delete_layer = TRUE)
# -
# ## Plotting the temperature data
#
# Since the data acquired is in a time-series format, an x-y chart is used to provide a basic visualisation of this data.
# +
# plot the dataframe as timeseries chart
df <- monthlyTemperature
df$date <- as.Date(df$date)
ggplot(data=monthlyTemperature, aes(x = date)) +
geom_line(aes(y = temperatureMeanMonthly, group=1), color="orange") +
geom_line(aes(y = temperatureMinMonthly, group=1), color="blue") +
geom_line(aes(y = temperatureMaxMonthly, group=1), color="red") +
labs(
x = "Date",
y = "Temperature (°C)",
title = "Historical Temperature Data"
) +
theme(text = element_text(size=12), axis.text.x = element_text(angle = 75))
# -
# ## Plotting soil data
#
# Soil information is data derived from modelling appropriately distributed samples across the UK. The data is available at a 1km grid from the CEH. See [here](https://app.agrimetrics.co.uk/catalog/data-sets/c246affc-9566-4f16-8156-81e0f8f71d79/overview) for more details.
# +
# get the Total Abundance of Invertebrates for the soil grids within the searched area
soilTAOI <- data$soilTotalAbundanceOfInvertebrates
# magic one liner to convert the GeoJSON location object into a list of simple features
geometry <- soilTAOI$location$shape %>% toJSON %>% geojson_sf
# bind the data to these geometries
soilTAOIData <- st_sf(cbind(soilTAOI$value, geometry))
# create a static plot, colouring the cells by Total Abundance Of Invertebrates
ggplot(soilTAOIData) + geom_sf(aes(fill = soilTAOI.value))
# +
# get the Total Abundance of Invertebrates for the soil grids within the searched area
soilPH <- data$soilPH
# magic one liner to convert the GeoJSON location object into a dataframe of simple features
geometry <- soilPH$location$shape %>% toJSON %>% geojson_sf
# create a soilData "simple feature collection"
soilPHData <- st_sf(cbind(soilPH$value, geometry))
# create a static plot, colouring the cells by pH
ggplot(soilPHData) + geom_sf(aes(fill = soilPH.value))
# -
# ## Combining the different datasets in a single plot
#
# Without performing any analysis of the data, this cell simply presents all the following geospatial data in a single plot:
# * soil pH
# * total abundance of invertebrates
# * monthly mean temperature for January 2018 (using previously filtered dataframe)
# +
# Putting it all together
# we are only going to plot the mean temperature along with soil data
tmap_mode("plot")
tmap_leaflet(
tm_shape(filteredDataByDate) +
tm_polygons("temperatureMeanMonthly", title="Temperature Mean Monthly (°C)", palette="-Greys", alpha=0.5) +
tm_shape(soilPHData) +
tm_polygons("soilPH.value", title="Soil pH", palette="-Greys", alpha=0.5) +
tm_shape(soilTAOIData) +
tm_polygons("soilTAOI.value", title="Total Abundance Of Invertebrates", palette="-Greys", alpha=0.5)
)
# -
# 
|
geospatial-workflow-examples/Rothamsted_geospatialMeasures_demo_using_utility_functions.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.10 64-bit
# name: python3
# ---
class Dog:
# class attribute
species= 'mammal'
def __init__ (self,breed,name,spots):
self.breed = breed
self.name = name
self.spots = spots
    # define methods
def bark(self,number):
        print(f'woof! my name is {self.name} and the number is {number} and I belong to the species {self.species}')
my_dog = Dog('lab','samy',False)
type(my_dog)
my_dog.species
my_dog.bark(5)
# +
import math
class Circle:
pi = math.pi
def __init__(self,radius=1):
self.radius= radius
self.area = self.pi*self.radius**2
def printPi():
print(Circle.pi)
# methods
def get_Circumference(self):
return 2* self.pi*self.radius
def get_area(self):
return self.area
# -
my_circle = Circle(30)
my_circle.pi
my_circle.radius
my_circle.get_Circumference()
Circle.printPi()
my_circle.get_area()
class Dog2:
'''
this is my class Dog2
'''
# class attribute
species= 'mammal'
def __init__ (self,breed,name,spots):
self._breed = breed
self.__name = name
self._spots = spots
    # define methods
def bark(self,number):
        print(f'woof! my name is {self.__name} and the number is {number} and I belong to the species {self.species}')
dog2 = Dog2('raza','max',False)
#print(dog2.__name)
print(Dog2.__doc__)
dog2.__name = 'max_2'
dog2.bark(5)
# +
class Animal:
def __init__(self, name):
self.name = name
print('animal created, with name '+ self.name)
def talk(self):
pass
# CLASE HIJA
class Dog (Animal):
def __init__(self, name, age):
Animal.__init__(self,name)
self.age = age
print(f'dog created, with name {self.name} and i am {self.age } years old')
# overwriting method
def talk(self):
pass
my_animal = Animal('max')
my_dog = Dog('max',10)
isinstance(my_dog,Dog) # True
isinstance(my_dog,Animal) # True
isinstance(my_dog,object) # True
issubclass(Dog,Animal) # True
issubclass(Animal,Dog) # False
# -
dir(my_dog)
print(my_dog.__str__())
print(str(my_dog))
# +
class SuperList(list):
def __len__(self):
return 1000
super_list1 = SuperList()
print(len(super_list1))
super_list1.append(5)
print(super_list1)
# +
class User:
def sign_in(self):
print('logged in')
class Wizard(User):
def __init__(self,name,power):
self.name = name
self.power = power
def attack(self):
print(f'attacking with power of {self.power}')
class Archer(User):
def __init__(self,name,arrows):
self.name = name
self.arrows = arrows
def check_arrows(self):
print(f'arrows left {self.arrows}')
def run(self):
print('run very fast')
# Multiple inheritance
class Hybrid(Wizard, Archer):
def __init__(self,name,power, arrows):
Wizard.__init__(self,name,power)
Archer.__init__(self,name,arrows)
# +
hb1 = Hybrid('hybrido', 50,100)
dir(hb1)
# +
class A:
num= 10
class B(A):
pass
class C(A):
pass
class D(B,C):
pass
# -
D.mro()
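# The C3 linearization can be made visible with a variant of the diamond hierarchy above in which each class defines a distinct attribute value (a sketch; the numbers are arbitrary):

```python
class A:
    num = 10

class B(A):
    num = 20

class C(A):
    num = 30

class D(B, C):
    pass

# method resolution order: D -> B -> C -> A -> object
mro_names = [cls.__name__ for cls in D.mro()]
print(mro_names)
print(D.num)  # attribute lookup resolves through B first
```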
type(2/2)
|
Code/2.OOP/1.OOP.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="pz-bA_uhEvK4" outputId="809cccdd-2bab-4f35-ea44-18ecbc2bec3c"
# this will ask you to authenticate with Google
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir('/content/drive/MyDrive/Colab Notebooks/Uny/DMT/Assignment 2')
# + id="9rYV_i12EYT-"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import csv
import seaborn as sns
sns.set_style("darkgrid")
# + colab={"base_uri": "https://localhost:8080/", "height": 617} id="anpHU968Eip2" outputId="8edef5f9-0e64-4616-8567-c2f774892c2f"
#Read the training data from the provided file and save it in the variable dataset as a DataFrame object
train_dataset = pd.read_csv("data/training_set_VU_DM.csv")
train_dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 617} id="tfdTrxIThzC_" outputId="0cd827af-900f-40c4-ba90-055be36e9c62"
#Read the test data from the provided file and save it in the variable dataset as a DataFrame object
test_dataset = pd.read_csv("data/test_set_VU_DM.csv")
test_dataset
# + id="ueB689fXGW7V"
#Obtain the frequency of the number of missing value for each column
def obtain_freq_of_missing_values(dataframe):
"""
Returns the frequency of missing values for each column
"""
freq_of_missing_values = dataframe.isnull().sum() / len(dataframe)
return freq_of_missing_values
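# The missing-value frequency computation can be checked on a toy DataFrame (a sketch with made-up values):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, np.nan, 3, np.nan], "b": [1, 2, 3, 4]})

# fraction of missing values per column: a -> 0.5, b -> 0.0
freq = df.isnull().sum() / len(df)
print(freq)
```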
# + colab={"base_uri": "https://localhost:8080/"} id="JooXW_l1iy8W" outputId="3b3f5721-f4a9-44a7-ff4b-bc651744cfef"
train_miss_attr = obtain_freq_of_missing_values(train_dataset)
train_miss_attr
# + colab={"base_uri": "https://localhost:8080/"} id="3j8MIpWMi9bU" outputId="e571f29f-2a2c-4695-8df5-fcaf8e6cb025"
test_miss_attr = obtain_freq_of_missing_values(test_dataset)
test_miss_attr
# + id="KWpRyjkOLDu1"
#Eliminates the columns that have more than 10% of theier data missing
def eliminate_columns(dataframe, attr_freq, cutoff_value=0.1):
"""
Eliminates the attributes that have a missing frequency over 10%
"""
attributes_to_del = attr_freq[attr_freq > cutoff_value].axes[0].tolist()
new_dataset = dataframe.drop(columns=attributes_to_del)
return new_dataset
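# The cutoff logic above can be exercised on a toy DataFrame (a sketch; column names and values are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "mostly_missing": [np.nan, np.nan, 3.0, np.nan],  # 75% missing -> dropped
    "complete": [1, 2, 3, 4],                          # 0% missing -> kept
})

freq = df.isnull().sum() / len(df)
to_drop = freq[freq > 0.1].axes[0].tolist()
cleaned = df.drop(columns=to_drop)
print(cleaned.columns.tolist())
```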
# + colab={"base_uri": "https://localhost:8080/", "height": 617} id="gBKRHPXxkQ7q" outputId="81b95953-3673-4236-ee71-49c930630e7f"
#Eliminates the columns from the training dataset that have more than 10% of their data missing
train_dataset = eliminate_columns(train_dataset, train_miss_attr)
train_dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 617} id="i4JM8mTqlyAY" outputId="5805c2cc-cc7e-4b98-c17a-0df05f98186b"
#Eliminates the columns from the test dataset that have more than 10% of their data missing
test_dataset = eliminate_columns(test_dataset, test_miss_attr)
test_dataset
# + colab={"base_uri": "https://localhost:8080/"} id="tJrn_Blpl-QM" outputId="957c2ddd-98f1-4646-a2b8-32536a8757a0"
#Obtain the attributes that are not present in test dataset
train = obtain_freq_of_missing_values(train_dataset).axes[0].tolist()
test = obtain_freq_of_missing_values(test_dataset).axes[0].tolist()
for attr in train:
if attr not in test:
print(attr)
# + id="Uh_3hOHwHDF1"
#Fill all the nan values from the review score column with the median of the dataset for the training and testing dataset
train_dataset['prop_review_score'].fillna(train_dataset['prop_review_score'].median(), inplace=True)
test_dataset['prop_review_score'].fillna(test_dataset['prop_review_score'].median(), inplace=True)
# + colab={"base_uri": "https://localhost:8080/"} id="z7Iye90AHddk" outputId="842e2a40-d19c-4248-d312-466cd05ae828"
obtain_freq_of_missing_values(train_dataset)
# + colab={"base_uri": "https://localhost:8080/"} id="U8RJ1peVHo7e" outputId="96dae171-86b5-4ebc-d67f-fde47a607a08"
obtain_freq_of_missing_values(test_dataset)
# + id="qVYihZc9Hx5c"
def separate_data(data, p=0.3):
"""Separate the data into training data and validation data, based on a specified procentage"""
valid_data_len = int(len(data) * p)
train_data = data[:-valid_data_len]
valid_data = data[-valid_data_len:]
return train_data, valid_data
# + id="86_VdCHMIAkm"
training_data, validation_data = separate_data(train_dataset)
# + colab={"base_uri": "https://localhost:8080/", "height": 617} id="k7Y4GGOnIItl" outputId="e7941f94-6594-4150-9e3f-36fd5708bb7e"
training_data
# + colab={"base_uri": "https://localhost:8080/", "height": 617} id="oRz8G2qOILQC" outputId="fed4eb35-811b-4b48-af0b-44b83a0a9dee"
validation_data
# + id="Kwuh5pVP9o0j"
#Add the relevance score: click_val for clicked rows, book_val for booked rows
def add_score(dataframe, click_val=1, book_val=5):
    dataframe["score"] = np.where(dataframe['click_bool'] == 1, click_val, 0)
    dataframe["score"] = np.where(dataframe['booking_bool'] == 1, book_val, dataframe["score"])
# + colab={"base_uri": "https://localhost:8080/"} id="XLRHm8OfAZZ3" outputId="cd6eae27-deec-4b1f-f9b5-1c0074fb51ca"
add_score(training_data)
add_score(validation_data)
# + id="T9v3p1QLNkzU"
test_dataset["score"] = 0
# + colab={"base_uri": "https://localhost:8080/", "height": 735} id="ZO6cYvxtUdT6" outputId="4315078f-3283-48e0-8e8c-a17c0cdf5577"
fig, ax = plt.subplots(figsize = (15,10))
sns.heatmap(round(training_data.corr(), 2), cmap="YlGnBu", ax=ax, linewidths=2, annot=True)
# + id="78skcUBqsv5H"
training_data.to_csv("data/processed_training_data.csv")
validation_data.to_csv("data/processed_validation_data.csv")
test_dataset.to_csv("data/processed_test_data.csv")
|
assignment02/Assignment_2_preprocess.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python Classes, Inheritance etc
#
# This notebook will include some basics of software engineering.
#
# We will try to cover both theoretical and practical concepts.
# +
class test:
def __init__(self):
print('always runs...')
def maybe_run(self):
print('running now that you\'ve called me.')
test = test()
# -
test.maybe_run()
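The heading above mentions inheritance, which the cell above does not demonstrate; a minimal sketch (the class names are made up for illustration):

```python
class Animal:
    def __init__(self, name):
        self.name = name

    def speak(self):
        return f"{self.name} makes a sound"


class Dog(Animal):       # Dog inherits __init__ from Animal...
    def speak(self):     # ...and overrides speak
        return f"{self.name} says woof"


print(Animal("generic").speak())  # generic makes a sound
print(Dog("Rex").speak())         # Rex says woof
```

Because `Dog` subclasses `Animal`, a `Dog` instance passes `isinstance(d, Animal)` and reuses the parent's constructor unchanged.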
|
abdis_machine_learning_handbook/_build/html/_sources/5_software_engineering/python_classes_inheritence_more.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
from sklearn.preprocessing import OneHotEncoder, LabelEncoder, StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline, make_pipeline, make_union
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, GridSearchCV
import warnings
warnings.filterwarnings("ignore")
# +
def drop_zero_variance_cols(df):
"""Drop columns which contain only one value"""
cols = list(df)
nunique = df.apply(pd.Series.nunique)
cols_to_drop = nunique[nunique==1].index
print("Dropping columns with zero variance [", cols_to_drop.values, "]")
df.drop(cols_to_drop, axis=1, inplace=True)
def drop_null_cols_with_threshold(df, null_threshold=0.80):
"""Drops columns containing more than len(df[col])*threshold NaN values"""
cols_to_drop = df.columns[df.isna().mean() > null_threshold].to_list()
print("Dropping columns with greater than ", str(null_threshold*100), "% NaN values [", cols_to_drop, "]")
df.drop(cols_to_drop, axis=1, inplace=True)
def preprocess_cols(df) -> pd.DataFrame:
"""Function which preprocesses the columns
"""
# Drop zero variance columns
drop_zero_variance_cols(df)
# drop_null_cols_with_threshold(df, null_threshold=0.80)
return df
# -
train_df = pd.read_csv('input/TrainingWiDS2021.csv')
test_df = pd.read_csv('input/UnlabeledWiDS2021.csv')
data_dict = pd.read_csv('input/DataDictionaryWiDS2021.csv')
train_df.columns, test_df.columns
# check if train and test have same columns
print(set(train_df.columns) - set(test_df.columns))
# Target variable to be predicted
TARGET_VARIABLE = 'diabetes_mellitus'
train_df.shape, test_df.shape
train_df.head(10)
train_df = train_df.drop(columns=['Unnamed: 0', 'encounter_id', 'hospital_id', 'icu_id'])
test_df = test_df.drop(columns=['Unnamed: 0', 'encounter_id', 'hospital_id', 'icu_id'])
train_df.dtypes.value_counts()
Y_train = train_df[TARGET_VARIABLE]
train_df = train_df.drop([TARGET_VARIABLE], axis=1, inplace=False)
# ## Plots of variables
train_df.head()
# ### Dropping columns with zero variance
train_df = preprocess_cols(train_df)
test_df = preprocess_cols(test_df)
train_df.shape, test_df.shape
# ### Dropping columns with > 69% null values
# +
# train_df[(train_df['h1_albumin_max'].isnull()) & \
# (train_df['h1_bilirubin_max'].isnull()) & \
# (train_df['h1_lactate_max'].isnull()) & \
# (train_df['h1_pao2fio2ratio_max'].isnull()) & \
# (train_df['h1_arterial_ph_max'].isnull()) &
# (train_df['h1_arterial_pco2_max'].isnull()) &
# (train_df['h1_arterial_po2_max'].isnull()) & \
# (train_df['h1_hco3_max'].isnull()) & \
# (train_df['h1_calcium_max'].isnull()) & \
# (train_df['h1_wbc_max'].isnull())].shape
# -
null_cols_train = train_df.columns[train_df.isna().mean() > 0.69].to_list()
null_cols_test = test_df.columns[test_df.isna().mean() > 0.69].to_list()
set(null_cols_train) - set(null_cols_test)
drop_null_cols_with_threshold(train_df, null_threshold=0.69)
drop_null_cols_with_threshold(test_df, null_threshold=0.69)
set(list(train_df)) - set(list(test_df))
# ### Treating Categorical and Categorical-like columns
cat_cols = list(train_df.select_dtypes(include=['object']))
int_cols = list(train_df.select_dtypes(include=['int64']))
int_cols
# +
for col in int_cols:
print(train_df[col].value_counts(), train_df[col].isna().mean())
# The int64 data type columns are clearly categorical and don't need any imputation
# +
for col in cat_cols:
print(train_df[col].value_counts(), train_df[col].isna().mean()*100)
# The object type columns need imputation
# -
train_df['age'].values
test_df['age'].values
# +
a_train = train_df[['age', 'gender', 'bmi', 'height', 'weight']]
a_test = test_df[['age', 'gender', 'bmi', 'height', 'weight']]
print(a_train['bmi'].isna().mean(), a_test['bmi'].isna().mean())
# calculate bmi based on weight and height
a_train['bmi_calc'] = a_train['weight'] / (a_train['height']*a_train['height'])*10000
a_test['bmi_calc'] = a_test['weight'] / (a_test['height']*a_test['height'])*10000
# a_train['bmi_diff'] = a_train['bmi'] - a_train['bmi_calc']
# a_test['bmi_diff'] = a_test['bmi'] - a_test['bmi_calc']
# -
a_train['bmi'].round(2) - a_train['bmi_calc'].round(2)
a_train[(a_train['bmi'].isnull()) & \
        ((a_train['height'].isnull()) | (a_train['weight'].isnull()))]
# (a_train['weight'].isnull())]
# Further notes in this section -
# - impute male and female according to age, bmi...
# - impute ethnicity according to hospital id (Hospital belonging to region)
# - hospital_admit_source and icu_admit_source imputed with None
# ### Treating Apache columns
apache_cols = list(train_df.filter(regex="apache"))
train_df[apache_cols].dtypes
# ## EDA on Training Data
train_df.describe().transpose()
# Columns containing null values
train_df.columns[train_df.isna().any()], test_df.columns[test_df.isna().any()]
# Columns not containing null values
train_df.columns[~train_df.isna().any()], test_df.columns[~test_df.isna().any()]
set(train_df.dtypes)
num_df = train_df.select_dtypes(include=['int64', 'float64'])
cat_df = train_df.select_dtypes(include=['object'])
num_min_cols = num_df.filter(regex='_min', axis=1).columns.to_list()
num_max_cols = num_df.filter(regex='_max', axis=1).columns.to_list()
num_range_cols = num_max_cols + num_min_cols
len(num_df.columns), len(cat_df.columns)
for col in cat_df.columns:
print("Column:", str(col), cat_df[col].unique(), np.divide(cat_df[col].isna().sum(), cat_df.shape[0])*100)
# +
# nominal - ethnicity, gender,
# seems like ordinal - icu_type, icu_stay_type, hospital_admit_source, icu_admit_source
# -
cat_df['gender'].mode()
cat_df[['ethnicity', 'gender', 'icu_admit_source', 'hospital_admit_source']][10:20]
num_min_columns = train_df.filter(regex = '_min', axis=1).columns
num_max_columns = train_df.filter(regex = '_max', axis=1).columns
num_id_df = train_df.filter(regex = '_id', axis=1)
num_id_df.head(5)
num_id_df['hospital_id'].value_counts()
num_id_df['icu_id'].value_counts()
num_id_df['encounter_id'].value_counts()
# encounter_id is like a patient id, so it can be dropped.
# hospital_id and icu_id can be frequency encoded
# these id columns were already dropped earlier in the notebook, so ignore labels that are absent
train_df = train_df.drop(columns=['encounter_id', 'hospital_id', 'icu_id'], errors='ignore')
test_df = test_df.drop(columns=['encounter_id', 'hospital_id', 'icu_id'], errors='ignore')
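The frequency encoding mentioned in the comment above is not implemented in this notebook; a minimal sketch of how it could look, using a made-up `icu_id` column for illustration:

```python
import pandas as pd

def frequency_encode(train: pd.DataFrame, test: pd.DataFrame, col: str) -> None:
    """Replace a high-cardinality id column with how often each id occurs in train."""
    counts = train[col].value_counts(normalize=True)  # id -> relative frequency
    train[col + "_freq"] = train[col].map(counts)
    # ids never seen in train get frequency 0 in test
    test[col + "_freq"] = test[col].map(counts).fillna(0.0)

train = pd.DataFrame({"icu_id": [1, 1, 2, 3]})
test = pd.DataFrame({"icu_id": [1, 4]})
frequency_encode(train, test, "icu_id")
print(train["icu_id_freq"].tolist())  # [0.5, 0.5, 0.25, 0.25]
print(test["icu_id_freq"].tolist())   # [0.5, 0.0]
```

The encoding is learned from the training split only, so it introduces no leakage from the test set.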
data_dict.head(10)
# ### Feature imputations
num_df = train_df.select_dtypes(include=['int64', 'float64'])
cat_df = train_df.select_dtypes(include=['object'])
# categorical features preprocessing
categorical_features = list(cat_df.columns)
categorical_transformer = Pipeline(steps = [('imputer', SimpleImputer(strategy = 'most_frequent')),
('ohe', OneHotEncoder())])
# categorical_features
# numerical features preprocessing
numerical_features = list(num_df.columns)
numerical_transformer = Pipeline(steps = [('imputer', SimpleImputer(strategy = 'median')),
('scaler', StandardScaler())])
# numerical_features
preprocessor = ColumnTransformer(transformers=[('num', numerical_transformer, numerical_features),
('cat', categorical_transformer, categorical_features)])
# ## Logistic Regression - baseline model
preprocessor.fit_transform(train_df)
clf = Pipeline(steps = [('preprocessor', preprocessor),
('classifier', LogisticRegression())])
# clf = make_pipeline(preprocessor, LogisticRegression())
clf
x_train, x_val, y_train, y_val = train_test_split(train_df, Y_train, test_size=0.2, random_state=0)
x_train.shape, x_val.shape, y_train.shape, y_val.shape
clf.fit(x_train, y_train)
y_val_pred = clf.predict(x_val)
print("model score: %.3f" % clf.score(x_val, y_val))
y_test_pred = clf.predict(test_df)
y_val_pred.shape
# ## Logistic with gridsearch cv
param_grid = {
    'classifier__penalty': ['l1', 'l2'],  # prefix must match the Pipeline step name 'classifier'
    'classifier__C': [0.1, 1.0, 10.0],
}
grid_clf = GridSearchCV(clf, param_grid, cv=10, scoring='accuracy')
grid_clf.fit(x_train, y_train)
# # Generate Solution
sol_df = pd.read_csv('input/SolutionTemplateWiDS2021.csv', usecols=['encounter_id'])
sol_df.head(), sol_df.shape
# #### 28 Jan v1 1st submission
sol_df['diabetes_mellitus'] = y_test_pred
sol_df.to_csv('output/v1_logistic_regression_28_jan.csv', index=False)
y_test_pred.shape
# #### h2o submission 30 Jan
x = pd.read_csv('output/3f9aeab185bc.csv', usecols=['encounter_id', 'predict'])
x.head()
x.sort_values('encounter_id').rename({'predict':'diabetes_mellitus'}, axis=1)[['encounter_id', 'diabetes_mellitus']].to_csv('output/v1_h2o_jan_30.csv', index=False)
|
.ipynb_checkpoints/dataset_understanding-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # This is a notebook that runs PFLOTRAN in the backend for the breakthrough curve of a column experiment
# +
# %reset -f
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact_manual
import ipywidgets as wd
from os import system
## New functions
import jupypft.model as mo
import jupypft.parameter as pm
# -
ConcentrationAtInlet = 1.66E-16 #mol/L
InjectTimeInPoreVol = 1.0
TemplateFile = "./template.in"
ColumnModel = mo.Model(templateFile=TemplateFile)
# Parameters with a fixed value:
# +
Porosity = pm.Real(tag="<porosity>", value=0.37, units="adim")
FlowVelocity = pm.Real(tag="<darcyVel>", value=0.7585, units="cm/h")
ColumnLenght = pm.Real(tag="<colLenght>",value=0.50, units="m")
L = ColumnLenght.value*100.
print(L)
# -
# Values can be printed with their corresponding tag:
print(Porosity)
# Parameters with a fixed value but calculated from other parameters
# +
ElutionTime = pm.Real(tag="<elutionTime>",\
value=3600*L*InjectTimeInPoreVol*Porosity.value/FlowVelocity.value,\
units="s")
EndTime = pm.Real(tag="<endTime>",\
value=10.*InjectTimeInPoreVol,\
units="d")
CFL = 1.0
nZ = 50
deltaX = L/nZ
TimeStep = pm.Real(tag="<timeStep>",\
value=CFL*deltaX*Porosity.value/FlowVelocity.value,\
units="h")
# -
# Parameters whose values are set by a widget:
LongDisp = pm.WithSlider(tag="<longDisp>",units="cm")
RateAttachment = pm.WithSlider(tag="<katt>",units="1/h")
RateDetachment = pm.WithSlider(tag="<kdet>",units="1/h")
RateDecayAqueo = pm.WithSlider(tag="<decayAq>",units="1/h")
RateDecayImmob = pm.WithSlider(tag="<decayIm>",units="1/h")
# Have a list of all parameters. This should be the connection between the model and variable classes (to be done).
listOfParameters = pm.Parameter.list_of_vars()
# Plotting stuff
def plotResults(FILE = "./pflotran-obs-0.tec"):
textBoxDimensionless = \
"Péclet = $\\dfrac{\\rm advection}{\\rm dispersion} = $" + \
mo.Peclet(LongDisp.value,L,FlowVelocity.value,asString=True)
ObservationPoint = np.loadtxt(FILE,delimiter=",",skiprows=1)
Cnorm = ObservationPoint[:,1]/ConcentrationAtInlet
TimeInPoreVolumes = 24.0 * ObservationPoint[:,0] * FlowVelocity.value/(L*Porosity.value)
Legend=["$\\dfrac{[V_{(aq)}]}{[V_{(aq)}]_0}$"]
plt.figure(figsize=(12,8),facecolor="white")
## Plot log-scale
ax1 = plt.subplot(2,2,1)
ax1.plot(TimeInPoreVolumes,Cnorm,c="purple",lw=3)
ax1.set_yscale("symlog",\
linthresh=1.0E-6,subs=[1,2,3,4,5,6,7,8,9])
ax1.set_ylim([-1.0E-7,1.15])
ax1.set_xlim([0,6])
ax1.set_xlabel("Pore Volume [$-$]",fontsize="large")
ax1.axvline(x=InjectTimeInPoreVol,ls="dotted",c="gray",lw=1)
ax1.axhline(y=1.0,ls="dashed",c="teal",lw=1)
ax1.axhspan(ymin=-1.0E-7,ymax=1.0E-6,facecolor="pink",alpha=0.2)
## Péclet number
ax1.text(5.5,5.0E-3,textBoxDimensionless,\
bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.25),\
horizontalalignment='right')
## Plot linear-scale
ax2 = plt.subplot(2,2,2)
ax2.plot(TimeInPoreVolumes,Cnorm,c="purple",lw=3,label=Legend[0])
ax2.set_ylim([-1.0E-2,1.02])
ax2.set_xlim([0,6])
ax2.set_xlabel("Pore Volume [$-$]",fontsize="large")
ax2.axvline(x=InjectTimeInPoreVol,ls="dotted",c="gray",lw=1)
ax2.axhline(y=1.0,ls="dashed",c="teal",lw=1)
ax2.legend(fontsize="large",loc="upper right")
ax2.text(4.0,0.9,"max(C)/C₀ = {:.2f}%".format(max(Cnorm*100)),\
bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.25),\
horizontalalignment='right')
## Damköhler II numbers
Damkohler = [mo.DamkII(RateAttachment.value,\
LongDisp.value,\
FlowVelocity.value,\
L,asString=False),
mo.DamkII(RateDetachment.value,\
LongDisp.value,\
FlowVelocity.value,\
L,asString=False),
mo.DamkII(RateDecayAqueo.value,\
LongDisp.value,\
FlowVelocity.value,\
L,asString=False),
mo.DamkII(RateDecayImmob.value,\
LongDisp.value,\
FlowVelocity.value,\
L,asString=False)]
tickLabels = ["$k_{\\rm att}$","$k_{\\rm det}$","λ$_{\\rm aq}$","λ$_{\\rm im}$"]
ax3 = plt.subplot(2,2,3)
ax3.bar(x=range(len(Damkohler)), height=Damkohler, tick_label=tickLabels, log=True, hatch="/")
ax3.set_ylabel("Damköhler(II) = $\\dfrac{\\rm reaction}{\\rm dispersion}$",fontsize='large')
## Parameters of the reaction sandbox
parametersTable = [["$k_{\\rm att}$",RateAttachment.strValue,"h$^{-1}$"],\
["$k_{\\rm det}$",RateDetachment.strValue,"h$^{-1}$"],\
["$\lambda_{\\rm aq}$",RateDecayAqueo.strValue,"h$^{-1}$"],\
["$\lambda_{\\rm im}$",RateDecayImmob.strValue,"h$^{-1}$"],\
["α$_{\\rm L}$",LongDisp.strValue,"cm"]]
ax4 = plt.subplot(2,2,4)
table = ax4.table(cellText=parametersTable,\
colLabels=["Parameter","Value","Unit"],\
loc='center',colWidths=[0.3,0.3,0.3],edges="horizontal")
table.set_fontsize(34)
table.scale(1,2.5)
ax4.axis('off')
plt.tight_layout()
plt.show()
# Dummy function that activates when the widget Interact button is pressed
def RunAll(logLongDisp,logKAtt,logKDet,logDecayAq,logDecayIm):
ColumnModel.cloneTemplate()
RateAttachment.value = logKAtt
RateDetachment.value = logKDet
RateDecayAqueo.value = logDecayAq
RateDecayImmob.value = logDecayIm
LongDisp.value = logLongDisp
for parameter in listOfParameters:
ColumnModel.replaceTagInFile(parameter)
ColumnModel.runModel()
ColumnModel.fixedToCSV(outputFile="pflotran-obs-0.tec")
plotResults()
# Define the sliders we want to use in the WithSlider parameters. This cell assigns Float sliders to all:
# +
RateAttachment.slider = \
wd.FloatSlider(value=0.0400, min=0, max=1, step=0.1,\
description=r'\(k_{\text{att}}\) [1/h]', \
style={'description_width': 'initial'})
RateDetachment.slider = \
wd.FloatSlider(value=0.0026, min=0, max=1, step=0.1,\
description=r'\(k_{\text{de}}\) [1/h]', \
style={'description_width': 'initial'})
RateDecayAqueo.slider = \
wd.FloatSlider(value=0.0070, min=0, max=1, step=0.1,\
description=r'\(\lambda_{\text{aq}}\) [1/h]', \
style={'description_width': 'initial'})
RateDecayImmob.slider = \
wd.FloatSlider(value=0.0350, min=0, max=1, step=0.1,\
description=r'\(\lambda_{\text{im}}\) [1/h]', \
style={'description_width': 'initial'})
# -
# This cell below assigns `FloatLogSlider` widgets to all
# +
RateAttachment.slider = \
wd.FloatLogSlider(value=0.0400,base=10, min=-30, max=1, step=0.1,\
description=r'\(k_{\text{att}}\) [1/h]', \
style={'description_width': 'initial'})
RateDetachment.slider = \
wd.FloatLogSlider(value=0.0026,base=10, min=-30, max=1, step=0.1,\
description=r'\(k_{\text{de}}\) [1/h]', \
style={'description_width': 'initial'})
RateDecayAqueo.slider = \
wd.FloatLogSlider(value=0.0070,base=10, min=-30, max=1, step=0.1,\
description=r'\(\lambda_{\text{aq}}\) [1/h]', \
style={'description_width': 'initial'})
RateDecayImmob.slider = \
wd.FloatLogSlider(value=0.0350,base=10, min=-30, max=1, step=0.1,\
description=r'\(\lambda_{\text{im}}\) [1/h]', \
style={'description_width': 'initial'})
LongDisp.slider = \
wd.FloatLogSlider(value=0.2,base=10, min=-30, max=2, step=0.1,\
description=r'\(\alpha_{L}\) [cm]', \
style={'description_width': 'initial'})
# -
# This cell below assigns Float sliders to only the LongDisp
LongDisp.slider = \
wd.FloatSlider(value=0.2,min=0.0, max=2, step=0.1,\
description=r'\(\alpha_{L}\) [cm]', \
style={'description_width': 'initial'})
interact_manual(RunAll,\
logLongDisp = LongDisp.slider,\
logKAtt = RateAttachment.slider,\
logKDet = RateDetachment.slider,\
logDecayAq = RateDecayAqueo.slider,\
logDecayIm = RateDecayImmob.slider\
);
# **Péclet = Advection Rate/Dispersion Rate**
#
# $
# \text{P}_{é} = \dfrac{LU}{D} = \dfrac{LU}{\alpha_L U} = \dfrac{L}{\alpha_L}
# $
# **Damköhler(II) = Reaction Rate/Dispersion Rate**
#
# $
# \text{D}_{A,II} = \dfrac{L^2k}{D} = \dfrac{L^2k}{\alpha_LU}
# $
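The two dimensionless numbers above can be computed directly from their definitions; a minimal sketch with plain functions (independent of the `mo` helpers used elsewhere, and assuming consistent length/time units for all inputs):

```python
def peclet(alpha_L: float, L: float) -> float:
    """Péclet number with dispersion D = alpha_L * U: Pe = L*U/D = L/alpha_L."""
    return L / alpha_L

def damkohler_II(k: float, alpha_L: float, U: float, L: float) -> float:
    """Damköhler(II) number: Da_II = L**2 * k / D, with D = alpha_L * U."""
    return L**2 * k / (alpha_L * U)

# Example with the column defaults used above: L = 50 cm, alpha_L = 0.2 cm, U = 0.7585 cm/h
print(peclet(0.2, 50.0))                       # 250.0
print(damkohler_II(0.04, 0.2, 0.7585, 50.0))   # ~659.2
```

Note that `U` cancels out of the Péclet number precisely because the dispersion coefficient is modeled as proportional to the flow velocity.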
# "Damköhler(II) = $\\dfrac{\\rm reaction}{\\rm dispersion}$"+"\n" +\
# "Da$^{\\rm kat} = $"+ PFLO.DamkII(RateAttachment.get_value(),\
# LongDisp.get_value(),\
# FlowVelocity.get_value(),\
# L,asString=True) +"\n" +\
# "Da$^{\\rm kde} = $"+ PFLO.DamkII(RateDetachment.get_value(),\
# LongDisp.get_value(),\
# FlowVelocity.get_value(),\
# L,asString=True) +"\n" +\
# "Da$^{\\rm λaq} = $"+ PFLO.DamkII(RateDecayAqueo.get_value(),\
# LongDisp.get_value(),\
# FlowVelocity.get_value(),\
# L,asString=True) +"\n" +\
# "Da$^{\\rm λim} = $"+ PFLO.DamkII(RateDecayImmob.get_value(),\
# LongDisp.get_value(),\
# FlowVelocity.get_value(),\
# L,asString=True) +"\n" +\
#
#
# ## Rate values
# ax1.text(5.5,5.0E-4,textBoxKin,\
# bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5),\
# horizontalalignment='right')
#
# textBoxKin = \
# "$k_{\\rm att} = $" + RateAttachment.get_strValue() + " h$^{-1}$" +"\n" + \
# "$k_{\\rm det} = $"+ RateDetachment.get_strValue() + " h$^{-1}$" +"\n" + \
# "$\lambda_{\\rm aq} = $ "+ RateDecayAqueo.get_strValue() + " h$^{-1}$" +"\n" + \
# "$\lambda_{\\rm im} = $ "+ RateDecayImmob.get_strValue() + " h$^{-1}$" +"\n" + \
# "α$_{\\rm L} = $ " + LongDisp.get_strValue() + " cm "
# $\lambda_1 = \dfrac{\ln{10}}{D}$
|
notebooks/WithPFLOTRAN/1DColumn/1D_Column.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 1. Import Libs
import torch
from torchvision import transforms
from torch.autograd import Variable
from dataset import DatasetFromFolder
from model import Generator
import utils
import argparse
import os
# ## 2. Setting hyperparameters
# +
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', required=False, default='horse2zebra', help='input dataset')
parser.add_argument('--batch_size', type=int, default=1, help='test batch size')
parser.add_argument('--ngf', type=int, default=32)
parser.add_argument('--num_resnet', type=int, default=6, help='number of resnet blocks in generator')
parser.add_argument('--input_size', type=int, default=256, help='input size')
params = parser.parse_args([])
print(params)
# Directories
data_dir = 'data/' + params.dataset + '/'
save_dir = params.dataset + '_test_results/'
model_dir = params.dataset + '_model/'
if not os.path.exists(save_dir):
os.mkdir(save_dir)
if not os.path.exists(model_dir):
os.mkdir(model_dir)
# -
# ## 3. Load Dataset
# ### 3.1 Preprocessing
transform = transforms.Compose([
    transforms.Resize((params.input_size, params.input_size)),  # torchvision's Resize (formerly Scale)
transforms.ToTensor(),
transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
])
# ### 3.2 Test data
# +
test_data_A = DatasetFromFolder(data_dir, subfolder='testA', transform=transform)
test_data_loader_A = torch.utils.data.DataLoader(
dataset=test_data_A, batch_size=params.batch_size, shuffle=False)
test_data_B = DatasetFromFolder(data_dir, subfolder='testB', transform=transform)
test_data_loader_B = torch.utils.data.DataLoader(
dataset=test_data_B, batch_size=params.batch_size, shuffle=False)
# -
# ## 4. Load Models
G_A = Generator(3, params.ngf, 3, params.num_resnet)
G_B = Generator(3, params.ngf, 3, params.num_resnet)
G_A.cuda()
G_B.cuda()
G_A.load_state_dict(torch.load(model_dir + 'generator_A_param.pkl'))
G_B.load_state_dict(torch.load(model_dir + 'generator_B_param.pkl'))
# ## 5. Model Test
# +
for i, real_A in enumerate(test_data_loader_A):
# input image data
real_A = Variable(real_A.cuda())
# A --> B --> A
fake_B = G_A(real_A)
recon_A = G_B(fake_B)
# Show result for test data
utils.plot_test_result(real_A, fake_B, recon_A, i, save=True, save_dir=save_dir + 'AtoB/')
print('%d images are generated.' % (i + 1))
for i, real_B in enumerate(test_data_loader_B):
# input image data
real_B = Variable(real_B.cuda())
# B -> A -> B
fake_A = G_B(real_B)
recon_B = G_A(fake_A)
# Show result for test data
utils.plot_test_result(real_B, fake_A, recon_B, i, save=True, save_dir=save_dir + 'BtoA/')
print('%d images are generated.' % (i + 1))
# -
|
19) ImageTranslation/CycleGAN/3. CycleGAN_test.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import astropy.units as u
import astropy.constants as c
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from tqdm import tqdm
import sys
sys.path.insert(0, "/home/quadry/binary_planet_host_project")
import kepler
import astropy.units as u
import astropy.constants as c
from scipy import (special, stats)
from functions import single_probability, simulate
import matplotlib.tri as tri
G =c.G.to(u.au ** 3 / u.M_sun / u.day ** 2).value
observing_span = 668
sys.path.insert(0, "/home/quadry/binary_planet_host_project")
proc_kic = pd.read_csv('/home/quadry/binary_planet_host_project/data/bright_kic_data-xm-rc.0.csv')
# +
sns.set_style("ticks")
plt.rcParams['figure.figsize'] = [16, 12]
plt.rcParams['font.family'] = "serif"
plt.rcParams["errorbar.capsize"] = 20
params = {'legend.fontsize': 'xx-large',
'axes.labelsize': 'xx-large',
'axes.titlesize':'xx-large',
'xtick.labelsize':'xx-large',
'ytick.labelsize':'xx-large'}
plt.rcParams['mathtext.fontset'] = 'dejavuserif'
plt.rcParams.update(params)
plt.rc('font',**{'family':'serif','sans-serif':['Times']})
plt.rcParams['font.family'] = "serif"
# -
# Here we convert the reported radial_velocity_error to the real RV error and bin the real stars into 10x10 bins in Rp visual magnitude and Bp-Rp color
# +
proc_kic = proc_kic [proc_kic ['radial_velocity_error'].notna()]
m_g = proc_kic['phot_rp_mean_mag']
proc_kic['converted_sigma_1'] = np.sqrt((2*proc_kic['radial_velocity_error']**2*proc_kic['rv_nb_transits']-0.0242*proc_kic['rv_nb_transits'])/np.pi)
proc_kic['color bins'] = pd.cut(proc_kic['bp_rp'], 10)
proc_kic['mag bins'] = pd.cut(proc_kic['phot_rp_mean_mag'], 10)
proc_kic['color label'] = pd.cut(proc_kic['bp_rp'], 10, labels=False)
proc_kic['mag label'] = pd.cut(proc_kic['phot_rp_mean_mag'], 10, labels=False)
# +
p = proc_kic[proc_kic['rv_p_single']>.9].dropna().groupby(['color bins', 'mag bins']).agg('mean')[['converted_sigma_1','radial_velocity_error'
,"rv_gp_predict_theta",
"rv_gp_predict_mu_single","rv_gp_predict_sigma_single","rv_gp_predict_sigma_multiple",'color label','mag label']]
p.unstack()
p = p.dropna()
p = p.reset_index(level=['color bins', 'mag bins'])
colors = []
for i in range(len(p)):
colors.append(p['color bins'].loc[i].mid)
p['colors'] = colors
mags = []
for i in range(len(p)):
mags.append(p['mag bins'].loc[i].mid)
p['mags'] = mags
p['ct'] = 1
# -
# Now we simulate single stars using the same machinery as for binaries, but with the mass ratio q set to 0
# +
# #%%time
mag_bins = []
for i in p['mag label'].unique():
mag_bins.append(p[p['mag label']==i].sample(n=100000, replace=True))
bin_info = pd.concat(mag_bins)
N = len(bin_info)
maxn = 10
number_of_obs = np.random.randint(7, maxn, size=N)
mask = np.arange(maxn)[None, :] < number_of_obs[:, None]
t = np.random.uniform(0, observing_span, (N, maxn))
m_1 = np.random.uniform(low=1, high=2.5, size=N)
q = np.zeros(N)
#m_2 = np.random.uniform(low=0.2, high=1, size=N)
e = np.random.random_sample(size=N)
i = np.random.uniform(low=0, high=90, size=N)
a = 10**np.random.uniform(low=-2, high=2, size=N)
phi = np.random.uniform(0,2*np.pi, size=N)
w = np.random.uniform(0,2*np.pi, size=N)
#bin_info = p.sample(n=N, replace=True)
sigma = bin_info['converted_sigma_1'].values[:, None]
r = simulate.RadialVelocity(m_1[:, None], q[:, None], e[:, None], i[:, None], a[:, None], phi[:, None], w[:, None], t, sigma)[0]
r[~mask] = 0.0
norm = 1 / np.sum(mask, axis=1)
mu = np.sum(r, axis=1) * norm
np.subtract(r, mu[:, None], out=r)
np.square(r, out=r)
r[~mask] = 0.0
np.sum(r, axis=1, out=mu)
np.multiply(mu, norm, out=mu)
std = np.sqrt(mu, out=mu)
single_data = pd.DataFrame()
single_data['m1']=m_1
single_data['q']=q
single_data['e']=e
single_data['i']=i
single_data['a']=a
single_data['t']=np.nan
single_data['t']=number_of_obs
single_data['phi']=phi
single_data['w']=w
single_data['RV'] = np.mean(r, axis=1)
single_data['RV jitter'] = std
singles = pd.concat((single_data,bin_info.reset_index()),axis=1)
# -
# +
p_theta, p_mu_single, p_sigma_single, p_sigma_multiple,color,mag = singles[["rv_gp_predict_theta","rv_gp_predict_mu_single",
"rv_gp_predict_sigma_single","rv_gp_predict_sigma_multiple",'color label','mag label']][:N].values.T
y =(singles['RV jitter'])[:N]
single_p = (single_probability.calculate_rv_p_single(p_theta, p_mu_single, p_sigma_single, p_sigma_multiple, y))
singles['rv_p'] = np.nan
singles.iloc[:N, singles.columns.get_loc('rv_p')] = single_p
# -
# +
h =singles[singles['rv_p']<.8].groupby(['colors', 'mags']).agg('count')['ct']
#h = h.reset_index(level=['colors', 'mags'])
j =singles.groupby(['colors', 'mags']).agg('count')['ct']
#j = j.reset_index(level=['colors', 'mags'])
rate = pd.DataFrame(h/j)
rate1 = rate.reset_index(level=['colors', 'mags',])
x=rate1.dropna()['colors'].values
y=rate1.dropna()['mags'].values
z=rate1.dropna()['ct'].values
# -
triang = tri.Triangulation(x, y)
plt.tricontour(x, y, z, 10, linewidths=1, colors='k')
plt.tricontourf(x, y, z, 10, norm=plt.Normalize(vmax=z.max(), vmin=z.min()),cmap='viridis')
plt.ylabel('Rp [mag]')
plt.xlabel('Bp-Rp [color]')
cbar=plt.colorbar()
cbar.set_label('false positive rate')
#plt.savefig('false_positive_rate.png', bbox_inches='tight')
|
magic_conch/notebooks/false_positive_rate.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch as tc
import matplotlib.pyplot as plt
from sklearn import datasets
import numpy as np
# +
X1, y1=datasets.make_circles(n_samples=5000, factor=.6,
noise=.05)
X2, y2 = datasets.make_blobs(n_samples=1000, n_features=2, centers=[[1.2,1.2]], cluster_std=[[.1]],
random_state=9)
X = np.concatenate((X1, X2))
print(X1)
plt.scatter(X[:, 0], X[:, 1], marker='o')
plt.show()
# -
from sklearn.cluster import DBSCAN
y_pred = DBSCAN().fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
plt.show()
print(tc.__version__)
|
jupyter/python/brokersWay.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Utility to merge two sets of nodes together
#
# Sometimes we get a new set of nodes from a different fetch; this utility helps merge them with the existing ones.
# +
import pickle
INPUT_NODES_PKL_FILE_1 = "data/100nodes.pkl"
INPUT_NODES_PKL_FILE_2 = "data/20nodes.pkl"
OUTPUT_NODES_PKL_FILE = "data/combined.pkl"
nodes1 = {}
with open(INPUT_NODES_PKL_FILE_1, "rb") as file:
nodes1 = pickle.load(file)
nodes2 = {}
with open(INPUT_NODES_PKL_FILE_2, "rb") as file:
nodes2 = pickle.load(file)
print(f"Input 1: {INPUT_NODES_PKL_FILE_1}: {len(nodes1):,} nodes")
print(f"Input 2: {INPUT_NODES_PKL_FILE_2}: {len(nodes2):,} nodes")
# +
combined_nodes = {**nodes1, **nodes2}  # merge the two dicts; nodes2 wins on duplicate keys
with open(OUTPUT_NODES_PKL_FILE, "wb") as file:
pickle.dump(combined_nodes, file)
print(f"Output: {OUTPUT_NODES_PKL_FILE}: {len(combined_nodes):,} nodes")
# +
# EXTRAS - Not necessary to run
###############################
# Only keeps nodes that are in the sample set
import pandas as pd
# Accounts input
USERS_FILENAME = "data/samples116-info-with-labels.xlsx"
USERS_SHEET = "samples"
SCREEN_NAME_COL = "screen_name"
users_df = pd.read_excel(USERS_FILENAME, sheet_name=USERS_SHEET)
screen_names = users_df[SCREEN_NAME_COL].to_list()
del users_df
not_in_samples_screen_names = set(combined_nodes.keys()) - set(screen_names)
for name in not_in_samples_screen_names:
del combined_nodes[name]
with open(OUTPUT_NODES_PKL_FILE, "wb") as file:
pickle.dump(combined_nodes, file)
print(f"Output: {OUTPUT_NODES_PKL_FILE}: {len(combined_nodes):,} nodes")
|
combine-nodes.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/cverluise/PatCit/blob/nightly/notebook/frontpage_wiki.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="CVnwI8SrNJ5-"
from google.colab import files
from google.cloud import bigquery
from google.oauth2 import service_account
# + [markdown] id="a452qJ5CQ-yI"
# # Set up
# + [markdown] id="_0Z6CcpLSAgk"
# > If you don't have the JSON file that contains your key yet, follow the instructions to [create a service account](https://cloud.google.com/docs/authentication/getting-started#creating_a_service_account)
# + id="qjhb04pcNRng" outputId="686423f6-61bc-4527-e8e1-d841784f09b4" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 72}
# Upload your credentials
uploaded = files.upload()
# + id="KRTOUU1KQhpC"
key_path = "./npl-parsing-3412b4cafd20.json" # change the name to match your credentials file name
# + id="LDRrtylmPv4h"
credentials = service_account.Credentials.from_service_account_file(
key_path,
scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
client = bigquery.Client(
credentials=credentials,
project=credentials.project_id,
)
# + id="mIs-pmepLLeC"
wiki_table = "patcit-public-data.frontpage.wiki"
# + [markdown] id="HYySwI1MSvrO"
# # Graph
# + [markdown] id="vY1bNjXNfcO5"
# The graph can be seen as a bipartite graph where the 2 disjoint sets of nodes are:
#
# 1. wiki pages
# 1. patents
#
# Graphic representation of a bipartite directed graph [here](https://www.researchgate.net/publication/333505764/figure/fig5/AS:764459744247808@1559272814030/A-complete-directed-bipartite-graph-between-the-sets-of-likes-and-dislikes_W640.jpg).
#
# **Warning** This is a simplified representation. IRL, wiki pages can cite patents as well and both patents and wiki pages exhibit (a lot of) "within" citations.
#
#
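# The in- and out-degrees queried with SQL below can be sketched in plain Python on a
# toy citation table (the patent numbers and page titles here are made up):

```python
# patent -> wiki pages it cites (hypothetical data)
citations = {
    "US-1": ["Machine_learning", "Graphene"],
    "US-2": ["Graphene"],
}
in_degree = {}  # citations received by each wiki page
for patent, pages in citations.items():
    for page in pages:
        in_degree[page] = in_degree.get(page, 0) + 1
out_degree = {p: len(pages) for p, pages in citations.items()}  # citations made
print(in_degree)   # {'Machine_learning': 1, 'Graphene': 2}
print(out_degree)  # {'US-1': 2, 'US-2': 1}
```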
# + [markdown] id="3RCUopw8YACj"
# ### Nodes
# + id="81dSDu4ASyPx"
query = f"""
SELECT
COUNT(patcit_id) as nb_wiki_nodes
FROM
`{wiki_table}`
"""
# + id="XzTYXRWfUBT3"
tmp = client.query(query).to_dataframe()
# + id="O0n_souqUR2n" outputId="8b95e11a-b4eb-4a6c-a7a9-1577f0b01349" colab={"base_uri": "https://localhost:8080/", "height": 35}
print(f"There are {int(tmp['nb_wiki_nodes'].values)} wiki nodes")
# + [markdown] id="bmvA0NObUx6g"
# > Note: we know that there are still duplicates
# + id="xAw_P17YeFOv"
query = f"""SELECT
COUNT(DISTINCT(cited_by.publication_number)) as nb_patent_nodes
FROM
`{wiki_table}`,
UNNEST(cited_by) as cited_by
"""
# + id="LINvchIFeMqn"
tmp = client.query(query).to_dataframe()
# + id="OPxFZdvNebxh" outputId="21e7559f-0864-437c-fae8-464969864983" colab={"base_uri": "https://localhost:8080/", "height": 35}
print(f"There are {int(tmp['nb_patent_nodes'].values)} patent nodes")
# + [markdown] id="IOXc37MRYDRX"
# ### Edges
# + id="UqbcY9I3X9Zg"
query = f"""
SELECT
SUM(is_cited_by_count) as nb_edges
FROM
`{wiki_table}`
"""
# + id="9hMwrwYUmz0U"
tmp = client.query(query).to_dataframe()
# + id="zG3BzWj8mz0Y" outputId="917c2234-4f15-4765-96f4-6c881eef39fd" colab={"base_uri": "https://localhost:8080/", "height": 35}
print(f"There are {int(tmp['nb_edges'].values)} edges")
# + [markdown] id="8l3u0WD4YIA4"
# ### Degrees
# + [markdown] id="lm2pGakgoHG5"
# #### In-degree
# + id="5wnKCE8TY90W"
query = f"""
SELECT
is_cited_by_count,
COUNT(patcit_id) as nb_occ,
FROM
`{wiki_table}`
GROUP BY
is_cited_by_count
"""
# + id="G7-W4ra1bttN"
tmp = client.query(query).to_dataframe()
# + id="O-XG-T7PcLSo" outputId="decb674a-95cc-4431-d805-35ea04f3581d" colab={"base_uri": "https://localhost:8080/", "height": 299}
(tmp
.set_index("is_cited_by_count")
.iloc[:20] # truncated
.plot(kind="bar", title="Degree distribution (in-degree)"));
# + [markdown] id="yzlvoWb8QADP"
# The in-degree distribution has a fat tail
# + [markdown] id="6AfrdpHvoPgL"
# #### Out degree
# + id="Ko8xxiVGn7nH"
query = f"""WITH
tmp AS (
SELECT
COUNT(cited_by.publication_number) AS nb_cit_wiki,
cited_by.publication_number
FROM
`{wiki_table}`,
UNNEST(cited_by) AS cited_by
GROUP BY
cited_by.publication_number )
SELECT
nb_cit_wiki,
COUNT(nb_cit_wiki) as nb_occ
FROM
tmp
GROUP BY
nb_cit_wiki"""
# + id="wZGyL3zxoOlL"
tmp = client.query(query).to_dataframe()
# + id="E6RMlDp1oOlO" outputId="a3ffa950-a970-49f6-fc1e-5fa08c04732a" colab={"base_uri": "https://localhost:8080/", "height": 299}
(tmp
.set_index("nb_cit_wiki")
.iloc[:20] # truncated
.plot(kind="bar", title="Degree distribution (out-degree)"));
# + [markdown] id="wVBThkoYY-kH"
# ### Centrality
# + id="NtL9CGBNaVth"
query = f"""SELECT
item,
SUM(is_cited_by_count) AS nb_cit
FROM
{wiki_table},
UNNEST(item) AS item
GROUP BY
item"""
# + id="1noeATk4aaxH"
tmp = client.query(query).to_dataframe()
# + id="ArhP4rT7awhz" outputId="28782b88-4201-4208-890e-0823f2dfd485" colab={"base_uri": "https://localhost:8080/", "height": 348}
# Top n most cited items (rough measure of centrality)
(tmp
.sort_values("nb_cit", ascending=False)
.head(10))
# + [markdown] id="947rIHfoRFli"
# # Characteristics of wiki citations
# + [markdown] id="GWw56YIQOsEj"
# In short, the "average" wiki citation is coming from:
#
# - the applicants
# - of a patent published at the USPTO
# - in the 2010s
#
# > The technological class of the citing patents is biased towards cpc codes which don't appear anymore in the cpc scheme (see https://www.cooperativepatentclassification.org/cpcSchemeAndDefinitions/table)
# + [markdown] id="aB_Knx4h0Uw8"
# ### By year
# + id="6FUriUOyVNet"
query = """
SELECT
CAST(cited_by.publication_date/10000 AS INT64) as publication_year,
COUNT(cited_by.publication_number) as nb_citations
FROM
`patcit-public-data.frontpage.wiki`,
UNNEST(cited_by) as cited_by
GROUP BY
publication_year
"""
# + id="lRI16o3G37tE"
tmp = client.query(query).to_dataframe()
# + id="LgvJAYuWNk2P" outputId="00631387-9775-41bb-d2d1-0ca8ba45fcbf" colab={"base_uri": "https://localhost:8080/", "height": 296}
(tmp
.sort_values("publication_year")
.set_index("publication_year")
.plot(title="Nb wiki citations by publication_year", xlim=(2000, 2020)));
# + [markdown] id="xH_Y9-CC0kvW"
# ### By country
# + id="6rFnYQQCWmn1"
query = f"""
SELECT
SPLIT(cited_by.publication_number, "-")[
OFFSET
(0)] AS country_code,
COUNT(cited_by.publication_number) AS nb_cit
FROM
{wiki_table},
UNNEST(cited_by) AS cited_by
GROUP BY
country_code
"""
# + id="GrdH9JEf4dEB" outputId="42c1e8a8-d901-43de-a50e-aa4698b5bf0f" colab={"base_uri": "https://localhost:8080/", "height": 304}
(tmp
.sort_values("nb_cit", ascending=False)
.set_index("country_code")
.plot(kind="bar", title="Nb wiki citations by patent office"));
# + [markdown] id="XVVU6VFE0r0I"
# ### By origin (App, ISR, etc)
# + id="6jHtoSAKUQ4g"
query = f"""SELECT
cited_by.origin AS origin,
COUNT(cited_by.publication_number) AS nb_cit
FROM
{wiki_table},
UNNEST(cited_by) AS cited_by
GROUP BY
origin"""
# + id="x7ch8GrB6R7R"
tmp = client.query(query).to_dataframe()
# + id="pexUtXC06WqJ" outputId="6a09f8d6-8a09-49f7-e174-205aa2753e59" colab={"base_uri": "https://localhost:8080/", "height": 305}
(tmp.sort_values("nb_cit", ascending=False)
.set_index("origin")
.plot(kind="bar", title="Nb wiki citations by origin"));
# + [markdown] id="zZ8cGiMj0Hc-"
# ### By technological class
# + id="M5sovbmwVflH"
query_all = """SELECT
cpc.code AS cpc_code,
COUNT(publication_number) AS nb # not distinct, just the nb of lines
FROM
`patents-public-data.patents.publications`,
UNNEST(cpc) AS cpc
WHERE
country_code="US"
AND publication_date>=20100000
GROUP BY
cpc_code
ORDER BY
nb DESC """
query = """WITH
cpcs AS (
WITH
pubnums AS (
WITH
tmp AS (
SELECT
cited_by.publication_number AS cited_by_pubnum_
FROM
`npl-parsing.patcit.fp_wiki_dev_beta`,
UNNEST(cited_by) AS cited_by)
SELECT
cited_by_pubnum
FROM
tmp,
UNNEST(cited_by_pubnum_) AS cited_by_pubnum )
SELECT
pubnums.cited_by_pubnum,
p.cpc
FROM
pubnums
LEFT JOIN
`patents-public-data.patents.publications` p
ON
pubnums.cited_by_pubnum=p.publication_number)
SELECT
cpc.code AS cpc_code,
COUNT(cited_by_pubnum) AS nb # not distinct, just the nb of lines
FROM
cpcs,
UNNEST(cpc) AS cpc
GROUP BY
cpc_code
ORDER BY
nb DESC"""
# + id="QOjXZOZt5Z6z"
cpc_df = client.query(query_all).to_dataframe()
cpc_tmp = client.query(query).to_dataframe()
# + id="JcTPYxX65ydz"
def get_odds(lvl: str):
    """Compute the odds ratio by cpc code at level <lvl>"""
    cpc_level = {"section": 1, "class": 3, "subclass": 4}
    assert lvl in list(cpc_level.keys())
    cpc_df[f"cpc_{lvl}"] = (cpc_df["cpc_code"]
                            .apply(lambda x: x[:cpc_level[lvl]]))
    cpc_tmp[f"cpc_{lvl}"] = (cpc_tmp["cpc_code"]
                             .apply(lambda x: x[:cpc_level[lvl]]))
    cpc_df_sub = (cpc_df.groupby(f"cpc_{lvl}").sum()/cpc_df["nb"].sum()).rename(columns={"nb": "share"})
    cpc_tmp_sub = (cpc_tmp.groupby(f"cpc_{lvl}").sum()/cpc_tmp["nb"].sum()).rename(columns={"nb": "share"})
    out = cpc_df_sub.merge(cpc_tmp_sub, how="left",
                           right_index=True, left_index=True,
                           suffixes=("", "_wiki"))
    out["odds"] = out["share_wiki"]/out["share"]
    out = out.sort_values(by="odds", ascending=False)
    return out
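# The odds ratio computed by `get_odds` reduces to comparing a class's share among
# wiki-citing patents with its share among all patents; a toy numeric sketch (all
# shares here are made up):

```python
# Share of each cpc class among all patents vs. among wiki-citing patents
share_all = {"G06": 0.30, "H01": 0.20}   # hypothetical
share_wiki = {"G06": 0.45, "H01": 0.10}  # hypothetical
odds = {c: round(share_wiki[c] / share_all[c], 3) for c in share_all}
print(odds)  # {'G06': 1.5, 'H01': 0.5} -> G06 overrepresented, H01 underrepresented
```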
# + [markdown] id="ZedW8yiLOO0l"
# #### Class
# + id="L9ueeNMS8lVJ" outputId="94ab15e3-fa4e-453c-8d87-83d991b57a92" colab={"base_uri": "https://localhost:8080/", "height": 378}
get_odds("class").head(10)
# + [markdown] id="71i7J7KOD8Ew"
# > There seems to be something weird happening there. The cpc classes with the highest odds ratios do not exist (anymore) in official tables (e.g. https://www.cooperativepatentclassification.org/cpcSchemeAndDefinitions/table)
# + [markdown] id="pi384uuxQD2K"
# TODO:
#
# - Unravel the cpc code mystery
# - Look at how wikipedia categories and cpc codes relate together
# - Look for characteristics of cited wiki pages (ent, concept, etc) that could help us better understand the citation intent
#
# Current guess is that applicants use wiki definitions to standardize emerging concepts which are still not clearly defined by official organizations/ the scientific world.
|
notebook/frontpage_wiki.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import hashlib # for grading purposes
import math
import numpy as np
import pandas as pd
import requests
import sqlalchemy
from bs4 import BeautifulSoup
# -
# # Authentication
base_url = 'https://api.hackathon-03.batch2.lisbondatascience.org/'
auth_req = base_url+'api/token-auth'
response = requests.get(auth_req,params={'username':'ldsa_student'})
token = response.json()
token
# # Get ALL THE DATA!
# +
headers={'Authorization':'token '+ token['token']}
status=200
page=1
while status == 200:
    query = base_url + 'api/data?page=' + str(page) + '&page_size=2000'
    query_resp = requests.get(query, headers=headers)
    query_result = query_resp.json()
    if query_resp.status_code == 200:
        page_df = pd.DataFrame(query_result['results'])
        if page == 1:
            total = pd.DataFrame(page_df)
        else:
            total = total.append(page_df)
    status = query_resp.status_code
    page += 1
# -
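# The paging loop above can also be written as a small reusable helper; a sketch with a
# stand-in for the API call (no network access, the page contents are hypothetical):

```python
def fetch_pages(fetch_page):
    """Collect results from a paginated endpoint until a page comes back empty."""
    results, page = [], 1
    while True:
        batch = fetch_page(page)
        if not batch:
            break
        results.extend(batch)
        page += 1
    return results

# Stand-in for requests.get(...).json()['results'] on each page.
fake_api = {1: ["a", "b"], 2: ["c"]}
print(fetch_pages(lambda p: fake_api.get(p, [])))  # ['a', 'b', 'c']
```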
# ### Export
total.to_csv('data/api/api_data.csv')
# # Wrangling
total = pd.read_csv('data/api/api_data.csv')
total['emergencystate_mode'] = total['emergencystate_mode'].fillna(0)
total.info()
|
Api_data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Summary: Use song lyrics to predict age ratings.
#
# After basic text preprocessing (tokenization, lemmatization, removing stop words), the processed lyrics are fed into a model pipeline consisting of `TfidfVectorizer` and `Ridge`. `GridSearchCV` is used on a smaller subset to select the parameters: `min_df` for `TfidfVectorizer`, and `alpha` for `Ridge`. These parameters will be used later for song recommendation with a KNN model on lyrics.
#
# The model achieves an $R^2$ score of 0.4.
# # Load Lyrics and Preprocess
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import spotipy
import os
# %matplotlib inline
# +
df = pd.read_csv('../data/all.csv')
df = df.dropna(subset=['lyrics'])
df = df.reset_index(drop=True)
import re
def remove_disclaimer(string):
    s = re.sub(r'(\*)+ This Lyrics is NOT for Commercial use .*\r\n\([0-9]*\)', "", string)
    return s
df['lyrics'] = df['lyrics'].apply(remove_disclaimer)
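# A quick check of the disclaimer-stripping regex on a made-up lyric string:

```python
import re

# Fabricated example text in the disclaimer format targeted above.
text = "*** This Lyrics is NOT for Commercial use ***\r\n(12345)\nHello world"
cleaned = re.sub(r'(\*)+ This Lyrics is NOT for Commercial use .*\r\n\([0-9]*\)', "", text)
print(cleaned.strip())  # Hello world
```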
# +
import spacy
import re
nlp = spacy.load("en_core_web_sm", disable=['parser','ner', 'textcat'])
def my_lemmatizer(doc):
    doc_cleaned = ' '.join(re.findall(r'\b\w\w\w+\b', doc))
    return [w.lemma_.lower() for w in nlp(doc_cleaned)
            if len(w.lemma_) > 2]

def process_text(text, stop_words=set()):
    if not text:
        return ""  # keep the return type consistent (a string)
    text = ' '.join(re.findall(r'\b\w\w\w+\b', text))  # only keep words with at least 3 chars
    doc = nlp(text)
    result = [token.lemma_.lower() for token in doc if token.lemma_.lower() not in stop_words]
    result = " ".join(result)
    return result
stopwords = spacy.lang.en.STOP_WORDS.union(['-pron-', 'oh','ooh','la'])
stopwords = set(my_lemmatizer(' '.join(list(stopwords))))
# +
df['processed_lyrics'] = df['lyrics'].apply(process_text, args=(stopwords,))
df['processed_lyrics'].head(3)
# -
# ## Tuning Parameters using a smaller subset
# +
from sklearn.preprocessing import FunctionTransformer
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.utils import shuffle
from sklearn.tree import DecisionTreeRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
X = df
y = df['Age']
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=42)
bag_of_words_model = Pipeline([
('vectorizer', ColumnTransformer([('tfidf', TfidfVectorizer(min_df=2), 'processed_lyrics')])),
('regressor', Ridge())
], verbose = True)
param_grid = {'regressor__alpha':[0.1, 1, 5],
'vectorizer__tfidf__max_df': [0.95]}
gs_est = GridSearchCV(bag_of_words_model, param_grid, cv=3, n_jobs=2, verbose=5, return_train_score = True)
model = gs_est.fit(X_train[0:len(X_train)//3], y_train[0:len(X_train)//3])  # fit on a third of the data to speed up the search
print (gs_est.best_estimator_)
# -
gs_est.best_estimator_
for i in range(len(gs_est.cv_results_['params'])):
    print(gs_est.cv_results_['params'][i], "score:", gs_est.cv_results_['mean_test_score'][i])
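# For intuition, the tf-idf weight behind `TfidfVectorizer` can be hand-computed on
# toy documents (a simplified weighting — sklearn's version adds smoothing and
# L2 normalization, so the numbers differ):

```python
import math

docs = [["love", "song"], ["sad", "song"]]

def tfidf(term, doc, corpus):
    tf = doc.count(term) / len(doc)          # term frequency within the document
    df = sum(term in d for d in corpus)      # documents containing the term
    idf = math.log(len(corpus) / df)         # rarer terms get a higher weight
    return tf * idf

print(round(tfidf("love", docs[0], docs), 3))  # 0.347
print(round(tfidf("song", docs[0], docs), 3))  # 0.0 (appears in every document)
```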
# # Age-Rating Model, using full dataset
# +
from sklearn.preprocessing import FunctionTransformer
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.utils import shuffle
from sklearn.tree import DecisionTreeRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
X = df
y = df['Age']
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=42)
bag_of_words_model = Pipeline([
('vectorizer', ColumnTransformer([('tfidf', TfidfVectorizer(min_df=2, max_df=0.95), 'processed_lyrics')])),
('regressor', Ridge(1))
], verbose = True)
model = bag_of_words_model.fit(X_train, y_train)
# -
y_pred = model.predict(X_test)
# +
from sklearn.metrics import mean_squared_error, r2_score
mean = np.mean(y_train)
baseline_score = r2_score(y_test, [mean for _ in range(len(y_test))])
baseline_error = mean_squared_error(y_test, [mean for _ in range(len(y_test))],squared=False)
print (f"Baseline model: Use the mean age (age={mean}) as the prediction")
print ("Baseline Error: ", baseline_error)
print ("Baseline R^2 Score: ", baseline_score, "\n")
print ("Lyrics Model: ")
print ("Error (RMSE): ", mean_squared_error(y_test, y_pred, squared=False))
print ("R^2 Score: " , r2_score(y_test, y_pred))
# -
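# The R^2 score printed above measures improvement over the mean-predicting baseline;
# hand-computed here on toy ages (fabricated numbers):

```python
y_true = [20, 30, 40]
y_hat = [22, 28, 41]
mean = sum(y_true) / len(y_true)
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_hat))  # model squared errors
ss_tot = sum((t - mean) ** 2 for t in y_true)              # baseline squared errors
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))  # 0.955
```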
|
notebooks/3.2_Model_Age_Ratings_with_Lyrics.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="Ndo4ERqnwQOU"
# ##### Copyright 2020 The TensorFlow Authors.
# + cellView="form" colab={} colab_type="code" id="MTKwbguKwT4R"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="xfNT-mlFwxVM"
# # Intro to Autoencoders
# + [markdown] colab_type="text" id="0TD5ZrvEMbhZ"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/tutorials/generative/autoencoder">
# <img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
# View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/autoencoder.ipynb">
# <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
# Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/autoencoder.ipynb">
# <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
# View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/generative/autoencoder.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="ITZuApL56Mny"
# This tutorial introduces autoencoders with three examples: the basics, image denoising, and anomaly detection.
#
# An autoencoder is a special type of neural network that is trained to copy its input to its output. For example, given an image of a handwritten digit, an autoencoder first encodes the image into a lower dimensional latent representation, then decodes the latent representation back to an image. An autoencoder learns to compress the data while minimizing the reconstruction error.
#
# To learn more about autoencoders, please consider reading chapter 14 from [Deep Learning](https://www.deeplearningbook.org/) by <NAME>, <NAME>, and <NAME>.
# + [markdown] colab_type="text" id="e1_Y75QXJS6h"
# ## Import TensorFlow and other libraries
# + colab={} colab_type="code" id="YfIk2es3hJEd"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, losses
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.models import Model
# + [markdown] colab_type="text" id="iYn4MdZnKCey"
# ## Load the dataset
# To start, you will train the basic autoencoder using the Fashion MNIST dataset. Each image in this dataset is 28x28 pixels.
# + colab={} colab_type="code" id="YZm503-I_tji"
(x_train, _), (x_test, _) = fashion_mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
print (x_train.shape)
print (x_test.shape)
# + [markdown] colab_type="text" id="VEdCXSwCoKok"
# ## First example: Basic autoencoder
# 
#
# Define an autoencoder with two Dense layers: an `encoder`, which compresses the images into a 64 dimensional latent vector, and a `decoder`, that reconstructs the original image from the latent space.
#
# To define your model, use the [Keras Model Subclassing API](https://www.tensorflow.org/guide/keras/custom_layers_and_models).
#
# + colab={} colab_type="code" id="0MUxidpyChjX"
latent_dim = 64
class Autoencoder(Model):
    def __init__(self, latent_dim):
        super(Autoencoder, self).__init__()
        self.latent_dim = latent_dim
        self.encoder = tf.keras.Sequential([
            layers.Flatten(),
            layers.Dense(latent_dim, activation='relu'),
        ])
        self.decoder = tf.keras.Sequential([
            layers.Dense(784, activation='sigmoid'),
            layers.Reshape((28, 28))
        ])

    def call(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return decoded

autoencoder = Autoencoder(latent_dim)
# + colab={} colab_type="code" id="9I1JlqEIDCI4"
autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())
# + [markdown] colab_type="text" id="7oJSeMTroABs"
# Train the model using `x_train` as both the input and the target. The `encoder` will learn to compress the dataset from 784 dimensions to the latent space, and the `decoder` will learn to reconstruct the original images.
# + colab={} colab_type="code" id="h1RI9OfHDBsK"
autoencoder.fit(x_train, x_train,
epochs=10,
shuffle=True,
validation_data=(x_test, x_test))
# + [markdown] colab_type="text" id="wAM1QBhtoC-n"
# Now that the model is trained, let's test it by encoding and decoding images from the test set.
# + colab={} colab_type="code" id="Pbr5WCj7FQUi"
encoded_imgs = autoencoder.encoder(x_test).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
# + colab={} colab_type="code" id="s4LlDOS6FUA1"
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i])
    plt.title("original")
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i])
    plt.title("reconstructed")
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
# + [markdown] colab_type="text" id="r4gv6G8PoRQE"
# ## Second example: Image denoising
#
#
# 
#
# An autoencoder can also be trained to remove noise from images. In the following section, you will create a noisy version of the Fashion MNIST dataset by applying random noise to each image. You will then train an autoencoder using the noisy image as input, and the original image as the target.
#
# Let's reimport the dataset to omit the modifications made earlier.
# + colab={} colab_type="code" id="gDYHJA2PCQ3m"
(x_train, _), (x_test, _) = fashion_mnist.load_data()
# + colab={} colab_type="code" id="uJZ-TcaqDBr5"
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
print(x_train.shape)
# + [markdown] colab_type="text" id="aPZl_6P65_8R"
# Adding random noise to the images
# + colab={} colab_type="code" id="axSMyxC354fc"
noise_factor = 0.2
x_train_noisy = x_train + noise_factor * tf.random.normal(shape=x_train.shape)
x_test_noisy = x_test + noise_factor * tf.random.normal(shape=x_test.shape)
x_train_noisy = tf.clip_by_value(x_train_noisy, clip_value_min=0., clip_value_max=1.)
x_test_noisy = tf.clip_by_value(x_test_noisy, clip_value_min=0., clip_value_max=1.)
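# What the noise-then-clip step above does, shown on toy pixel values in pure Python
# (fixed "noise" values stand in for the random draws from tf.random.normal):

```python
def clip(v, lo=0.0, hi=1.0):
    # Equivalent of tf.clip_by_value for a single scalar
    return max(lo, min(hi, v))

x = [0.0, 0.5, 1.0]
noise = [3.0, -1.0, 2.0]  # stand-in for random draws
noisy = [xi + 0.2 * ni for xi, ni in zip(x, noise)]
print([round(clip(v), 2) for v in noisy])  # [0.6, 0.3, 1.0]
```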
# + [markdown] colab_type="text" id="wRxHe4XXltNd"
# Plot the noisy images.
#
# + colab={} colab_type="code" id="thKUmbVVCQpt"
n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
    ax = plt.subplot(1, n, i + 1)
    plt.title("original + noise")
    plt.imshow(tf.squeeze(x_test_noisy[i]))
    plt.gray()
plt.show()
# + [markdown] colab_type="text" id="Sy9SY8jGl5aP"
# ### Define a convolutional autoencoder
# + [markdown] colab_type="text" id="vT_BhZngWMwp"
# In this example, you will train a convolutional autoencoder using [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) layers in the `encoder`, and [Conv2DTranspose](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2DTranspose) layers in the `decoder`.
# + colab={} colab_type="code" id="R5KjoIlYCQko"
class Denoise(Model):
    def __init__(self):
        super(Denoise, self).__init__()
        self.encoder = tf.keras.Sequential([
            layers.Input(shape=(28, 28, 1)),
            layers.Conv2D(16, (3, 3), activation='relu', padding='same', strides=2),
            layers.Conv2D(8, (3, 3), activation='relu', padding='same', strides=2)])

        self.decoder = tf.keras.Sequential([
            layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
            layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
            layers.Conv2D(1, kernel_size=(3, 3), activation='sigmoid', padding='same')])

    def call(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return decoded

autoencoder = Denoise()
# + colab={} colab_type="code" id="QYKbiDFYCQfj"
autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())
# + colab={} colab_type="code" id="IssFr1BNCQX3"
autoencoder.fit(x_train_noisy, x_train,
epochs=10,
shuffle=True,
validation_data=(x_test_noisy, x_test))
# + [markdown] colab_type="text" id="G85xUVBGTAKp"
# Let's take a look at a summary of the encoder. Notice how the images are downsampled from 28x28 to 7x7.
# + colab={} colab_type="code" id="oEpxlX6sTEQz"
autoencoder.encoder.summary()
# + [markdown] colab_type="text" id="DDZBfMx1UtXx"
# The decoder upsamples the images back from 7x7 to 28x28.
# + colab={} colab_type="code" id="pbeQtYMaUpro"
autoencoder.decoder.summary()
# + [markdown] colab_type="text" id="A7-VAuEy_N6M"
# Plotting both the noisy images and the denoised images produced by the autoencoder.
# + colab={} colab_type="code" id="t5IyPi1fCQQz"
encoded_imgs = autoencoder.encoder(x_test).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
# + colab={} colab_type="code" id="sfxr9NdBCP_x"
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original + noise
    ax = plt.subplot(2, n, i + 1)
    plt.title("original + noise")
    plt.imshow(tf.squeeze(x_test_noisy[i]))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display reconstruction
    bx = plt.subplot(2, n, i + n + 1)
    plt.title("reconstructed")
    plt.imshow(tf.squeeze(decoded_imgs[i]))
    plt.gray()
    bx.get_xaxis().set_visible(False)
    bx.get_yaxis().set_visible(False)
plt.show()
# + [markdown] colab_type="text" id="ErGrTnWHoUYl"
# ## Third example: Anomaly detection
#
# ## Overview
#
#
# In this example, you will train an autoencoder to detect anomalies on the [ECG5000 dataset](http://www.timeseriesclassification.com/description.php?Dataset=ECG5000). This dataset contains 5,000 [Electrocardiograms](https://en.wikipedia.org/wiki/Electrocardiography), each with 140 data points. You will use a simplified version of the dataset, where each example has been labeled either `0` (corresponding to an abnormal rhythm), or `1` (corresponding to a normal rhythm). You are interested in identifying the abnormal rhythms.
#
# Note: This is a labeled dataset, so you could phrase this as a supervised learning problem. The goal of this example is to illustrate anomaly detection concepts you can apply to larger datasets, where you do not have labels available (for example, if you had many thousands of normal rhythms, and only a small number of abnormal rhythms).
#
# How will you detect anomalies using an autoencoder? Recall that an autoencoder is trained to minimize reconstruction error. You will train an autoencoder on the normal rhythms only, then use it to reconstruct all the data. Our hypothesis is that the abnormal rhythms will have higher reconstruction error. You will then classify a rhythm as an anomaly if the reconstruction error surpasses a fixed threshold.
# + [markdown] colab_type="text" id="i5estNaur_Mh"
# ### Load ECG data
# + [markdown] colab_type="text" id="y35nsXLPsDNX"
# The dataset you will use is based on one from [timeseriesclassification.com](http://www.timeseriesclassification.com/description.php?Dataset=ECG5000).
#
# + colab={} colab_type="code" id="KmKRDJWgsFYa"
# Download the dataset
dataframe = pd.read_csv('http://storage.googleapis.com/download.tensorflow.org/data/ecg.csv', header=None)
raw_data = dataframe.values
dataframe.head()
# + colab={} colab_type="code" id="UmuCPVYKsKKx"
# The last element contains the labels
labels = raw_data[:, -1]
# The other data points are the electrocardiogram data
data = raw_data[:, 0:-1]
train_data, test_data, train_labels, test_labels = train_test_split(
data, labels, test_size=0.2, random_state=21
)
# + [markdown] colab_type="text" id="byK2vP7hsMbz"
# Normalize the data to `[0,1]`.
#
# + colab={} colab_type="code" id="tgMZVWRKsPx6"
min_val = tf.reduce_min(train_data)
max_val = tf.reduce_max(train_data)
train_data = (train_data - min_val) / (max_val - min_val)
test_data = (test_data - min_val) / (max_val - min_val)
train_data = tf.cast(train_data, tf.float32)
test_data = tf.cast(test_data, tf.float32)
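# The min-max scaling applied above, shown on toy values:

```python
data = [2.0, 4.0, 10.0]
lo, hi = min(data), max(data)
scaled = [(v - lo) / (hi - lo) for v in data]
print(scaled)  # [0.0, 0.25, 1.0]
```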
# + [markdown] colab_type="text" id="BdSYr2IPsTiz"
# You will train the autoencoder using only the normal rhythms, which are labeled in this dataset as `1`. Separate the normal rhythms from the abnormal rhythms.
# + colab={} colab_type="code" id="VvK4NRe8sVhE"
train_labels = train_labels.astype(bool)
test_labels = test_labels.astype(bool)
normal_train_data = train_data[train_labels]
normal_test_data = test_data[test_labels]
anomalous_train_data = train_data[~train_labels]
anomalous_test_data = test_data[~test_labels]
# + [markdown] colab_type="text" id="wVcTBDo-CqFS"
# Plot a normal ECG.
# + colab={} colab_type="code" id="ZTlMIrpmseYe"
plt.grid()
plt.plot(np.arange(140), normal_train_data[0])
plt.title("A Normal ECG")
plt.show()
# + [markdown] colab_type="text" id="QpI9by2ZA0NN"
# Plot an anomalous ECG.
# + colab={} colab_type="code" id="zrpXREF2siBr"
plt.grid()
plt.plot(np.arange(140), anomalous_train_data[0])
plt.title("An Anomalous ECG")
plt.show()
# + [markdown] colab_type="text" id="0DS6QKZJslZz"
# ### Build the model
# + colab={} colab_type="code" id="bf6owZQDsp9y"
class AnomalyDetector(Model):
    def __init__(self):
        super(AnomalyDetector, self).__init__()
        self.encoder = tf.keras.Sequential([
            layers.Dense(32, activation="relu"),
            layers.Dense(16, activation="relu"),
            layers.Dense(8, activation="relu")])

        self.decoder = tf.keras.Sequential([
            layers.Dense(16, activation="relu"),
            layers.Dense(32, activation="relu"),
            layers.Dense(140, activation="sigmoid")])

    def call(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return decoded

autoencoder = AnomalyDetector()
# + colab={} colab_type="code" id="gwRpBBbg463S"
autoencoder.compile(optimizer='adam', loss='mae')
# + [markdown] colab_type="text" id="zuTy60STBEy4"
# Notice that the autoencoder is trained using only the normal ECGs, but is evaluated using the full test set.
# + colab={} colab_type="code" id="V6NFSs-jsty2"
history = autoencoder.fit(normal_train_data, normal_train_data,
epochs=20,
batch_size=512,
validation_data=(test_data, test_data),
shuffle=True)
# + colab={} colab_type="code" id="OEexphFwwTQS"
plt.plot(history.history["loss"], label="Training Loss")
plt.plot(history.history["val_loss"], label="Validation Loss")
plt.legend()
# + [markdown] colab_type="text" id="ceI5lKv1BT-A"
# You will soon classify an ECG as anomalous if the reconstruction error is greater than one standard deviation from the normal training examples. First, let's plot a normal ECG from the training set, the reconstruction after it's encoded and decoded by the autoencoder, and the reconstruction error.
# + colab={} colab_type="code" id="hmsk4DuktxJ2"
encoded_imgs = autoencoder.encoder(normal_test_data).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
plt.plot(normal_test_data[0],'b')
plt.plot(decoded_imgs[0],'r')
plt.fill_between(np.arange(140), decoded_imgs[0], normal_test_data[0], color='lightcoral' )
plt.legend(labels=["Input", "Reconstruction", "Error"])
plt.show()
# + [markdown] colab_type="text" id="ocA_q9ufB_aF"
# Create a similar plot, this time for an anomalous test example.
# + colab={} colab_type="code" id="vNFTuPhLwTBn"
encoded_imgs = autoencoder.encoder(anomalous_test_data).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
plt.plot(anomalous_test_data[0],'b')
plt.plot(decoded_imgs[0],'r')
plt.fill_between(np.arange(140), decoded_imgs[0], anomalous_test_data[0], color='lightcoral' )
plt.legend(labels=["Input", "Reconstruction", "Error"])
plt.show()
# + [markdown] colab_type="text" id="ocimg3MBswdS"
# ### Detect anomalies
# + [markdown] colab_type="text" id="Xnh8wmkDsypN"
# Detect anomalies by calculating whether the reconstruction loss is greater than a fixed threshold. In this tutorial, you will calculate the mean absolute error for normal examples from the training set, then classify future examples as anomalous if the reconstruction error is higher than one standard deviation from the training set.
#
# + [markdown] colab_type="text" id="TeuT8uTA5Y_w"
# Plot the reconstruction error on normal ECGs from the training set.
# + colab={} colab_type="code" id="gwLuxrb-s0ss"
reconstructions = autoencoder.predict(normal_train_data)
train_loss = tf.keras.losses.mae(reconstructions, normal_train_data)
plt.hist(train_loss, bins=50)
plt.xlabel("Train loss")
plt.ylabel("No of examples")
plt.show()
# + [markdown] colab_type="text" id="mh-3ChEF5hog"
# Choose a threshold value that is one standard deviation above the mean.
# + colab={} colab_type="code" id="82hkl0Chs3P_"
threshold = np.mean(train_loss) + np.std(train_loss)
print("Threshold: ", threshold)
# + [markdown] colab_type="text" id="uEGlA1Be50Nj"
# Note: There are other strategies you could use to select a threshold value above which test examples should be classified as anomalous; the correct approach will depend on your dataset. You can learn more with the links at the end of this tutorial.
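One alternative strategy mentioned above is to pick the threshold that maximizes F1 on labeled validation losses. A minimal NumPy sketch of that idea — the `losses` and `is_normal` arrays here are toy stand-ins, not the tutorial's variables:

```python
import numpy as np

def best_f1_threshold(losses, is_normal):
    """Scan candidate thresholds, return the one maximizing F1.

    losses    - reconstruction error per example
    is_normal - boolean array, True for normal examples
    """
    best_t, best_f1 = None, -1.0
    for t in np.unique(losses):
        pred_normal = losses < t  # predict "normal" below the threshold
        tp = np.sum(pred_normal & is_normal)
        fp = np.sum(pred_normal & ~is_normal)
        fn = np.sum(~pred_normal & is_normal)
        if tp == 0:
            continue
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

# Toy data: normal examples have low loss, anomalies high loss.
losses = np.array([0.1, 0.2, 0.3, 0.8, 0.9])
labels = np.array([True, True, True, False, False])
t, f1 = best_f1_threshold(losses, labels)
```

On this toy data the scan picks the threshold that cleanly separates the two groups.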
# + [markdown] colab_type="text" id="zpLSDAeb51D_"
# If you examine the reconstruction error for the anomalous examples in the test set, you'll notice most have greater reconstruction error than the threshold. By varying the threshold, you can adjust the [precision](https://developers.google.com/machine-learning/glossary#precision) and [recall](https://developers.google.com/machine-learning/glossary#recall) of your classifier.
# + colab={} colab_type="code" id="sKVwjQK955Wy"
reconstructions = autoencoder.predict(anomalous_test_data)
test_loss = tf.keras.losses.mae(reconstructions, anomalous_test_data)
plt.hist(test_loss, bins=50)
plt.xlabel("Test loss")
plt.ylabel("No of examples")
plt.show()
# + [markdown] colab_type="text" id="PFVk_XGE6AX2"
# Classify an ECG as an anomaly if the reconstruction error is greater than the threshold.
# + colab={} colab_type="code" id="mkgJZfhh6CHr"
def predict(model, data, threshold):
reconstructions = model(data)
loss = tf.keras.losses.mae(reconstructions, data)
return tf.math.less(loss, threshold)
def print_stats(predictions, labels):
  print("Accuracy = {}".format(accuracy_score(labels, predictions)))
  print("Precision = {}".format(precision_score(labels, predictions)))
  print("Recall = {}".format(recall_score(labels, predictions)))
# + colab={} colab_type="code" id="sOcfXfXq6FBd"
preds = predict(autoencoder, test_data, threshold)
print_stats(preds, test_labels)
# + [markdown] colab_type="text" id="HrJRef8Ln945"
# ## Next steps
#
# To learn more about anomaly detection with autoencoders, check out this excellent [interactive example](https://anomagram.fastforwardlabs.com/#/) built with TensorFlow.js by <NAME>. For a real-world use case, you can learn how [Airbus Detects Anomalies in ISS Telemetry Data](https://blog.tensorflow.org/2020/04/how-airbus-detects-anomalies-iss-telemetry-data-tfx.html) using TensorFlow. To learn more about the basics, consider reading this [blog post](https://blog.keras.io/building-autoencoders-in-keras.html) by <NAME>. For more details, check out chapter 14 from [Deep Learning](https://www.deeplearningbook.org/) by <NAME>, <NAME>, and <NAME>.
#
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorch
#
# We will do this exercise in Google Colab - https://colab.research.google.com/
# Google Colab lets you run notebook code in the Google cloud, where a free GPU is available!
#
# The course authors thank Google and hope the party never ends.
#
# Tutorial on setting up Google Colab:
# https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d
# (No need to install Keras - the notebook will install PyTorch itself)
#
# + colab={} colab_type="code" id="FcXBeP1O7cnY"
# Install PyTorch and download data
# #!pip3 install torch torchvision
# #!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat
# + colab={} colab_type="code" id="-afwWw-Q85vD"
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dset
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import transforms
# + colab={} colab_type="code" id="NNU-OD9O9ltP"
device = torch.device("cuda:0") # Let's make sure GPU is available!
# -
PATH_TO_DATA = 'D:/Py/DataFrames/dlcourse_ai/SVHN'
# # Load the data
# + colab={} colab_type="code" id="YAvkoRx-9FsP"
# First, lets load the dataset
data_train = dset.SVHN(PATH_TO_DATA,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
])
)
data_test = dset.SVHN(PATH_TO_DATA, split='test',
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47],
std=[0.20,0.20,0.20])
]))
# -
# Split the data into training and validation sets.
#
# For details, see https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
# + colab={} colab_type="code" id="YRnr8CPg7Hli"
batch_size = 64
data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(data_train, batch_size=batch_size,
sampler=val_sampler)
# + colab={} colab_type="code" id="LyYvt-T67PBG"
# We'll use a special helper module to shape it into a flat tensor
class Flattener(nn.Module):
def forward(self, x):
batch_size, *_ = x.shape
return x.view(batch_size, -1)
# -
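`Flattener` keeps the batch dimension and collapses everything else. The same reshape expressed in plain NumPy terms, with illustrative shapes that match what the model below produces right before its `Linear` layer:

```python
import numpy as np

# A fake batch of 4 feature maps: 64 channels of spatial size 2x2.
x = np.zeros((4, 64, 2, 2))
flat = x.reshape(x.shape[0], -1)  # keep the batch dim, flatten the rest
print(flat.shape)  # (4, 256)
```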
# Let's build the simplest network using the new layer types:
# Convolutional - `nn.Conv2d`
# MaxPool - `nn.MaxPool2d`
# + colab={} colab_type="code" id="w9SFVGZP7SQd"
nn_model = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
nn.Conv2d(64, 64, 3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(4),
Flattener(),
nn.Linear(64*2*2, 10),
)
nn_model.type(torch.cuda.FloatTensor)
nn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(nn_model.parameters(), lr=1e-1, weight_decay=1e-4)
# -
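The `64*2*2` input size of the final `Linear` layer follows from the spatial arithmetic: a 3x3 convolution with `padding=1` preserves the 32x32 SVHN resolution, and each `MaxPool2d(4)` divides the side by 4. A quick sanity check in plain Python:

```python
def conv_out(size, kernel, padding=0, stride=1):
    # Standard convolution output-size formula
    return (size + 2 * padding - kernel) // stride + 1

size = 32                            # SVHN images are 32x32
size = conv_out(size, 3, padding=1)  # conv keeps 32
size //= 4                           # MaxPool2d(4) -> 8
size = conv_out(size, 3, padding=1)  # conv keeps 8
size //= 4                           # MaxPool2d(4) -> 2
print(64 * size * size)              # 256 = 64*2*2, the Linear input size
```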
# Restore the `compute_accuracy` function from the previous assignment.
# The only difference here is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
for (x, y) in loader:
x_gpu, y_gpu = x.to(device), y.to(device)
prediction = model(x_gpu)
indices = torch.argmax(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y_gpu.shape[0]
accuracy = float(correct_samples) / total_samples
return accuracy
# + colab={} colab_type="code" id="2ek3KVQK7hJ6"
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
        ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so the batch count is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
# -
loss_history, train_history, val_history = train_model(nn_model, train_loader, val_loader, loss, optimizer, 5)
# + [markdown] colab_type="text" id="6a-3a1ZFGEw_"
# # Data augmentation
#
# When working with images, one of the most important techniques is data augmentation - generating additional training data from the original samples.
# This lets us "enlarge" the training set, which leads to better network performance.
# It is important that the augmented data stay similar to what can occur in real life; otherwise augmentation loses its benefit and can even hurt the network.
#
# PyTorch ships with several such algorithms, called `transforms`. You can read more about them here -
# https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms
#
# Below we use the following transforms:
# - ColorJitter - random color change
# - RandomHorizontalFlip - horizontal flip with 50% probability
# - RandomVerticalFlip - vertical flip with 50% probability
# - RandomRotation - random rotation
# + colab={} colab_type="code" id="jCWMUWmr7t5g"
tfs = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(50, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47], std=[0.20,0.20,0.20])
])
# Create augmented train dataset
data_aug_train = dset.SVHN(PATH_TO_DATA, transform=tfs)
train_aug_loader = torch.utils.data.DataLoader(data_aug_train, batch_size=batch_size,
sampler=train_sampler)
# -
# Visualize the augmentation results (in general, looking at the generated data is always very useful).
# + colab={} colab_type="code" id="YlJJEro1KZ45"
# TODO: Visualize some augmented images!
# hint: you can create new datasets and loaders to accomplish this
# Based on the visualizations, should we keep all the augmentations?
tfs = transforms.Compose([
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
data_aug_vis = dset.SVHN(PATH_TO_DATA, transform=tfs)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(data_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
# + [markdown] colab_type="text" id="o2LrmsYHoguB"
# Are all the augmentations equally useful on this dataset? Could some of them confuse the model?
#
# Select only the valid ones
# +
# TODO: Add good augmentations
tfs_for_vis = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50, brightness=0.50, contrast=0.50),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
])
train_aug_vis = dset.SVHN(PATH_TO_DATA, transform=tfs_for_vis)
plt.figure(figsize=(30, 3))
for i, (x, y) in enumerate(train_aug_vis):
if i == 10:
break
plt.subplot(1, 10, i+1)
plt.grid(False)
plt.imshow(x)
plt.axis('off')
# + colab={} colab_type="code" id="evro9ksXGs9u"
tfs_for_train = transforms.Compose([
transforms.ColorJitter(hue=.50, saturation=.50, brightness=0.50, contrast=0.50),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.ToTensor(),
transforms.Normalize(mean=[0.43,0.44,0.47], std=[0.20,0.20,0.20])
])
# TODO create new instances of loaders with the augmentations you chose
train_aug_data = dset.SVHN(PATH_TO_DATA, transform=tfs_for_train)
train_aug_loader = torch.utils.data.DataLoader(train_aug_data, batch_size=batch_size,
sampler=train_sampler)
# + colab={} colab_type="code" id="PeO6Zw0DHqPR"
# Finally, let's train with augmentations!
# Note we shouldn't use augmentations on validation
loss_history, train_history, val_history = train_model(nn_model, train_aug_loader, val_loader, loss, optimizer, 5)
# + [markdown] colab_type="text" id="r0bcioK6JBDK"
# # LeNet
# Let's try to implement the classic convolutional network architecture proposed by Yann LeCun in 1998. In its day it achieved impressive results on MNIST; let's see how it handles SVHN.
# It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf); try reading the key parts and implementing the proposed architecture in PyTorch.
#
# You do **not** need to implement the LeNet layers and loss function that are missing from PyTorch - just take their sizes and translate them into the Convolutional, Pooling and Fully Connected layers we already know.
#
# If the paper is not very clear, you can simply google LeNet and work out the details :)
# + colab={} colab_type="code" id="ieEzZUglJAUB"
# TODO: Implement LeNet-like architecture for SVHN task
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(400, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10),
    # no LogSoftmax here: nn.CrossEntropyLoss already applies log-softmax internally
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4)
# + colab={} colab_type="code" id="WMmaPfdeKk9H"
# Let's train it!
loss_history, train_history, val_history = train_model(lenet_model, train_aug_loader, val_loader, loss, optimizer, 10)
# + [markdown] colab_type="text" id="u_O9qiYySvuj"
# # Hyperparameter tuning
# + colab={} colab_type="code" id="i6mhfdQ9K-N3"
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization
# We also encourage you to try different optimizers as well
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy'])
learning_rates = [1e0, 1e-1, 1e-2, 1e-3, 1e-4]
regularizers = [1e-3, 1e-4, 1e-5, 1e-7]
batch_size = 64
epoch_num = 5
# Record all the runs here
# Key should be Hyperparams and values should be RunResult
run_record = {}
# TODO: Use grid search or random search and record all runs in run_record dictionnary
# Important: perform search in logarithmic space!
i = 0
for lr in learning_rates:
for reg in regularizers:
i += 1
print('Cycle: {0} | LR: {1} | REG: {2}'.format(i, lr, reg))
lenet_model = nn.Sequential(
nn.Conv2d(3, 6, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 5),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(400, 120),
nn.ReLU(inplace=True),
nn.Linear(120, 84),
nn.ReLU(inplace=True),
nn.Linear(84, 10),
            # no LogSoftmax here: nn.CrossEntropyLoss already applies log-softmax internally
)
lenet_model.type(torch.cuda.FloatTensor)
lenet_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
optimizer = optim.SGD(lenet_model.parameters(), lr=lr, weight_decay=reg)
loss_history, train_history, val_history = train_model(lenet_model,
train_aug_loader, val_loader,
loss, optimizer, epoch_num)
hp = Hyperparams(lr, reg)
        rr = RunResult(lenet_model, train_history, val_history, val_history[-1])
run_record[hp] = rr
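The grid above enumerates hand-picked values; the "search in logarithmic space" requirement can also be met with random search, sampling each hyperparameter uniformly in log10 space. A small NumPy sketch with illustrative ranges:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_log_uniform(low, high, rng):
    """Draw one value uniformly in log10 space between low and high."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high))

# For example, five random (learning_rate, reg) pairs for random search:
candidates = [(sample_log_uniform(1e-4, 1e0, rng),
               sample_log_uniform(1e-7, 1e-3, rng))
              for _ in range(5)]
```

Each pair could then be trained and recorded in `run_record` exactly as in the grid-search loop above.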
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Y6xExdw8JB1l" outputId="a9ad86f8-3e29-45cc-d33f-e6170018a4ed"
best_val_accuracy = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_val_accuracy is None or best_val_accuracy < run_result.final_val_accuracy:
best_val_accuracy = run_result.final_val_accuracy
best_hyperparams = hyperparams
best_run = run_result
print("Best validation accuracy: %4.2f, best hyperparams: %s" % (best_val_accuracy, best_hyperparams))
# + [markdown] colab_type="text" id="LOmsR0uVgtgf"
# # Free-form exercise - let's catch up with and overtake LeNet!
#
# Try to find an architecture and training settings that beat our baselines.
#
# Things you can and should try:
# - BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch)
# - Changing the number of layers and their width
# - Changing the number of training epochs
# - Trying other augmentations
# + colab={} colab_type="code" id="tSVhD747icoc"
# TODO: Overtake LeNet!
in_channels = 3
conv1_channels = 18
conv2_channels = 24
kernel_size = 5
linear1_size = 24*5*5
linear2_size = 200
num_output_classes = 10
cnn_model = nn.Sequential(
nn.Conv2d(in_channels, conv1_channels, kernel_size),
nn.BatchNorm2d(conv1_channels),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
nn.Conv2d(conv1_channels, conv2_channels, kernel_size),
nn.BatchNorm2d(conv2_channels),
nn.ReLU(inplace=True),
nn.MaxPool2d(2),
Flattener(),
nn.Linear(linear1_size, linear2_size),
nn.BatchNorm1d(linear2_size),
nn.ReLU(inplace=True),
nn.Linear(linear2_size, num_output_classes)
)
cnn_model.type(torch.cuda.FloatTensor)
cnn_model.to(device)
loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor)
#optimizer = optim.Adam(cnn_model.parameters(), lr=1e-2, weight_decay=1e-3, amsgrad=True)
optimizer = optim.SGD(cnn_model.parameters(), lr=1e-2, weight_decay=1e-3)
# -
loss_history, train_history, val_history = train_model(cnn_model, train_aug_loader, val_loader, loss, optimizer, 15)
# and 5 epochs more
loss_history, train_history, val_history = train_model(cnn_model, train_aug_loader, val_loader, loss, optimizer, 5)
# + [markdown] colab_type="text" id="ubeKgBcnhx7N"
# # The final chord - evaluate the best model on the test set
#
# For a change of pace, write the code to run the model on the test set yourself.
#
# In the end you should have trained a model that achieves more than **90%** accuracy on the test set.
# As usual, the best result in the group earns extra credit!
# + colab={} colab_type="code" id="EIqM1kdeh-hd"
# TODO Write the code to compute accuracy on test set
test_loader = torch.utils.data.DataLoader(data_test, batch_size=batch_size)
final_test_accuracy = compute_accuracy(cnn_model, test_loader)
print("Final test accuracy -", final_test_accuracy)
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="sUtoed20cRJJ"
# # How to Load CSV and Numpy File Types in TensorFlow 2.0
#
#
#
# ## Learning Objectives
#
# 1. Load a CSV file into a `tf.data.Dataset`.
# 2. Load Numpy data
#
#
#
#
# ## Introduction
#
# In this lab, you load CSV data from a file into a `tf.data.Dataset`. This lab also provides an example of loading data from NumPy arrays into a `tf.data.Dataset`.
#
# Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review [the solution notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/load_diff_filedata.ipynb).
# + [markdown] colab_type="text" id="fgZ9gjmPfSnK"
# ## Load necessary libraries
# We will start by importing the necessary libraries for this lab.
# + colab={} colab_type="code" id="baYFZMW_bJHh"
import functools
import numpy as np
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
# + colab={} colab_type="code" id="Ncf5t6tgL5ZI"
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
# + colab={} colab_type="code" id="4ONE94qulk6S"
# Make numpy values easier to read.
np.set_printoptions(precision=3, suppress=True)
# + [markdown] colab_type="text" id="Wuqj601Qw0Ml"
# ## Load data
#
# This section provides an example of how to load CSV data from a file into a `tf.data.Dataset`. The data used in this tutorial are taken from the Titanic passenger list. The model will predict the likelihood a passenger survived based on characteristics like age, gender, ticket class, and whether the person was traveling alone.
#
# To start, let's look at the top of the CSV file to see how it is formatted.
# + colab={} colab_type="code" id="54Dv7mCrf9Yw"
# !head {train_file_path}
# + [markdown] colab_type="text" id="jC9lRhV-q_R3"
# You can [load this using pandas](pandas_dataframe.ipynb), and pass the NumPy arrays to TensorFlow. If you need to scale up to a large set of files, or need a loader that integrates with [TensorFlow and tf.data](../../guide/data.ipynb) then use the `tf.data.experimental.make_csv_dataset` function:
# + [markdown] colab_type="text" id="67mfwr4v-mN_"
# The only column you need to identify explicitly is the one with the value that the model is intended to predict.
# + colab={} colab_type="code" id="iXROZm5f3V4E"
# TODO 1: Add string name for label column
LABEL_COLUMN = 'survived'
LABELS = [0, 1]
# + [markdown] colab_type="text" id="t4N-plO4tDXd"
# Now read the CSV data from the file and create a dataset.
#
# (For the full documentation, see `tf.data.experimental.make_csv_dataset`)
#
# + colab={} colab_type="code" id="yIbUscB9sqha"
# get_dataset() reads a CSV file into a tf.data.Dataset
def get_dataset(file_path, **kwargs):
# TODO 2
# Use `tf.data.experimental.make_csv_dataset()` to read CSV files into a dataset.
dataset = tf.data.experimental.make_csv_dataset(
file_path,
batch_size=5, # Artificially small to make examples easier to show.
label_name=LABEL_COLUMN,
na_value="?",
num_epochs=1,
ignore_errors=True,
**kwargs)
return dataset
raw_train_data = get_dataset(train_file_path)
raw_test_data = get_dataset(test_file_path)
# + colab={} colab_type="code" id="v4oMO9MIxgTG"
def show_batch(dataset):
for batch, label in dataset.take(1):
for key, value in batch.items():
print("{:20s}: {}".format(key,value.numpy()))
# + [markdown] colab_type="text" id="vHUQFKoQI6G7"
# Each item in the dataset is a batch, represented as a tuple of (*many examples*, *many labels*). The data from the examples is organized in column-based tensors (rather than row-based tensors), each with as many elements as the batch size (5 in this case).
#
# It might help to see this yourself.
# + colab={} colab_type="code" id="HjrkJROoxoll"
show_batch(raw_train_data)
# + [markdown] colab_type="text" id="YOYKQKmMj3D6"
# As you can see, the columns in the CSV are named. The dataset constructor will pick these names up automatically. If the file you are working with does not contain the column names in the first line, pass them in a list of strings to the `column_names` argument in the `make_csv_dataset` function.
# + colab={} colab_type="code" id="2Av8_9L3tUg1"
CSV_COLUMNS = ['survived', 'sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'class', 'deck', 'embark_town', 'alone']
temp_dataset = get_dataset(train_file_path, column_names=CSV_COLUMNS)
show_batch(temp_dataset)
# + [markdown] colab_type="text" id="gZfhoX7bR9u4"
# This example is going to use all the available columns. If you need to omit some columns from the dataset, create a list of just the columns you plan to use, and pass it into the (optional) `select_columns` argument of the constructor.
#
# + colab={} colab_type="code" id="S1TzSkUKwsNP"
SELECT_COLUMNS = ['survived', 'age', 'n_siblings_spouses', 'class', 'deck', 'alone']
temp_dataset = get_dataset(train_file_path, select_columns=SELECT_COLUMNS)
show_batch(temp_dataset)
# + [markdown] colab_type="text" id="9cryz31lxs3e"
# ## Data preprocessing
#
# A CSV file can contain a variety of data types. Typically you want to convert from those mixed types to a fixed length vector before feeding the data into your model.
#
# TensorFlow has a built-in system for describing common input conversions: `tf.feature_column`, see [this tutorial](https://www.tensorflow.org/tutorials/structured_data/feature_columns) for details.
#
#
# You can preprocess your data using any tool you like (like [nltk](https://www.nltk.org/) or [sklearn](https://scikit-learn.org/stable/)), and just pass the processed output to TensorFlow.
#
#
# The primary advantage of doing the preprocessing inside your model is that when you export the model it includes the preprocessing. This way you can pass the raw data directly to your model.
# + [markdown] colab_type="text" id="9AsbaFmCeJtF"
# ### Continuous data
# + [markdown] colab_type="text" id="Xl0Q0DcfA_rt"
# If your data is already in an appropriate numeric format, you can pack the data into a vector before passing it off to the model:
# + colab={} colab_type="code" id="4Yfji3J5BMxz"
SELECT_COLUMNS = ['survived', 'age', 'n_siblings_spouses', 'parch', 'fare']
DEFAULTS = [0, 0.0, 0.0, 0.0, 0.0]
temp_dataset = get_dataset(train_file_path,
select_columns=SELECT_COLUMNS,
column_defaults = DEFAULTS)
show_batch(temp_dataset)
# + colab={} colab_type="code" id="zEUhI8kZCfq8"
example_batch, labels_batch = next(iter(temp_dataset))
# + [markdown] colab_type="text" id="IP45_2FbEKzn"
# Here's a simple function that will pack together all the columns:
# + colab={} colab_type="code" id="JQ0hNSL8CC3a"
def pack(features, label):
return tf.stack(list(features.values()), axis=-1), label
# + [markdown] colab_type="text" id="75LA9DisEIoE"
# Apply this to each element of the dataset:
# + colab={} colab_type="code" id="VnP2Z2lwCTRl"
packed_dataset = temp_dataset.map(pack)
for features, labels in packed_dataset.take(1):
print(features.numpy())
print()
print(labels.numpy())
# + [markdown] colab_type="text" id="1VBvmaFrFU6J"
# If you have mixed datatypes you may want to separate out these simple-numeric fields. The `tf.feature_column` api can handle them, but this incurs some overhead and should be avoided unless really necessary. Switch back to the mixed dataset:
# + colab={} colab_type="code" id="ad-IQ_JPFQge"
show_batch(raw_train_data)
# + colab={} colab_type="code" id="HSrYNKKcIdav"
example_batch, labels_batch = next(iter(temp_dataset))
# + [markdown] colab_type="text" id="p5VtThKfGPaQ"
# So define a more general preprocessor that selects a list of numeric features and packs them into a single column:
# + colab={} colab_type="code" id="5DRishYYGS-m"
class PackNumericFeatures(object):
def __init__(self, names):
self.names = names
def __call__(self, features, labels):
numeric_features = [features.pop(name) for name in self.names]
numeric_features = [tf.cast(feat, tf.float32) for feat in numeric_features]
numeric_features = tf.stack(numeric_features, axis=-1)
features['numeric'] = numeric_features
return features, labels
# + colab={} colab_type="code" id="1SeZka9AHfqD"
NUMERIC_FEATURES = ['age','n_siblings_spouses','parch', 'fare']
packed_train_data = raw_train_data.map(
PackNumericFeatures(NUMERIC_FEATURES))
packed_test_data = raw_test_data.map(
PackNumericFeatures(NUMERIC_FEATURES))
# + colab={} colab_type="code" id="wFrw0YobIbUB"
show_batch(packed_train_data)
# + colab={} colab_type="code" id="_EPUS8fPLUb1"
example_batch, labels_batch = next(iter(packed_train_data))
# + [markdown] colab_type="text" id="o2maE8d2ijsq"
# #### Data Normalization
#
# Continuous data should always be normalized.
# + colab={} colab_type="code" id="WKT1ASWpwH46"
import pandas as pd
desc = pd.read_csv(train_file_path)[NUMERIC_FEATURES].describe()
desc
# + colab={} colab_type="code" id="cHHstcKPsMXM"
# TODO 1
MEAN = np.array(desc.T['mean'])
STD = np.array(desc.T['std'])
# + colab={} colab_type="code" id="REKqO_xHPNx0"
def normalize_numeric_data(data, mean, std):
# Center the data
return (data-mean)/std
# -
print(MEAN, STD)
# + [markdown] colab_type="text" id="VPsoMUgRCpUM"
# Now create a numeric column. The `tf.feature_columns.numeric_column` API accepts a `normalizer_fn` argument, which will be run on each batch.
#
# Bind the `MEAN` and `STD` to the normalizer fn using [`functools.partial`](https://docs.python.org/3/library/functools.html#functools.partial).
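`functools.partial` simply pre-binds arguments to a function; a minimal standalone illustration of the same pattern, with made-up mean/std values:

```python
import functools

def normalize(data, mean, std):
    # Same z-score formula as normalize_numeric_data above
    return (data - mean) / std

# Bind mean/std once; the resulting callable takes only `data`,
# which is the single-argument signature `normalizer_fn` expects.
normalizer = functools.partial(normalize, mean=10.0, std=2.0)
print(normalizer(14.0))  # -> 2.0
```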
# + colab={} colab_type="code" id="Bw0I35xRS57V"
# See what you just created.
normalizer = functools.partial(normalize_numeric_data, mean=MEAN, std=STD)
numeric_column = tf.feature_column.numeric_column('numeric', normalizer_fn=normalizer, shape=[len(NUMERIC_FEATURES)])
numeric_columns = [numeric_column]
numeric_column
# + [markdown] colab_type="text" id="HZxcHXc6LCa7"
# When you train the model, include this feature column to select and center this block of numeric data:
# + colab={} colab_type="code" id="b61NM76Ot_kb"
example_batch['numeric']
# + colab={} colab_type="code" id="j-r_4EAJAZoI"
numeric_layer = tf.keras.layers.DenseFeatures(numeric_columns)
numeric_layer(example_batch).numpy()
# + [markdown] colab_type="text" id="M37oD2VcCO4R"
# The mean based normalization used here requires knowing the means of each column ahead of time.
# + [markdown] colab_type="text" id="tSyrkSQwYHKi"
# ### Categorical data
#
# Some of the columns in the CSV data are categorical columns. That is, the content should be one of a limited set of options.
#
# Use the `tf.feature_column` API to create a collection with a `tf.feature_column.indicator_column` for each categorical column.
#
# + colab={} colab_type="code" id="mWDniduKMw-C"
CATEGORIES = {
'sex': ['male', 'female'],
'class' : ['First', 'Second', 'Third'],
'deck' : ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
    'embark_town' : ['Cherbourg', 'Southampton', 'Queenstown'],
'alone' : ['y', 'n']
}
# + colab={} colab_type="code" id="kkxLdrsLwHPT"
categorical_columns = []
for feature, vocab in CATEGORIES.items():
cat_col = tf.feature_column.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab)
categorical_columns.append(tf.feature_column.indicator_column(cat_col))
# + colab={} colab_type="code" id="H18CxpHY_Nma"
# See what you just created.
categorical_columns
# + colab={} colab_type="code" id="p7mACuOsArUH"
categorical_layer = tf.keras.layers.DenseFeatures(categorical_columns)
print(categorical_layer(example_batch).numpy()[0])
# + [markdown] colab_type="text" id="R7-1QG99_1sN"
# This will be become part of a data processing input later when you build the model.
# + [markdown] colab_type="text" id="kPWkC4_1l3IG"
# ### Combined preprocessing layer
# + [markdown] colab_type="text" id="R3QAjo1qD4p9"
# Add the two feature column collections and pass them to a `tf.keras.layers.DenseFeatures` to create an input layer that will extract and preprocess both input types:
# + colab={} colab_type="code" id="3-OYK7GnaH0r"
# TODO 1
preprocessing_layer = tf.keras.layers.DenseFeatures(categorical_columns+numeric_columns)
# + colab={} colab_type="code" id="m7_U_K0UMSVS"
print(preprocessing_layer(example_batch).numpy()[0])
# + [markdown] colab_type="text" id="DlF_omQqtnOP"
# ### Next Step
#
# A next step would be to build a `tf.keras.Sequential` model, starting with the `preprocessing_layer`, which is beyond the scope of this lab. We will cover the Keras Sequential API in the next lesson.
# -
# # Load NumPy data
# ## Load necessary libraries
# First, restart the Kernel. Then, we will start by importing the necessary libraries for this lab.
# +
import numpy as np
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
# -
# ### Load data from `.npz` file
#
# We use the MNIST dataset in Keras.
# +
DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'
path = tf.keras.utils.get_file('mnist.npz', DATA_URL)
with np.load(path) as data:
# TODO 1
train_examples = data['x_train']
train_labels = data['y_train']
test_examples = data['x_test']
test_labels = data['y_test']
# -
# ## Load NumPy arrays with `tf.data.Dataset`
# Assuming you have an array of examples and a corresponding array of labels, pass the two arrays as a tuple into `tf.data.Dataset.from_tensor_slices` to create a `tf.data.Dataset`.
# TODO 2
train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))
# ### Next Step
#
# A next step would be to build a `tf.keras.Sequential` model, starting with the `preprocessing_layer`, which is beyond the scope of this lab. We will cover the Keras Sequential API in the next lesson.
# ### Resources
# 1. Load text data: https://www.tensorflow.org/tutorials/load_data/text
# 2. TF.text: https://www.tensorflow.org/tutorials/tensorflow_text/intro
# 3. Load image data: https://www.tensorflow.org/tutorials/load_data/images
# 4. Read data into a Pandas DataFrame - https://www.tensorflow.org/tutorials/load_data/pandas_dataframe
# 5. How to represent Unicode strings in TensorFlow - https://www.tensorflow.org/tutorials/load_data/unicode
# 6. TFRecord and tf.Example - https://www.tensorflow.org/tutorials/load_data/tfrecord
# Copyright 2020 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
|
preparing-gcp-ml-engineer/introduction-to-tensorflow/load_diff_filedata.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/LambdaTheda/DS-Unit-2-Linear-Models/blob/master/1am_SUN_Mar_8_build_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="sugh27O6FWTw" colab_type="code" colab={}
# read in dataset 1: pokemon Go character sightings
import pandas as pd
#import zipfile
#predictemall.read_csv
#predictemall.zip.extractall(path=None, members=None, pwd=None)
df = pd.read_csv('300k.csv')
pd.set_option("display.max_rows", 91835)
#pd.set_option("display.max_columns", 300)
df
# + id="LUYfSx_CWOJt" colab_type="code" colab={}
df.shape
# + id="fAcTWsznWQ4l" colab_type="code" colab={}
df.head()
# + id="UPpL0A0-O37g" colab_type="code" colab={}
# read in dataset 2: encoded names and corresponding id#s
df_Names = pd.read_csv('pokemonNumbers.csv', header= None)
pd.set_option("display.max_rows", 91835)
df_Names.columns=['pokemonId', 'pokemonName'] # change df_Names column names to align with the proper columns for merge.
df_Names
# + id="l6cdXakqWo-1" colab_type="code" colab={}
df_Names.head(20)
# + id="UKgzEw27Wb3c" colab_type="code" colab={}
df_Names.shape
# + id="5D5tFb2ePIVQ" colab_type="code" colab={}
#EXPLORATORY DATA ANALYSIS/DATA WRANGLING
'''
df1.join(df2) does a left join by default (keeps all rows of df1), but df1.merge(df2) does an inner join by default (returns only matching rows of df1 and df2).
(Source: "What is the difference between join and merge in Pandas?", Stack Overflow)
# merge df and df_Names to make df3
df3 = pd.merge(df_Names, df, on = 'pokemonId')
pd.set_option("display.max_rows", 91835)
df3
'''
# try df.join(other.set_index('key'), on='key')
df3 = df.join(df_Names.set_index('pokemonId'), on='pokemonId')
df3
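The left-join semantics used above can be illustrated without pandas; a minimal sketch with two small hypothetical tables keyed on `pokemonId`:

```python
sightings = [{'pokemonId': 19, 'latitude': 20.5},
             {'pokemonId': 99, 'latitude': 41.0}]
names = {19: 'Rattata'}  # hypothetical lookup table; note that id 99 has no entry

# Left join: every sightings row is kept; unmatched keys get None
joined = [{**row, 'pokemonName': names.get(row['pokemonId'])} for row in sightings]

# Inner join (what pd.merge does by default): only rows whose key exists in both tables survive
inner = [row for row in joined if row['pokemonName'] is not None]
```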
# + id="K97Vz9FDH4DE" colab_type="code" colab={}
df3.head()
# + [markdown] id="z3h9pCFJCryg" colab_type="text"
# # **check if merge preserved pokemonId:pokemonName**
# + id="JqcQo5K98nWC" colab_type="code" colab={}
# ATTEMPT 1: locate row where latitude = 20.525745, longitude = -97.460829 to check if merge preserved pokemonId:pokemonName
#df.iloc[lambda x: df3['latitude'] == 20.525745] - GET: NotImplementedError: iLocation based boolean indexing on an integer type is not available
# + id="stIqMZlu-8nu" colab_type="code" colab={}
# ATTEMPT 2: locate row of before and after merge..
#df.query('latitude = 20.525745 and longitude = -97.460829')
# + id="tVCKZUZmClaT" colab_type="code" colab={}
# ATTEMPT 3: df.loc[df['column_name'] == some_value]
#df.loc[df['latitude'] == 20.525745] and df.loc[df['longitude'] == -97.460829]
'''
To select rows whose column value equals a scalar, some_value, use ==:
Combine multiple conditions with &:
df.loc[(df['column_name'] >= A) & (df['column_name'] <= B)]
'''
#USE LOCATED ROW TO COMPARE w/col vals in df3
df.loc[(df['latitude'] == 20.525745) & (df['longitude'] == -97.460829) & (df['pokemonId'] == 19)]
# + id="5hAspQJ6E7qi" colab_type="code" colab={}
# try to locate and compare with analogous row in df3- found NO ROWS
#df3.loc[(df3['latitude'] == 20.525745) & (df3['longitude'] == -97.460829) & (df3['pokemonName'] == 'Raticate')] - FOUND NO ROWS
#RETURNS A ROW BUT ROW INDEX OFF BY 1; "Raticate" is id# 20, but queried pokemonId was 19 in df_Names - MAY FIX LATER
df3.loc[(df3['latitude'] == 20.525745) & (df3['longitude'] == -97.460829) & (df3['pokemonId'] == 19)]
# + id="0huqvInRDhKp" colab_type="code" colab={}
# ATTEMPT 4:
df.loc[df['latitude'] == 20.525745]
# + id="txb6Es1HNxr3" colab_type="code" colab={}
# MERGED DATAFRAME df3... ROW INDEX OFF BY 1; "Raticate" is id# 20 in df_Names - fix by dropping pokemonId as 300k.csv dataset creator advised
#df.drop(['B', 'C'], axis=1)- SAMPLE
#df.drop(['pokemonId'], axis=1) - didn't work because drop() returns a new DataFrame instead of modifying df in place
#df.drop(columns=['B', 'C'])- SAMPLE
df = df.drop(columns=['pokemonId'])             # reassign the result (or use inplace=True)
df_Names = df_Names.drop(columns=['pokemonId'])
#df_Names.dtypes()
df_Names.head(5)
# + id="O9q2C237R_d8" colab_type="code" colab={}
# STILL CAN'T DROP 'pokemonID' from ANY df!!
'''
df3.drop(columns=['pokemonId'])
df3.head(5)
'''
# + id="J8uB_gw8Y2Vc" colab_type="code" colab={}
#Choosing'appearedDayOfWeek' as target
df['appearedDayOfWeek'].nunique()
# + id="SwZam-oxbs9_" colab_type="code" colab={}
df['appearedDayOfWeek'].value_counts()
# + id="xZAcL6WOZQkj" colab_type="code" colab={}
#df3['appearedDayOfWeek'].nunique()
# + [markdown] id="GJXS0knVOS0B" colab_type="text"
# # If df['appearedDayOfWeek'] changes from 7 next time I reload the file(s), I will choose the new target to predict to be 'continent' (nunique=11?!)
# + id="y_gaAZ1yZPYb" colab_type="code" colab={}
# check df Nulls- returns 0 for all columns
df.isnull().sum()
# + id="jslz4d9GWLYn" colab_type="code" colab={}
# check df_Names Nulls- returns 0 for all columns
df_Names.isnull().sum()
# + id="aTfEczWLVW_I" colab_type="code" colab={}
# check df3 Nulls- returns 0 for all columns
df3.isnull().sum()
# + id="bIMOJ8PCW10H" colab_type="code" colab={}
# THIS TIME AROUND GETTING NO NULLS...
#drop df cols with nulls
#df.dropna() - not work right
'''
df.dropna(axis='columns') # also not work right
df
'''
# + id="EzmogW5KScRD" colab_type="code" colab={}
#df.columns.to_list()
# + id="f59jULiMcEcm" colab_type="code" colab={}
#View correlation matrix
df.corr()
# + id="0qr2JGASdJym" colab_type="code" colab={}
# THIS TIME AROUND GETTING NO NULLS...
#trim df to reduce noise including nulls
'''
df_trimmed = df.drop(columns=['gymIn100m',
'gymIn250m',
'gymIn500m',
'gymIn1000m',
'gymIn2500m',
'gymIn5000m',
'cooc_114',
'cooc_115',
'cooc_116',
'cooc_117',
'cooc_118',
'cooc_119',
'cooc_120',
'cooc_121',
'cooc_122',
'cooc_123',
'cooc_124',
'cooc_125',
'cooc_126',
'cooc_127',
'cooc_128',
'cooc_129',
'cooc_130',
'cooc_131',
'cooc_132',
'cooc_133',
'cooc_134',
'cooc_135',
'cooc_136',
'cooc_137',
'cooc_138',
'cooc_139',
'cooc_140',
'cooc_141',
'cooc_142',
'cooc_143',
'cooc_144',
'cooc_145',
'cooc_146',
'cooc_147',
'cooc_148',
'cooc_149',
'cooc_150',
'cooc_151',
])
'''
# + id="jMxKVn2res1e" colab_type="code" colab={}
#trim df to reduce noise:
''' 1st do
1) CORRELATION MATRIX &
2) HEATMAP to try to determine which columns we may drop that may not or
not significantly affect predicting TARGET = 'continent'
'''
# + id="lBy5BFX-XEZu" colab_type="code" colab={}
df3['continent'].nunique()
# + id="dekbUc4NXQc-" colab_type="code" colab={}
df3['continent'].value_counts()
# + id="M2RNz_HnXnCS" colab_type="code" colab={}
df['continent'].nunique()
# + id="NJC8tT3-cRqQ" colab_type="code" colab={}
#df_trimmed.sort_by('pokemonId', ascending = True)
# + id="wPUDkS06pHZV" colab_type="code" colab={}
#check data distribution to see if a linear model is appropriate- IT IS NOT; THIS IS A MULTI-CLASS CLASSIFICATION PROBLEM
df.plot.hist()  # note: df_trimmed is only defined inside the commented-out block above, so plot df instead
# + id="d8hHCaODZQRd" colab_type="code" colab={}
#Get BASELINE- the accuracy obtained by always predicting the majority class:
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 300000)
'''
For regression, the baseline is typically the mean of the target;
for classification, the baseline is predicting the mode class (the class with the largest number of observations).
'''
df['appearedDayOfWeek'].value_counts()
# our BASELINE IS SATURDAY, WITH 72201 appearances on that day
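The majority-class baseline can be computed directly from the value counts; a minimal sketch with hypothetical counts (only the Saturday count comes from the output above):

```python
from collections import Counter

# Hypothetical class counts for the target
days = ['Saturday'] * 72201 + ['Sunday'] * 50000 + ['Monday'] * 40000
counts = Counter(days)

majority_class, majority_count = counts.most_common(1)[0]
baseline_accuracy = majority_count / len(days)  # accuracy of always predicting the mode class
```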
# + id="Il-kyMlVjUvj" colab_type="code" colab={}
df['appearedDayOfWeek'].nunique()
# + id="vzKtGQnzT6Yr" colab_type="code" colab={}
# Fit a Classification model
# Try a shallow decision tree as a fast, first model
#pip install --upgrade category_encoders
# from sklearn.preprocessing import CategoricalEncoder
#pip install category_encoders
# !pip install category_encoders==2.*
import numpy as np
import category_encoders as ce
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
#encoder = ce.OrdinalEncoder(cols=[...])
target = 'appearedDayOfWeek'
features = ['appearedMonth', 'appearedYear', 'appearedDay', 'pokestopDistanceKm', 'weather', 'latitude', 'longitude']
#train = df
X_train, X_test, y_train, y_test = train_test_split(df[features], df[target])
train, test = train_test_split(df)
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent'),
RandomForestClassifier()
)
# + id="trD2nGSHnbCn" colab_type="code" colab={}
# function to Convert 'appearedDayOfWeek' string values into floats
'''
def convertDayToFloat():
week = list(df['appearedDayOfWeek'].replace('1.0', '2.0', '3.0','4.0','5.0','6.0','7.0'))
print(week)
convertDayToFloat
'''
# + id="rlx1UjBH0CWj" colab_type="code" colab={}
#dictionary to convert 'appearedDayOfWeek' string values into floats
day_to_float = {'Sunday': 1.0, 'dummy_day': 2.0, 'Tuesday': 3.0, 'Wednesday': 4.0, 'Thursday': 5.0, 'Friday': 6.0, 'Saturday': 7.0}  # renamed to avoid shadowing the built-in dict
# + id="ETwfRWzQnNdh" colab_type="code" colab={}
#fit model 1: Random Forest Classifier and score accuracy
# + id="VzB3nmXFnOzC" colab_type="code" colab={}
pipeline.fit(X_train, y_train)
acc_score = pipeline.score(X_test, y_test)  # score against the same split the model was trained on
#ra_score = roc_auc_score(test[target], pipeline.predict(test[features]))
print(f'Test Accuracy: {acc_score}')
# multi-class ROC AUC needs class probabilities rather than hard label predictions:
print(f"Test ROC AUC: {roc_auc_score(y_test, pipeline.predict_proba(X_test), multi_class='ovr')}\n")
'''
print(f'Val Accuracy: {acc_score}')
print(f'Val ROC AUC: {ra_score}')
'''
|
1am_SUN_Mar_8_build_2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python
# language: python
# name: conda-env-python-py
# ---
# <a href="https://www.skills.network/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01"><img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DL0120ENedX/labs/Template%20for%20Instructional%20Hands-on%20Labs/images/IDSNlogo.png" width="400px" align="center"></a>
#
# <h1 align="center"><font size="5">RESTRICTED BOLTZMANN MACHINES</font></h1>
#
# <h3>Introduction</h3>
# <b>Restricted Boltzmann Machine (RBM):</b> RBMs are shallow neural nets that learn to reconstruct data by themselves in an unsupervised fashion.
#
# <h4>Why are RBMs important?</h4>
# An RBM is a basic form of autoencoder. It can automatically extract <b>meaningful</b> features from a given input.
#
# <h4>How does it work?</h4>
# RBM is a 2 layer neural network. Simply, RBM takes the inputs and translates those into a set of binary values that represents them in the hidden layer. Then, these numbers can be translated back to reconstruct the inputs. Through several forward and backward passes, the RBM will be trained, and a trained RBM can reveal which features are the most important ones when detecting patterns.
#
# <h4>What are the applications of an RBM?</h4>
# RBM is useful for <a href='http://www.cs.utoronto.ca/~hinton/absps/netflixICML.pdf?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01'> Collaborative Filtering</a>, dimensionality reduction, classification, regression, feature learning, topic modeling and even <b>Deep Belief Networks</b>.
#
# <h4>Is RBM a generative or Discriminative model?</h4>
# RBM is a generative model. Let's first see the difference between discriminative and generative models:
#
# <b>Discriminative:</b> Consider a classification problem where we want to learn to distinguish between Sedan cars (y = 1) and SUV cars (y = 0), based on some features of cars. Given a training set, an algorithm like logistic regression tries to find a straight line, or <i>decision boundary</i>, that separates the SUVs and sedans.
#
# <b>Generative:</b> looking at cars, we can build a model of what Sedan cars look like. Then, looking at SUVs, we can build a separate model of what SUV cars look like. Finally, to classify a new car, we can match the new car against the Sedan model, and match it against the SUV model, to see whether the new car looks more like the SUV or Sedan.
#
# Generative Models specify a probability distribution over a dataset of input vectors. We can carry out both supervised and unsupervised tasks with generative models:
#
# <ul>
# <li>In an unsupervised task, we try to form a model for $P(x)$, where $P$ is the probability given $x$ as an input vector.</li>
# <li>In the supervised task, we first form a model for $P(x|y)$, where $P$ is the probability of $x$ given $y$(the label for $x$). For example, if $y = 0$ indicates that a car is an SUV, and $y = 1$ indicates that a car is a sedan, then $p(x|y = 0)$ models the distribution of SUV features, and $p(x|y = 1)$ models the distribution of sedan features. If we manage to find $P(x|y)$ and $P(y)$, then we can use <b>Bayes rule</b> to estimate $P(y|x)$, because:
# $$p(y|x) = \frac{p(x|y)p(y)}{p(x)}$$</li>
# </ul>
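Bayes' rule above can be checked numerically; a minimal sketch with made-up class-conditional probabilities and priors (all numbers here are illustrative):

```python
# Hypothetical quantities: p(x|y) from the two class-conditional models, and priors p(y)
p_x_given_suv, p_x_given_sedan = 0.02, 0.08
p_suv, p_sedan = 0.4, 0.6

# Law of total probability: p(x) = sum over classes of p(x|y) * p(y)
p_x = p_x_given_suv * p_suv + p_x_given_sedan * p_sedan

# Bayes rule: p(y|x) = p(x|y) * p(y) / p(x)
p_sedan_given_x = (p_x_given_sedan * p_sedan) / p_x
```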
# Now the question is: can we build a generative model, and then use it to create synthetic data by directly sampling from the modeled probability distributions? Let's see.
#
# <h2>Table of Contents</h2>
# <ol>
# <li><a href="https://#ref1">Initialization</a></li>
# <li><a href="https://#ref2">RBM layers</a></li>
# <li><a href="https://#ref3">What RBM can do after training?</a></li>
# <li><a href="https://#ref4">How to train the model?</a></li>
# <li><a href="https://#ref5">Learned features</a></li>
# </ol>
# <p></p>
# </div>
# <br>
#
# <hr>
#
# <a id="ref1"></a>
#
# <h3>Initialization</h3>
#
# First, we have to load the utility file which contains different utility functions that are not connected
# in any way to the networks presented in the tutorials, but rather help in
# processing the outputs into a more understandable way.
#
import urllib.request
with urllib.request.urlopen("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DL0120EN-SkillsNetwork/labs/Week4/data/utils.py") as url:
response = url.read()
target = open('utils.py', 'w')
target.write(response.decode('utf-8'))
target.close()
# <h2>Installing TensorFlow </h2>
#
# We will install TensorFlow version 2.2.0 and its required prerequisites, as well as Pillow.
#
# !pip install grpcio==1.24.3
# !pip install tensorflow==2.2.0
# !pip install pillow
# <b>Notice:</b> This notebook has been created with TensorFlow version 2.2, and might not work with other versions. Therefore we check:
#
# +
import tensorflow as tf
from IPython.display import Markdown, display
def printmd(string):
display(Markdown('# <span style="color:red">'+string+'</span>'))
if not tf.__version__ == '2.2.0':
printmd('<<<<<!!!!! ERROR !!!! please upgrade to TensorFlow 2.2.0, or restart your Kernel (Kernel->Restart & Clear Output)>>>>>')
# -
# Now, we load in all the packages that we use to create the net including the TensorFlow package:
#
# +
import tensorflow as tf
import numpy as np
from PIL import Image
from utils import tile_raster_images
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# <hr>
#
# <a id="ref2"></a>
#
# <h3>RBM layers</h3>
#
# An RBM has two layers. The first layer of the RBM is called the <b>visible</b> (or input) layer. Imagine that our toy example has only vectors with 7 values, so the visible layer must have $V=7$ input nodes.
# The second layer is the <b>hidden</b> layer, which has $H$ neurons in our case. Each hidden node takes on values of either 0 or 1 (i.e., $h_i = 1$ or $h_i = 0$), with a probability that is a logistic function of the inputs it receives from the $V$ visible units, called, for example, $p(h_i = 1)$. For our toy sample, we'll use 2 nodes in the hidden layer, so $H = 2$.
#
# <center><img src="https://ibm.box.com/shared/static/eu26opvcefgls6vnwuo29uwp0nudmokh.png" alt="RBM Model" style="width: 400px;"></center>
#
# Each node in the first layer also has a <b>bias</b>. We will denote the vector of visible biases as $v_{bias}$; it holds one bias value for each of the $V$ visible units.
#
# The <b>bias</b> of the second layer is defined similarly as $h_{bias}$, with one bias value for each of the $H$ hidden units.
#
v_bias = tf.Variable(tf.zeros([7]), tf.float32)
h_bias = tf.Variable(tf.zeros([2]), tf.float32)
# We have to define weights among the input layer and hidden layer nodes. In the weight matrix, the number of rows are equal to the input nodes, and the number of columns are equal to the output nodes. We define a tensor $\mathbf{W}$ of shape = (7,2), where the number of visible neurons = 7, and the number of hidden neurons = 2.
#
W = tf.constant(np.random.normal(loc=0.0, scale=1.0, size=(7, 2)).astype(np.float32))
# <hr>
#
# <a id="ref3"></a>
#
# <h3>What RBM can do after training?</h3>
# Think of RBM as a model that has been trained based on images of a dataset of many SUV and sedan cars. Also, imagine that the RBM network has only two hidden nodes, where one node encodes the weight and the other encodes the size.
# In a sense, the different configurations represent different cars, where one is an SUV and the other is a Sedan. In the training process, through many forward and backward passes, the RBM adjusts its weights to send a stronger signal to either the SUV node (0, 1) or the sedan node (1, 0) in the hidden layer, given the pixels of images. Now, given an SUV in the hidden layer, which distribution of pixels should we expect? RBM can give you 2 things. First, it encodes your images in the hidden layer. Second, it gives you the probability of observing a case, given some hidden values.
#
# <h3>The Inference Process</h3>
#
# RBM has two phases:
#
# <ul>
# <li>Forward Pass</li>
# <li>Backward Pass or Reconstruction</li>
# </ul>
#
# <b>Phase 1) Forward pass:</b>
#
# Input one training sample (one image) $\mathbf{x}$ through all visible nodes, and pass it to all hidden nodes. Processing happens in each node in the hidden layer. This computation begins by making stochastic decisions about whether to transmit that input or not (i.e. to determine the state of each hidden layer). First, the probability vector is computed using the input feature vector $\mathbf{x}$, the weight matrix $\mathbf{W}$, and the bias term $h\_{bias}$, as
#
# $$p({h_j}|\mathbf x)= \sigma\left( \sum_{i=1}^V W_{ij} x_i + h_{bias} \right),$$
#
# where $\sigma(z) = (1+e^{-z})^{-1}$ is the logistic function.
#
# So, what does $p({h_j})$ represent? It is the <b>probability distribution</b> of the hidden units. That is, RBM uses inputs $x_i$ to make predictions about hidden node activations. For example, imagine that the hidden node activation values are \[0.51 0.84] for the first training item. It tells you that the conditional probability for each hidden neuron for Phase 1 is:
#
# $$p(h_{1} = 1|\mathbf{v}) = 0.51$$
# $$p(h_{2} = 1|\mathbf{v}) = 0.84$$
#
# As a result, for each row in the training set, a vector of probabilities is generated. In TensorFlow, this is referred to as a `tensor` with a shape of (1,2).
#
# We then turn on unit $j$ with probability $p(h_{j}|\mathbf{v})$, and turn it off with probability $1 - p(h_{j}|\mathbf{v})$, by generating a uniform random number vector $\mathbf{\xi}$ and comparing it to the activation probabilities as
#
# <center>If $\xi_j<p(h_{j}|\mathbf{v})$, then $h_j=1$, else $h_j=0$.</center>
#
# Therefore, the conditional probability of a configuration of $\mathbf{h}$ given $\mathbf{v}$ (for a training sample) is:
#
# $$p(\mathbf{h} \mid \mathbf{v}) = \prod_{j=1}^H p(h_j \mid \mathbf{v})$$
#
# where $H$ is the number of hidden units.
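This sampling rule (a unit fires exactly when its uniform variate falls below its activation probability, which is what the `tf.sign(h_prob - random_uniform)` trick in the code computes) can be checked with plain NumPy; a minimal sketch with illustrative probabilities:

```python
import numpy as np

rng = np.random.default_rng(42)

h_prob = np.array([0.51, 0.84])        # hypothetical activation probabilities p(h_j | v)
xi = rng.uniform(size=h_prob.shape)    # uniform random variates in [0, 1)

# A hidden unit fires exactly when its variate falls below its activation probability
h_state = (xi < h_prob).astype(np.float32)
```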
#
# Before we go further, let's look at a toy example for one case out of all input. Assume that we have a trained RBM, and a very simple input vector, such as [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0].\
# Let's see what the output of the forward pass would look like:
#
# +
X = tf.constant([[1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]], tf.float32)
v_state = X
print ("Input: ", v_state)
h_bias = tf.constant([0.1, 0.1])
print ("hb: ", h_bias)
print ("w: ", W)
# Calculate the probabilities of turning the hidden units on:
h_prob = tf.nn.sigmoid(tf.matmul(v_state, W) + h_bias) #probabilities of the hidden units
print ("p(h|v): ", h_prob)
# Draw samples from the distribution:
h_state = tf.nn.relu(tf.sign(h_prob - tf.random.uniform(tf.shape(h_prob)))) #states
print ("h0 states:", h_state)
# -
# <b>Phase 2) Backward Pass (Reconstruction):</b>
# The RBM reconstructs data by making several forward and backward passes between the visible and hidden layers.
#
# So, in the second phase (i.e. reconstruction phase), the samples from the hidden layer (i.e. $\mathbf h$) become the input in the backward pass. The same weight matrix and the visible layer biases are passed to the sigmoid function. The reproduced output is a reconstruction which is an approximation of the original input.
#
vb = tf.constant([0.1, 0.2, 0.1, 0.1, 0.1, 0.2, 0.1])
print ("b: ", vb)
v_prob = tf.nn.sigmoid(tf.matmul(h_state, tf.transpose(W)) + vb)
print ("p(vi∣h): ", v_prob)
v_state = tf.nn.relu(tf.sign(v_prob - tf.random.uniform(tf.shape(v_prob))))
print ("v probability states: ", v_state)
# RBM learns a probability distribution over the input, and then, after being trained, the RBM can generate new samples from the learned probability distribution. As you know, a <b>probability distribution</b> is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment.
#
# The (conditional) probability distribution over the visible units v is given by
#
# $$p(\mathbf{v} \mid \mathbf{h}) = \prod_{i=1}^V p(v_i \mid \mathbf{h}),$$
#
# where,
#
# $$p(v_i \mid \mathbf{h}) = \sigma\left(\sum_{j=1}^H W_{ji} h_j + v_{bias} \right)$$
#
# So, given the current state of the hidden units and weights, what is the probability of generating [1. 0. 0. 1. 0. 0. 0.] in the reconstruction phase, based on the above <b>probability distribution</b> function?
#
# +
inp = X
print("input X:" , inp.numpy())
print("probability vector:" , v_prob[0].numpy())
v_probability = 1
for elm, p in zip(inp[0],v_prob[0]) :
if elm ==1:
v_probability *= p
else:
v_probability *= (1-p)
print("probability of generating X: " , v_probability.numpy())
# -
# How similar are vectors $\mathbf{x}$ and $\mathbf{v}$? Of course, the reconstructed values most likely will not look anything like the input vector, because our network has not been trained yet. Our objective is to train the model in such a way that the input vector and reconstructed vector come out the same. Therefore, based on how different the input values look from the ones that we just reconstructed, the weights are adjusted.
#
# <hr>
#
# <h2>MNIST</h2>
#
# We will be using the MNIST dataset to practice the usage of RBMs. The following cell loads the MNIST dataset.
#
# +
#loading training and test data
mnist = tf.keras.datasets.mnist
(trX, trY), (teX, teY) = mnist.load_data()
# showing an example of the Flatten class and operation
from tensorflow.keras.layers import Flatten
flatten = Flatten(dtype='float32')
trX = flatten(trX/255.0)  # scale pixel values to [0, 1] and flatten each 28x28 image to 784 values
# trY holds integer class labels, so it should not be scaled or flattened
# -
# Let's look at the dimension of the images.
#
# MNIST images have 784 pixels, so the visible layer must have 784 input nodes. For our case, we'll use 50 nodes in the hidden layer, so $H = 50$.
#
vb = tf.Variable(tf.zeros([784]), tf.float32)
hb = tf.Variable(tf.zeros([50]), tf.float32)
# Let $\mathbf W$ be the Tensor of 784x50 (784 - number of visible neurons, 50 - number of hidden neurons) that represents weights between the neurons.
#
W = tf.Variable(tf.zeros([784,50]), tf.float32)
# Let's define the visible layer:
#
# +
v0_state = tf.Variable(tf.zeros([784]), tf.float32)
#testing to see if the matrix product works
tf.matmul( [v0_state], W)
# -
# Now, we can define the hidden layer:
#
# +
#computing the hidden nodes probability vector and checking shape
h0_prob = tf.nn.sigmoid(tf.matmul([v0_state], W) + hb) #probabilities of the hidden units
print("h0_state shape: " , tf.shape(h0_prob))
#defining a function to return only the generated hidden states
def hidden_layer(v0_state, W, hb):
h0_prob = tf.nn.sigmoid(tf.matmul([v0_state], W) + hb) #probabilities of the hidden units
h0_state = tf.nn.relu(tf.sign(h0_prob - tf.random.uniform(tf.shape(h0_prob)))) #sample_h_given_X
return h0_state
h0_state = hidden_layer(v0_state, W, hb)
print("first 15 hidden states: ", h0_state[0][0:15])
# -
# Now, we define the reconstruction part:
#
# +
def reconstructed_output(h0_state, W, vb):
v1_prob = tf.nn.sigmoid(tf.matmul(h0_state, tf.transpose(W)) + vb)
v1_state = tf.nn.relu(tf.sign(v1_prob - tf.random.uniform(tf.shape(v1_prob)))) #sample_v_given_h
return v1_state[0]
v1_state = reconstructed_output(h0_state, W, vb)
print("hidden state shape: ", h0_state.shape)
print("v0 state shape: ", v0_state.shape)
print("v1 state shape: ", v1_state.shape)
# -
# <h3>What is the objective function?</h3>
#
# <b>Goal</b>: Maximize the likelihood of our data being drawn from that distribution
#
# <b>Calculate error:</b>\
# In each epoch, we compute the "error" as the mean of the squared differences between step 1 and step n,
# i.e., the error shows the difference between the data and its reconstruction.
#
# <b>Note:</b> tf.reduce_mean computes the mean of elements across dimensions of a tensor.
#
# +
def error(v0_state, v1_state):
return tf.reduce_mean(tf.square(v0_state - v1_state))
err = tf.reduce_mean(tf.square(v0_state - v1_state))
print("error" , err.numpy())
# -
# <a id="ref4"></a>
#
# <h3>Training the Model</h3>
# <b>Warning...</b> The following part is math-heavy, but you can skip it if you just want to run the cells in the next section.
#
# As mentioned, we want to give a high probability to the input data we train on. So, in order to train an RBM, we have to maximize the product of probabilities assigned to all rows $\mathbf{v}$ (images) in the training set $\mathbf{V}$ (a matrix, where each row of it is treated as a visible vector $\mathbf{v}$)
#
# $$\arg \max_W \prod_{\mathbf{v}\in\mathbf{V}} p(\mathbf{v})$$
#
# which is equivalent to maximizing the expectation of the log probability, given as
#
# $$\arg\max_W \; \mathbb{E}\left[ \sum_{\mathbf v\in \mathbf V}\log p(\mathbf v) \right].$$
#
# So, we have to update the weights $W_{ij}$ to increase $p(\mathbf{v})$ for all $\mathbf{v}$ in our training data during training. That requires calculating the derivative:
#
# $$\frac{\partial \log p(\mathbf v)}{\partial W_{ij}}$$
#
# This cannot be easily done by typical <b>stochastic gradient descent (SGD)</b>, so we can use another approach, which has 2 steps:
#
# <ol>
# <li>Gibbs Sampling</li>
# <li>Contrastive Divergence</li>
# </ol>
#
# <h3>Gibbs Sampling</h3>
#
# <h4>Gibbs Sampling Step 1</h4>
# Given an input vector $\mathbf{v}$, we are using $p(\mathbf{h}|\mathbf{v})$ to predict the hidden values $\mathbf{h}$.
# $$p({h_j}|\mathbf v)= \sigma\left(\sum_{i=1}^V W_{ij} v_i + h_{bias} \right)$$
# The samples are generated from this distribution by generating the uniform random variate vector $\mathbf{\xi} \sim U[0,1]$ of length $H$ and comparing to the computed probabilities as
#
# <center>If $\xi_j<p(h_{j}|\mathbf{v})$, then $h_j=1$, else $h_j=0$.</center>
#
# <h4>Gibbs Sampling Step 2</h4>
# Then, knowing the hidden values, we use $p(\mathbf v| \mathbf h)$ for reconstructing of new input values v.
#
# $$p({v_i}|\mathbf h)= \sigma\left(\sum_{j=1}^H W^{T}_{ij} h_j + v_{bias} \right)$$
#
# The samples are generated from this distribution by generating a uniform random variate vector $\mathbf{\xi} \sim U[0,1]$ of length $V$ and comparing to the computed probabilities as
#
# <center>If $\xi_i<p(v_{i}|\mathbf{h})$, then $v_i=1$, else $v_i=0$.</center>
#
# Let vectors $\mathbf v_k$ and $\mathbf h_k$ denote the states at the $k$th iteration. In general, the $k$th state is generated as:
#
# <b>Iteration</b> $k$:
#
# $$\mathbf v_{k-1} \Rightarrow p(\mathbf h_{k-1}|\mathbf v_{k-1})\Rightarrow \mathbf h_{k-1}\Rightarrow p(\mathbf v_{k}|\mathbf h_{k-1})\Rightarrow \mathbf v_k$$
#
# <h3>Contrastive Divergence (CD-k)</h3>
# The update of the weight matrix is done during the Contrastive Divergence step.
#
# Vectors v0 and vk are used to calculate the activation probabilities for hidden values h0 and hk. The difference between the outer products of those probabilities with input vectors v0 and vk results in the update matrix:
#
# $$\Delta \mathbf W_k =\mathbf v_k \otimes \mathbf h_k - \mathbf v\_{k-1} \otimes \mathbf h\_{k-1}$$
#
# Contrastive Divergence is actually matrix of values that is computed and used to adjust values of the $\mathbf W$ matrix. Changing $\mathbf W$ incrementally leads to training of the $\mathbf W$ values. Then, on each step (epoch), $\mathbf W$ is updated using the following:
#
# $$\mathbf W_k = \mathbf W_{k-1} + \alpha \, \Delta \mathbf W_k$$
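# A small NumPy sketch (not part of the original lesson) of one CD-1 update, using activation probabilities directly and omitting the biases for brevity; the toy sizes and learning rate are made up. The update is the data term minus the reconstruction term, matching the TensorFlow training loop later in the notebook:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical toy sizes and learning rate
V, H, alpha = 6, 4, 0.1
W = rng.normal(scale=0.1, size=(V, H))
v0 = rng.integers(0, 2, size=V).astype(float)   # one "data" vector

# One Gibbs step v0 -> h0 -> v1 -> h1 (probabilities, no binary sampling)
h0 = sigmoid(v0 @ W)
v1 = sigmoid(h0 @ W.T)
h1 = sigmoid(v1 @ W)

# CD-1 update: outer product of the data pair minus that of the reconstruction pair
delta_W = np.outer(v0, h0) - np.outer(v1, h1)
W = W + alpha * delta_W
print(W.shape)
```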
#
# Reconstruction steps:
#
# <ul>
# <li> Get one data point from the data set, like <i>x</i>, and pass it through the following steps:</li>
#
# <b>Iteration</b> $k=1$:
#
# Sampling (starting with the input image)
# $$\mathbf x = \mathbf v_0 \Rightarrow p(\mathbf h_0|\mathbf v_0)\Rightarrow \mathbf h_0 \Rightarrow p(\mathbf v_1|\mathbf h_0)\Rightarrow \mathbf v_1$$
# followed by the CD-k step
# $$\Delta \mathbf W_1 =\mathbf v_0 \otimes \mathbf h_0 - \mathbf v_1 \otimes \mathbf h_1$$
# $$\mathbf W_1 = \mathbf W_0 + \alpha \, \Delta \mathbf W_1$$
#
# <li> $\mathbf v_1$ is the reconstruction of $\mathbf x$, sent to the next iteration.</li>
#
# <b>Iteration</b> $k=2$:
#
# Sampling (starting with $\mathbf v_1$)
#
# $$\mathbf v_1 \Rightarrow p(\mathbf h_1|\mathbf v_1)\Rightarrow \mathbf h_1\Rightarrow p(\mathbf v_2|\mathbf h_1)\Rightarrow \mathbf v_2$$
#
# followed by the CD-k step
# $$\Delta \mathbf W_2 =\mathbf v_1 \otimes \mathbf h_1 - \mathbf v_2 \otimes \mathbf h_2$$
# $$\mathbf W_2 = \mathbf W_1 + \alpha \, \Delta \mathbf W_2$$
#
# <li> $\mathbf v_2$ is the reconstruction of $\mathbf v_1$, sent to the next iteration.</li>
#
# <b>Iteration</b> $k=K$:
#
# Sampling (starting with $\mathbf v_{K-1}$)
#
# $$\mathbf v_{K-1} \Rightarrow p(\mathbf h_{K-1}|\mathbf v_{K-1})\Rightarrow \mathbf h_{K-1}\Rightarrow p(\mathbf v_K|\mathbf h_{K-1})\Rightarrow \mathbf v_K$$
#
# followed by the CD-k step
# $$\Delta \mathbf W_K =\mathbf v_{K-1} \otimes \mathbf h_{K-1} - \mathbf v_K \otimes \mathbf h_K$$
# $$\mathbf W_K = \mathbf W_{K-1} + \alpha \, \Delta \mathbf W_K$$
#
# <b>What is $\alpha$?</b>\
# Here, $\alpha$ is a small step size, also known as the "learning rate".
#
# $K$ is adjustable, and good performance can be achieved with $K=1$, so that we just take one set of sampling steps per image.
#
h1_prob = tf.nn.sigmoid(tf.matmul([v1_state], W) + hb)
h1_state = tf.nn.relu(tf.sign(h1_prob - tf.random.uniform(tf.shape(h1_prob)))) #sample_h_given_X
# Let's look at the error of the first run:
#
print("error: ", error(v0_state, v1_state))
# +
#Parameters
alpha = 0.01
epochs = 1
batchsize = 200
weights = []
errors = []
batch_number = 0
K = 1
#creating datasets
train_ds = \
tf.data.Dataset.from_tensor_slices((trX, trY)).batch(batchsize)
for epoch in range(epochs):
for batch_x, batch_y in train_ds:
batch_number += 1
for i_sample in range(batchsize):
for k in range(K):
v0_state = batch_x[i_sample]
h0_state = hidden_layer(v0_state, W, hb)
v1_state = reconstructed_output(h0_state, W, vb)
h1_state = hidden_layer(v1_state, W, hb)
delta_W = tf.matmul(tf.transpose([v0_state]), h0_state) - tf.matmul(tf.transpose([v1_state]), h1_state)
W = W + alpha * delta_W
vb = vb + alpha * tf.reduce_mean(v0_state - v1_state, 0)
hb = hb + alpha * tf.reduce_mean(h0_state - h1_state, 0)
v0_state = v1_state
if i_sample == batchsize-1:
err = error(batch_x[i_sample], v1_state)
errors.append(err)
weights.append(W)
print ( 'Epoch: %d' % epoch,
"batch #: %i " % batch_number, "of %i" % int(60e3/batchsize),
"sample #: %i" % i_sample,
'reconstruction error: %f' % err)
# -
# Let's take a look at the errors at the end of each batch:
#
plt.plot(errors)
plt.xlabel("Batch Number")
plt.ylabel("Error")
plt.show()
# What is the final weight matrix $W$ after training?
#
print(W.numpy()) # a weight matrix of shape (784, 50): one column per hidden unit
# <a id="ref5"></a>
#
# <h3>Learned features</h3>
#
# We can take each hidden unit and visualize the connections between that hidden unit and each element in the input vector. In our case, we have 50 hidden units. Let's visualize them.
#
# Let's plot the current weights: <b>tile_raster_images</b> helps generate an easy-to-grasp image from a set of samples or weights. It transforms the weight matrix (with one flattened image per row of size 784) into an array of $28\times28$ images that are reshaped and laid out like tiles on a floor.
#
tile_raster_images(X=W.numpy().T, img_shape=(28, 28), tile_shape=(5, 10), tile_spacing=(1, 1))
import matplotlib.pyplot as plt
from PIL import Image
# %matplotlib inline
image = Image.fromarray(tile_raster_images(X=W.numpy().T, img_shape=(28, 28) ,tile_shape=(5, 10), tile_spacing=(1, 1)))
### Plot image
plt.rcParams['figure.figsize'] = (18.0, 18.0)
imgplot = plt.imshow(image)
imgplot.set_cmap('gray')
# Each tile in the above visualization corresponds to a vector of connections between a hidden unit and visible layer's units.
#
# Let's look at one of the learned weights corresponding to one of the hidden units, for example. In this particular square, gray represents a weight of 0; the whiter a pixel is, the more positive the weight (closer to 1), and the darker it is, the more negative the weight. Positive weights increase the probability of activation in the hidden unit (after multiplying by the input/visible pixels), and negative weights decrease the probability of the hidden unit being 1 (activated). Why is this important? It shows that this specific square (hidden unit) can detect a feature (e.g. a "/" shape) if it exists in the input.
#
from PIL import Image
image = Image.fromarray(tile_raster_images(X =W.numpy().T[10:11], img_shape=(28, 28),tile_shape=(1, 1), tile_spacing=(1, 1)))
### Plot image
plt.rcParams['figure.figsize'] = (4.0, 4.0)
imgplot = plt.imshow(image)
imgplot.set_cmap('gray')
# Let's look at the reconstruction of an image now. Imagine that we have a damaged image of the digit 3. Let's see if our trained network can fix it:
#
# First we plot the image:
#
# !wget -O destructed3.jpg https://ibm.box.com/shared/static/vvm1b63uvuxq88vbw9znpwu5ol380mco.jpg
img = Image.open('destructed3.jpg')
img
# Now let's pass this image through the neural net:
#
# +
# convert the image to a 1d numpy array
sample_case = np.array(img.convert('I').resize((28,28))).ravel().reshape((1, -1))/255.0
sample_case = tf.cast(sample_case, dtype=tf.float32)
# -
# Feed the sample case into the network and reconstruct the output:
#
# +
hh0_p = tf.nn.sigmoid(tf.matmul(sample_case, W) + hb)
hh0_s = tf.round(hh0_p)
print("Probability nodes in hidden layer:" ,hh0_p)
print("activated nodes in hidden layer:" ,hh0_s)
# reconstruct
vv1_p = tf.nn.sigmoid(tf.matmul(hh0_s, tf.transpose(W)) + vb)
print(vv1_p)
#rec_prob = sess.run(vv1_p, feed_dict={ hh0_s: hh0_s_val, W: prv_w, vb: prv_vb})
# -
# Here we plot the reconstructed image:
#
img = Image.fromarray(tile_raster_images(X=vv1_p.numpy(), img_shape=(28, 28),tile_shape=(1, 1), tile_spacing=(1, 1)))
plt.rcParams['figure.figsize'] = (4.0, 4.0)
imgplot = plt.imshow(img)
imgplot.set_cmap('gray')
# <hr>
#
# ## Want to learn more?
#
# Also, you can use **Watson Studio** to run these notebooks faster with bigger datasets. **Watson Studio** is IBM’s leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, **Watson Studio** enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of **Watson Studio** users today with a free account at [Watson Studio](https://cocl.us/ML0120EN_DSX). This is the end of this lesson. Thank you for reading this notebook, and good luck on your studies.
#
# ### Thanks for completing this lesson!
#
# Notebook created by: <a href = "https://ca.linkedin.com/in/saeedaghabozorgi?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01"><NAME></a>
#
# Updated to TF 2.X by <a href="https://ca.linkedin.com/in/nilmeier?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01"> <NAME></a><br />
#
# ### References:
#
# [https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine](https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01)\
# [http://deeplearning.net/tutorial/rbm.html](http://deeplearning.net/tutorial/rbm.html?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01)\
# [http://www.cs.utoronto.ca/~hinton/absps/netflixICML.pdf](http://www.cs.utoronto.ca/~hinton/absps/netflixICML.pdf?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01)<br>
# <http://imonad.com/rbm/restricted-boltzmann-machine/>
#
# <hr>
#
# Copyright © 2018 [Cognitive Class](https://cocl.us/DX0108EN_CC). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01).
#
ML0120EN-4.1-Review-RBMMNIST.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Activity recognition with healthy older people using a batteryless wearable sensor Data Set (d1p01M)
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
# # Importing datasets
data = pd.read_csv("d1p01M.csv")
data.head()
data.columns
# # Splitting Data for training and testing
from sklearn.model_selection import train_test_split
X_Data=data[['Time in seconds starting from 0 rounded to the closest 0.025s','Acceleration reading in G for frontal axis','Acceleration reading in G for vertical axis','Acceleration reading in G for lateral axis','Id of antenna reading sensor','Received signal strength indicator (RSSI)', 'Phase', 'Frequency']]
y_Data=data[['Label of activity']]
X_train,X_test,y_train,y_test = train_test_split(X_Data,y_Data,test_size=0.2,random_state=42)
# # Decision Trees
#CART
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
modelDT = DecisionTreeClassifier()
modelDT.fit(X_train, np.ravel(y_train,order='C'))
predictDT=modelDT.predict(X_test)
# +
##evaluation of metrics
#Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
cm = confusion_matrix(y_test,predictDT)
cm_display = ConfusionMatrixDisplay(cm).plot()
#Classification Report
from sklearn.metrics import classification_report
print(classification_report(y_test, predictDT))
# -
# # Random Forest
from sklearn.ensemble import RandomForestClassifier
modelRFC=RandomForestClassifier()
modelRFC.fit(X_train, np.ravel(y_train,order='C'))
predictRFC=modelRFC.predict(X_test)
# +
##evaluation of metrics
#Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
cm = confusion_matrix(y_test,predictRFC)
cm_display = ConfusionMatrixDisplay(cm).plot()
#Classification Report
from sklearn.metrics import classification_report
print(classification_report(y_test, predictRFC))
# -
# # Naive Bayes
from sklearn.naive_bayes import GaussianNB
modelGNB = GaussianNB()
modelGNB.fit(X_train, np.ravel(y_train,order='C'))
predictGNB=modelGNB.predict(X_test)
# +
##evaluation of metrics
#Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
cm = confusion_matrix(y_test,predictGNB)
cm_display = ConfusionMatrixDisplay(cm).plot()
#Classification Report
from sklearn.metrics import classification_report
print(classification_report(y_test, predictGNB))
# -
# # AdaBoost
from sklearn.ensemble import AdaBoostClassifier
modelAda = AdaBoostClassifier()
modelAda.fit(X_train, np.ravel(y_train,order='C'))
predictAda=modelAda.predict(X_test)
# +
##evaluation of metrics
#Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
cm = confusion_matrix(y_test,predictAda)
cm_display = ConfusionMatrixDisplay(cm).plot()
#Classification Report
from sklearn.metrics import classification_report
print(classification_report(y_test, predictAda))
# -
# # Data normalization
# +
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
scaler.fit(X_train)  # fit on the training data only, to avoid information leakage
X_train_normalized = scaler.transform(X_train)
X_test_normalized = scaler.transform(X_test)  # reuse the training-set min/max
# -
# # K-Nearest Neighbours
from sklearn.neighbors import KNeighborsClassifier
modelKNN = KNeighborsClassifier()
modelKNN.fit(X_train_normalized, np.ravel(y_train,order='C'))
predictKNN=modelKNN.predict(X_test_normalized)
# +
##evaluation of metrics
#Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
cm = confusion_matrix(y_test,predictKNN)
cm_display = ConfusionMatrixDisplay(cm).plot()
#Classification Report
from sklearn.metrics import classification_report
print(classification_report(y_test, predictKNN))
# -
# # Support Vector Machine
#Support Vector Classifier
from sklearn.svm import SVC
modelSVC = SVC(probability=True)
modelSVC.fit(X_train_normalized, np.ravel(y_train,order='C'))
predictSVC=modelSVC.predict(X_test_normalized)
# +
##evaluation of metrics
#Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
cm = confusion_matrix(y_test,predictSVC)
cm_display = ConfusionMatrixDisplay(cm).plot()
#Classification Report
from sklearn.metrics import classification_report
print(classification_report(y_test, predictSVC))
# -
# Linear Support Vector Classifier
from sklearn.svm import LinearSVC
modelLSVC = LinearSVC()
modelLSVC.fit(X_train_normalized, np.ravel(y_train,order='C'))
predictLSVC=modelLSVC.predict(X_test_normalized)
# +
##evaluation of metrics
#Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
cm = confusion_matrix(y_test,predictLSVC)
cm_display = ConfusionMatrixDisplay(cm).plot()
#Classification Report
from sklearn.metrics import classification_report
print(classification_report(y_test, predictLSVC))
# -
# # Multi-Layer Perceptron
from sklearn.neural_network import MLPClassifier
modelMLP = MLPClassifier()
modelMLP.fit(X_train_normalized, np.ravel(y_train,order='C'))
predictMLP=modelMLP.predict(X_test_normalized)
# +
##evaluation of metrics
#Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
cm = confusion_matrix(y_test,predictMLP)
cm_display = ConfusionMatrixDisplay(cm).plot()
#Classification Report
from sklearn.metrics import classification_report
print(classification_report(y_test, predictMLP))
Codes/d1p01M.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lesson 3 Demo 4: Using the WHERE Clause
#
# ### In this demo we are going to walk through the basics of using the WHERE clause in Apache Cassandra.
# #### We will use a Python wrapper / Python driver called cassandra to run the Apache Cassandra queries. This library should be preinstalled, but you can install it locally by running this command in a notebook:
# # ! pip install cassandra-driver
# #### More documentation can be found here: https://datastax.github.io/python-driver/
# #### Import Apache Cassandra python package
import cassandra
# ### First let's create a connection to the database
from cassandra.cluster import Cluster
try:
cluster = Cluster(['127.0.0.1']) #If you have a locally installed Apache Cassandra instance
session = cluster.connect()
except Exception as e:
print(e)
# ### Let's create a keyspace to do our work in
# +
try:
session.execute("""
CREATE KEYSPACE IF NOT EXISTS udacity
WITH REPLICATION =
{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }"""
)
except Exception as e:
print(e)
# -
# #### Connect to our Keyspace. Compare this to how we had to create a new session in PostgreSQL.
try:
session.set_keyspace('udacity')
except Exception as e:
print(e)
# ### Let's imagine we would like to start creating a new Music Library of albums.
# ### We want to ask 4 questions of our data
# #### 1. Give me every album in my music library that was released in a given year
# `select * from music_library WHERE YEAR=1970`
# #### 2. Give me the album that is in my music library that was released in a given year by "The Beatles"
# `select * from music_library WHERE YEAR = 1970 AND ARTIST_NAME = "The Beatles"`
# #### 3. Give me all the albums released in a given year in a given location
# `select * from music_library WHERE YEAR = 1970 AND LOCATION = "Liverpool"`
# #### 4. Give me the city where the album "Let It Be" was recorded
# `select city from music_library WHERE YEAR = 1970 AND ARTIST_NAME = "The Beatles" AND ALBUM_NAME="Let it Be"`
#
# ### Here is our Collection of Data
# Please refer to table 3 in the video
# ### How should we model this data? What should be our Primary Key and Partition Key? Since our data is looking for the YEAR let's start with that. From there we will add clustering columns on Artist Name and Album Name.
#
# `Table Name: music_library
# column 1: Year
# column 2: Artist Name
# column 3: Album Name
# Column 4: City
# PRIMARY KEY(year, artist_name, album_name)`
query = "CREATE TABLE IF NOT EXISTS music_library "
query = query + "(year int, artist_name text, album_name text, city text, PRIMARY KEY (year, artist_name, album_name))"
try:
session.execute(query)
except Exception as e:
print(e)
# ### Let's insert our data into our table
# +
query = "INSERT INTO music_library (year, artist_name, album_name, city)"
query = query + " VALUES (%s, %s, %s, %s)"
try:
session.execute(query, (1970, "The Beatles", "Let it Be", "Liverpool"))
except Exception as e:
print(e)
try:
session.execute(query, (1965, "The Beatles", "Rubber Soul", "Oxford"))
except Exception as e:
print(e)
try:
session.execute(query, (1965, "The Who", "My Generation", "London"))
except Exception as e:
print(e)
try:
session.execute(query, (1966, "The Monkees", "The Monkees", "Los Angeles"))
except Exception as e:
print(e)
try:
session.execute(query, (1970, "The Carpenters", "Close To You", "San Diego"))
except Exception as e:
print(e)
# -
# ### Let's Validate our Data Model with our 4 queries.
#
# `select * from music_library WHERE YEAR=1970`
# +
query = "select * from music_library WHERE YEAR=1970"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.artist_name, row.album_name, row.city)
# -
# ### Success, it worked! Let's try the 2nd query.
# `select * from music_library WHERE YEAR = 1970 AND ARTIST_NAME = "The Beatles"`
# +
query = "select * from music_library WHERE YEAR=1970 AND ARTIST_NAME = 'The Beatles'"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.artist_name, row.album_name, row.city)
# -
# ### Success, it worked! Let's try the 3rd query.
# `select * from music_library WHERE YEAR = 1970 AND LOCATION = "Liverpool"`
# +
query = "select * from music_library WHERE YEAR = 1970 AND LOCATION = 'Liverpool'"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.artist_name, row.album_name, row.city)
# -
# ### Error! You cannot filter on a column, or on a clustering column, without also restricting the clustering columns defined before it. Let's see if we can try it a different way.
# `select city from music_library WHERE YEAR = "1970" AND ARTIST_NAME = "The Beatles" AND ALBUM_NAME="Let it Be"`
#
# +
query = "select city from music_library WHERE YEAR = 1970 AND ARTIST_NAME = 'The Beatles' AND ALBUM_NAME='Let it Be'"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.city)
# -
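# ### Aside (not part of the original demo): if you genuinely need to filter on a column that is not part of the primary key, Cassandra requires either a secondary index or the `ALLOW FILTERING` clause, which forces a full scan and is discouraged for anything beyond small data sets. A hypothetical sketch of such a query:

```python
# Hypothetical variant of query 3, filtering on the non-key column `city`.
# ALLOW FILTERING makes Cassandra accept the query at the cost of scanning
# every row, so it is discouraged outside of small data sets.
query = "select * from music_library WHERE year = 1970 AND city = 'Liverpool' ALLOW FILTERING"
# rows = session.execute(query)   # run only while connected to a cluster
print(query)
```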
# ### For the sake of the demo, I will drop the table.
query = "drop table music_library"
try:
rows = session.execute(query)
except Exception as e:
print(e)
# ### And finally, close the session and cluster connection
session.shutdown()
cluster.shutdown()
01-datamodeling/03-nosql-data-models/lesson-3-demo-4-using-the-where-clause.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.6 64-bit
# name: python3
# ---
# <!-- <font color="#30e4ac"> -->
# <h1>Python function returns</h1>
# <hr>
#
# We use the `return` statement in functions when we want the function to **output a value** but not necessarily display it.
#
# When we return an object in a function, the data stored in the object can be assigned to a variable or used in an operation.
#
#
# +
def add(n):
return n+5
k=int(input("Enter:"))
if add(k)>10:
print("yes")
# -
# You can think of a return value as giving us an output on which we can perform operations
#
# By default, a function returns `None` if no return statement is used
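# A minimal illustration of this default (the function name and argument are just examples): a function with no `return` statement returns `None`:

```python
def greet(name):
    print("Hello,", name)   # prints, but has no return statement

result = greet("Ada")       # "Ada" is an example argument
print(result)               # None
```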
# +
#Prime number detection
def isprime(n):
for i in range(2,n):
if n%i==0:
return False
return True
k=int(input("Enter number:"))
if isprime(k):
print("Prime")
else:
print("Composite")
# -
# **Multiple Returns**
#
# A function can also return multiple values; we separate the values with commas
#
# It comes in the form of a `tuple`
# +
def add(a,b,c):
return a+1,b+2,c+3
print(type(add(2,3,4)))
# -
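# The returned tuple can also be unpacked into separate variables. A small sketch (the function and values here are illustrative, not from the lesson):

```python
def min_max(values):
    # Return two results at once; Python packs them into a tuple
    return min(values), max(values)

lo, hi = min_max([4, 1, 7, 3])   # tuple unpacking
print(lo, hi)                    # 1 7
```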
functions/4)return_statements.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using the Scheduler
#
# ## Basic usage
# +
import numpy as np
from qiskit import IBMQ, QuantumCircuit, transpile, schedule as build_schedule
from qiskit import circuit
from qiskit.pulse import Schedule, Gaussian, SamplePulse, DriveChannel, ControlChannel
from qiskit.test.mock import FakeAlmaden
# +
circ = QuantumCircuit(2, 2)  # two qubits, two classical bits
circ.h(0)
circ.cx(0, 1)
circ.measure([0, 1], [0, 1])
circ.draw()
# -
# TODO
# +
backend = FakeAlmaden()
transpiled_circ = transpile(circ, backend)
schedule = build_schedule(transpiled_circ, backend)
schedule.draw()
# -
# ## Scheduling with custom gate definitions
# +
custom_q0_h_schedule = Schedule(Gaussian(duration=128, amp=0.045, sigma=8)(DriveChannel(0)))
inst_map = backend.defaults().instruction_schedule_map
inst_map.add('h', 0, custom_q0_h_schedule)
schedule = build_schedule(circ, backend, inst_map=inst_map)
# -
# ## Building circuits with non-standard gates
#
# Design a new gate by giving it a name, specific qubits, and a pulse `Schedule` definition.
# +
from qiskit.pulse.schedule import ParameterizedSchedule
random_pulse = lambda x: SamplePulse(np.random.random(max(x, 1)))
def my_perfect_2q_gate_schedule_func(x):
sched = Schedule()
sched += random_pulse(x)(DriveChannel(0))
sched += random_pulse(x)(ControlChannel(0))
sched += random_pulse(x)(DriveChannel(1))
return sched
my_perfect_2q_gate_schedule = ParameterizedSchedule(my_perfect_2q_gate_schedule_func, parameters=('x',))
# -
# Add definition to instruction map
inst_map.add('my_perfect_2q_gate', (0, 1), my_perfect_2q_gate_schedule)
# Build schedule as usual
# +
my_perfect_2q_gate = circuit.Gate('my_perfect_2q_gate', 2, [100])
my_perfect_2q_circ = QuantumCircuit(2, 2)
my_perfect_2q_circ.h(0)
my_perfect_2q_circ.append(my_perfect_2q_gate, [0, 1])
my_perfect_2q_circ.h(0)
my_perfect_2q_circ.measure(0, 0)
my_perfect_2q_circ.draw()
# -
# Transpile with the new gate
my_perfect_2q_circ_transpiled = transpile(my_perfect_2q_circ,
coupling_map=[[0, 1]],
basis_gates=backend.configuration().basis_gates + ['my_perfect_2q_gate'])
my_perfect_2q_circ_transpiled.draw()
my_perfect_2q_pulse_program = build_schedule(my_perfect_2q_circ_transpiled,
backend,
inst_map=inst_map)
my_perfect_2q_pulse_program.draw(plot_range=(0, 2000))
docs/tutorials/advanced/terra/pulse/pulse_scheduler.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cv2
img=cv2.imread('0.jpeg')
# initialization of window
cv2.namedWindow('IMAGE',0)
cv2.imshow('IMAGE',img)
cv2.imwrite('output.jpeg',img)
cv2.waitKey()
cv2.destroyAllWindows()
# +
#access and understand pixel data
import cv2
import numpy as np
img=cv2.imread('0.jpeg')
print(img)
print(type(img))
print(len(img))
print(len(img[0]))
print(len(img[0][0]))
print(img.shape)
print(img.dtype)
print(img[:,:,0]) #red channel
print(img[10][15])
# +
# Data types and structures
import numpy as np
import cv2
black=np.zeros([150,150,1],'uint8')
cv2.imshow('black',black)
print(black[0,0,:])
ones=np.ones([150,150,3],'uint8')
# ones*=(2**8-1)
cv2.imshow('ones',ones)
print(ones[0,0,:])
white=np.ones([150,150,3],'uint16')
white*=(2**16-1)
cv2.imshow('white',white)
print(white[0,0,:])
#
color=ones.copy()
color[:,:]=(255,0,0) #blue
cv2.imshow('blue',color)
print(color[0,0,:])
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
# image types and color channels
import numpy as np
import cv2
img=cv2.imread('0.jpeg',1)
cv2.imshow('image',img)
cv2.moveWindow('image',0,0)
print(img.shape)
width,height,channel=img.shape
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
# pixel manipulation and filtering
import cv2
img=cv2.imread('0.jpeg',1)
gray=cv2.cvtColor(img,cv2.COLOR_RGB2GRAY)
cv2.imshow('KESHAV',gray)
r=img[:,:,0]
g=img[:,:,1]
b=img[:,:,2]
rgba=cv2.merge((r,g,b,g))
cv2.imshow('rgba',rgba)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
# Gussian Blur,dilation and erosion
import numpy as np
import cv2
img=cv2.imread('0.jpeg',1)
cv2.imshow('original',img)
# blur
blur=cv2.GaussianBlur(img,(5,55),0)
cv2.imshow('Gaussian Blur',blur)
kernel=np.ones((5,5),'uint8')
dilate=cv2.dilate(img,kernel,iterations=1)
erode=cv2.erode(img,kernel,iterations=1)
cv2.imshow('dilate',dilate)
cv2.imshow('erode',erode)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
# scale and rotate image
import numpy as np
import cv2
img=cv2.imread('0.jpeg',1)
cv2.imshow('original',img)
# scale
half=cv2.resize(img,(0,0),fx=0.5,fy=0.5,)
cv2.imshow('half',half)
stretch_=cv2.resize(img,(600,600),fx=0.5,fy=0.5)
cv2.imshow('stretch_',stretch_)
stretch_near_=cv2.resize(img,(600,600),fx=0.5,fy=0.5,interpolation = cv2.INTER_NEAREST)
cv2.imshow('stretch_near_',stretch_near_)
# rotate
M=cv2.getRotationMatrix2D((0,0),-30,1)
rotated=cv2.warpAffine(img,M,(img.shape[1],img.shape[0]))
cv2.imshow('rotated',rotated)
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
# video inputs
import numpy as np
import cv2
cap=cv2.VideoCapture(0)
while True:
ret,frame=cap.read()
cv2.imshow('keshav',frame)
if cv2.waitKey(1)&0xff==ord('q'):
break
cap.release()
cv2.destroyAllWindows()
# +
# create custom interface
import numpy as np
import cv2
cap=cv2.VideoCapture(0)
# circle
color=(0,255,0)
point=(200,200)
radius=40
border_width=6
while True:
ret,frame=cap.read()
cv2.circle(frame,point,radius,color,border_width)
cv2.imshow('keshav',frame)
if cv2.waitKey(1)&0xff==ord('q'):
break
cap.release()
cv2.destroyAllWindows()
# -
import numpy as np
import cv2
cap=cv2.VideoCapture(0)
# circle
color=(0,255,0)
point=(200,200)
radius=40
border_width=6
def click(event,x,y,flags,param):
global point,pressed
if event==cv2.EVENT_LBUTTONDOWN:
print('Pressed: ',x,y)
point=(x,y)
cv2.namedWindow('keshav')
cv2.setMouseCallback('keshav',click)
while True:
ret,frame=cap.read()
cv2.circle(frame,point,radius,color,border_width)
cv2.imshow('keshav',frame)
if cv2.waitKey(1)&0xff==ord('q'):
break
cap.release()
cv2.destroyAllWindows()
# +
# Drawing App
import numpy as np
import cv2
# cap=cv2.VideoCapture(0)
keshav=np.ones([500,500,3],'uint8')*255
radius=3
color=(0,255,0)
pressed=False
def click(event,x,y,flags,param):
global keshav,pressed
if event==cv2.EVENT_LBUTTONDOWN:
pressed=True
cv2.circle(keshav,(x,y),radius,color,-1)
elif event==cv2.EVENT_MOUSEMOVE and pressed==True:
cv2.circle(keshav,(x,y),radius,color,-1)
elif event==cv2.EVENT_LBUTTONUP:
pressed=False
cv2.namedWindow('keshav')
cv2.setMouseCallback('keshav',click)
while True:
cv2.imshow('keshav',keshav)
if cv2.waitKey(1)&0xff==ord('q'):
break
elif cv2.waitKey(1)&0xff==ord('b'):
color=(255,0,0)
elif cv2.waitKey(1)&0xff==ord('g'):
color=(0,255,0)
elif cv2.waitKey(1)&0xff==ord('r'):
color=(0,0,255)
cv2.destroyAllWindows()
# +
# chapter 3
# segmentation
# simple threading
import numpy as np
import cv2
bw=cv2.imread('0.jpeg',0)
cv2.imshow('Original',bw)
# method 1: explicit per-pixel loop
height,width=bw.shape[0:2]
binary=np.zeros([height,width,3],'uint8')
thresh=85
for row in range(0,height):
for col in range(0,width):
if bw[row][col]>thresh:
binary[row][col]=255
cv2.imshow('binary',binary)
# method 2: cv2.threshold
ret,thresh=cv2.threshold(bw,thresh,255,cv2.THRESH_BINARY)
cv2.imshow('thresh',thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
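# The per-pixel loop in method 1 can also be vectorized with NumPy. A sketch on a small synthetic array (the pixel values are made up), equivalent to `cv2.threshold` with `THRESH_BINARY`:

```python
import numpy as np

# Synthetic 4x4 "grayscale image" with made-up values and the same threshold
bw = np.array([[ 10,  90, 200,  40],
               [ 86,  85,  84, 255],
               [  0, 120,  60,  86],
               [ 90,  30, 170,   5]], dtype=np.uint8)
thresh = 85

# One vectorized comparison replaces the nested row/col loop
binary = np.where(bw > thresh, 255, 0).astype(np.uint8)
print(binary)
```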
# +
# adaptive thresholding ---> https://www.geeksforgeeks.org/python-thresholding-techniques-using-opencv-set-2-adaptive-thresholding
import cv2
img=cv2.imread('sudoku.png',0)
cv2.imshow('Original',img)
# simple thresholding ---> problem ---> uneven lighting
ret,frame=cv2.threshold(img,85,255,cv2.THRESH_BINARY)
cv2.imshow('Binary',frame)
# adaptive thresholding ---> handles uneven lighting
adap_thresh=cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,115,1)
cv2.imshow('adaptive thresh',adap_thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
# otsu thresholding ---> https://www.geeksforgeeks.org/python-thresholding-techniques-using-opencv-set-3-otsu-thresholding/
'We use the Traditional cv2.threshold function and use cv2.THRESH_OTSU as an extra flag.'
import cv2
img=cv2.imread('sudoku.png',0)
cv2.imshow('Original',img)
# simple thresholding ---> problem ---> uneven lighting
ret,frame=cv2.threshold(img,85,255,cv2.THRESH_BINARY)
cv2.imshow('Binary',frame)
# otsu
ret,frame=cv2.threshold(img,85,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
cv2.imshow('Otsu',frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
# skin detection --->
import cv2
import numpy as np
img=cv2.imread('faces.jpeg',1)
hsv=cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
cv2.imshow('HSV',hsv)
h=hsv[:,:,0]
s=hsv[:,:,1]
v=hsv[:,:,2]
hsv_split=np.concatenate((h,s,v),axis=1)
cv2.imshow('HSV_SPLIT',hsv_split)
# detect
ret,min_set=cv2.threshold(s,40,255,cv2.THRESH_BINARY)
ret,max_set=cv2.threshold(h,15,255,cv2.THRESH_BINARY)
final=cv2.bitwise_and(min_set,max_set)
cv2.imshow('Final',final)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
# object detection
import cv2
img=cv2.imread('detect_blob.png')
cv2.imshow('Original colored',img)
gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
cv2.imshow('Original Gray',gray)
thresh=cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,115,1)
cv2.imshow('Thresh',thresh)
_,contours,hierarchy=cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
img2=img.copy()
index=-1
thickness=5
color=(255,0,255)
cv2.drawContours(img2,contours,index,color,thickness)
cv2.imshow('Contours',img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
# Area, Perimeter, Center, Curvature
import numpy as np
import cv2
img=cv2.imread('detect_blob.png')
gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
thresh=cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,115,1)
_,contours,hierarchy=cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
img2=img.copy()
color=(123,255,61)
index=-1
tk=2
frame=np.zeros([img.shape[0],img.shape[1],3],'uint8')
i=1
for c in contours:
cv2.drawContours(frame,[c],index,color,tk)
area=cv2.contourArea(c)
arclength=cv2.arcLength(c,True)
print('{} Area: {}, perimeter: {}'.format(i,area,arclength))
i+=1
    m=cv2.moments(c)
    if m['m00']==0:  # degenerate (zero-area) contour; avoid division by zero
        continue
    cx=int(m['m10']/m['m00'])
    cy=int(m['m01']/m['m00'])
    cv2.circle(frame,(cx,cy),1,(0,0,255),-1)
cv2.imshow('Original',frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
# Canny edge detection
import cv2
import numpy
img=cv2.imread('tomatoes.jpg')
HSV=cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
ret,thresh=cv2.threshold(HSV[:,:,0],25,255,cv2.THRESH_BINARY_INV)
cv2.imshow('thresh',thresh)
edges=cv2.Canny(img,100,70)
cv2.imshow('Edges',edges)
final=cv2.bitwise_or(thresh,edges)
cv2.imshow('Final',final)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
# CH-4 --> Face Detection
# Template Matching
import numpy as np
import cv2
template=cv2.imread('template.jpg')
frame=cv2.imread('players.jpg')
cv2.imshow('Template',template)
cv2.imshow('Frame',frame)
result=cv2.matchTemplate(frame,template,cv2.TM_CCOEFF_NORMED)
cv2.imshow('result1',result)
min_val,max_val,min_loc,max_loc=cv2.minMaxLoc(result)
cv2.circle(result,max_loc,15,255,-1)
cv2.imshow('result2',result)
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
max_loc
min_loc
min_val
max_val
# +
# Face detection
import numpy as np
import cv2
img=cv2.imread('faces.jpeg')
# NB: img.shape is (height, width, channels) while cv2.resize expects dsize=(width, height)
rimg=cv2.resize(img,(img.shape[0]//3,img.shape[1]//5),interpolation=cv2.INTER_AREA)
gray=cv2.cvtColor(rimg,cv2.COLOR_BGR2GRAY)
path='haarcascade_frontalface_default.xml'
face_cascade=cv2.CascadeClassifier(path)
faces=face_cascade.detectMultiScale(gray,scaleFactor=1.25,minSize=(40,40))
for (x,y,w,h) in faces:
cv2.rectangle(rimg,(x,y),(x+w,y+h),(0,255,0),1)
cv2.imshow('Img',rimg)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
# eye detection
import numpy as np
import cv2
img=cv2.imread('faces.jpeg')
rimg=cv2.resize(img,(img.shape[0]//3,img.shape[1]//5),interpolation=cv2.INTER_AREA)
gray=cv2.cvtColor(rimg,cv2.COLOR_BGR2GRAY)
path='haarcascade_eye.xml'
eye_cascade=cv2.CascadeClassifier(path)
eyes=eye_cascade.detectMultiScale(gray,scaleFactor=1.04,minNeighbors=20,minSize=(10,10))
for (x,y,w,h) in eyes:
cx=(x+x+w)//2
cy=(y+y+h)//2
cv2.circle(rimg,(cx,cy),w//2,(255,0,0),2)
cv2.imshow('Img',rimg)
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
Linkedin/Opencv/Untitled.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" executionInfo={"elapsed": 12807, "status": "ok", "timestamp": 1552773429425, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15717585103977390251"}, "user_tz": 0} id="ZZG4BqkENEyd" outputId="0b8bd808-eba5-4de2-9c01-64058b721fba"
# Taken from
# https://stackoverflow.com/questions/48750199/google-colaboratory-misleading-information-about-its-gpu-only-5-ram-available
# memory footprint support libraries/code
# !ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
# !pip install gputil
# !pip install psutil
# !pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# Colab only provides one GPU and it is not always guaranteed
gpu = GPUs[0]
def printm():
process = psutil.Process(os.getpid())
print("RAM Free: " + humanize.naturalsize( psutil.virtual_memory().available ), " | Proc size: " + humanize.naturalsize( process.memory_info().rss))
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" executionInfo={"elapsed": 2475, "status": "ok", "timestamp": 1552773429426, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15717585103977390251"}, "user_tz": 0} id="BBvIvBoyg68g" outputId="64e90310-ddee-4cb4-a672-41a831cc0503"
printm()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 4124, "status": "ok", "timestamp": 1552773455053, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15717585103977390251"}, "user_tz": 0} id="yV1m-9ZGuKGj" outputId="32fde251-f5a5-4b68-9b12-9e42974bffae"
# Clone repo
# !git clone https://github.com/MatchLab-Imperial/keras_triplet_descriptor
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 978, "status": "ok", "timestamp": 1552773459405, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15717585103977390251"}, "user_tz": 0} id="pyZSqhZ5LACT" outputId="ce0a5f2e-73e9-43f0-f80b-41950680731a"
# Change directory
# %cd keras_triplet_descriptor
# + colab={"base_uri": "https://localhost:8080/", "height": 462} colab_type="code" executionInfo={"elapsed": 190209, "status": "ok", "timestamp": 1552773652267, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15717585103977390251"}, "user_tz": 0} id="307CBCL-FjX4" outputId="9b10a27f-fe9c-4492-a089-784596f08ac0"
# Download data
# !wget -O hpatches_data.zip https://imperialcollegelondon.box.com/shared/static/ah40eq7cxpwq4a6l4f62efzdyt8rm3ha.zip
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" executionInfo={"elapsed": 249292, "status": "ok", "timestamp": 1552773724185, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15717585103977390251"}, "user_tz": 0} id="36mBTFvPCxY9" outputId="b0b44eeb-fe9c-4c11-e2e1-8a67725abd68"
# Extract data
# !unzip -q ./hpatches_data.zip
# !rm ./hpatches_data.zip
# + [markdown] colab_type="text" id="Rjyr96hR_4wS"
# ## Importing Necessary Modules
#
# We now import the modules we will use in this baseline code.
# + colab={} colab_type="code" id="o0KYfe-at9KN"
import sys
import json
import os
import glob
import time
import tensorflow as tf
import numpy as np
import cv2
import random
import keras
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Activation, Flatten, Input, Lambda, Reshape
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization, Conv2DTranspose
from keras.layers import Input, UpSampling2D, concatenate
from read_data import HPatches, DataGeneratorDesc, hpatches_sequence_folder, DenoiseHPatches, tps
from utils import generate_desc_csv, plot_denoise, plot_triplet
# + colab={} colab_type="code" id="NXL31ez-AT5h"
random.seed(1234)
np.random.seed(1234)
tf.set_random_seed(1234)
# + colab={} colab_type="code" id="ABKDHB9RApZk"
hpatches_dir = './hpatches'
splits_path = 'splits.json'
splits_json = json.load(open(splits_path, 'rb'))
split = splits_json['a']
train_fnames = split['train']
test_fnames = split['test']
seqs = glob.glob(hpatches_dir+'/*')
seqs = [os.path.abspath(p) for p in seqs]
# -
# use os.path.basename instead of split('\\') so this also works on Colab/Linux paths
seqs_train = list(filter(lambda x: os.path.basename(x) in train_fnames, seqs))
seqs_test = list(filter(lambda x: os.path.basename(x) in split['test'], seqs))
len(seqs)
len(seqs_train)
len(seqs_test)
# + [markdown] colab_type="text" id="qeWik0vMEtuC"
# ## Models and loss
# + colab={} colab_type="code" id="W6QbkHnbuIUD"
def get_denoise_model(shape, do = 0, activate = 'selu'):
inputs = Input(shape)
conv1 = Dropout(do)((Conv2D(32, (3, 3), activation = activate, padding='same')(inputs)))
conv1 = Dropout(do)((Conv2D(32, (3, 3), activation = activate, padding='same')(conv1)))
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Dropout(do)((Conv2D(64, (3, 3), activation = activate, padding='same')(pool1)))
conv2 = Dropout(do)((Conv2D(64, (3, 3), activation = activate, padding='same')(conv2)))
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Dropout(do)((Conv2D(128, (3, 3), activation = activate, padding='same')(pool2)))
conv3 = Dropout(do)((Conv2D(128, (3, 3), activation = activate, padding='same')(conv3)))
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Dropout(do)((Conv2D(256, (3, 3), activation = activate, padding='same')(pool3)))
conv4 = Dropout(do)((Conv2D(256, (3, 3), activation = activate , padding='same')(conv4)))
pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
conv5 = Dropout(do)((Conv2D(512, (3, 3), activation = activate, padding='same')(pool4)))
conv5 = (Conv2D(512, (3, 3), activation = activate, padding='same'))(UpSampling2D(size = (2,2))(conv5))
up6 = concatenate([conv5, conv4], axis=3)
conv6 = Dropout(do)((Conv2D(256, (3, 3), activation = activate, padding='same')(up6)))
conv6 = (Conv2D(256, (3, 3), activation = activate, padding='same'))(UpSampling2D(size = (2,2))(conv6))
up7 = concatenate([conv6, conv3], axis=3)
conv7 = Dropout(do)((Conv2D(128, (3, 3), activation = activate, padding='same')(up7)))
conv7 = (Conv2D(128, (3, 3), activation = activate, padding='same'))(UpSampling2D(size = (2,2))(conv7))
up8 = concatenate([conv7, conv2], axis=3)
conv8 = Dropout(do)((Conv2D(64, (3, 3), activation = activate, padding='same')(up8)))
conv8 = (Conv2D(64, (3, 3), activation = activate, padding='same'))(UpSampling2D(size = (2,2))(conv8))
up9 = concatenate([conv8, conv1], axis=3)
conv9 = Dropout(do)((Conv2D(32, (3, 3), activation = activate, padding='same')(up9)))
conv9 = Dropout(do)((Conv2D(32, (3, 3), activation = activate, padding='same')(conv9)))
conv10 = Dropout(do)(Conv2D(1, (1, 1))(conv9))
model = Model(inputs=[inputs], outputs=[conv10])
return model
def get_descriptor_model(shape, activate= 'elu'):
'''Architecture copies HardNet architecture'''
init_weights = keras.initializers.he_normal()
descriptor_model = Sequential()
descriptor_model.add(Conv2D(32, 3, padding='same', input_shape=shape, use_bias = True, kernel_initializer=init_weights))
descriptor_model.add(BatchNormalization(axis = -1))
descriptor_model.add(Activation(activate))
descriptor_model.add(Conv2D(32, 3, padding='same', use_bias = True, kernel_initializer=init_weights))
descriptor_model.add(BatchNormalization(axis = -1))
descriptor_model.add(Activation(activate))
descriptor_model.add(Conv2D(64, 3, padding='same', strides=2, use_bias = True, kernel_initializer=init_weights))
descriptor_model.add(BatchNormalization(axis = -1))
descriptor_model.add(Activation(activate))
descriptor_model.add(Conv2D(64, 3, padding='same', use_bias = True, kernel_initializer=init_weights))
descriptor_model.add(BatchNormalization(axis = -1))
descriptor_model.add(Activation(activate))
descriptor_model.add(Conv2D(128, 3, padding='same', strides=2, use_bias = True, kernel_initializer=init_weights))
descriptor_model.add(BatchNormalization(axis = -1))
descriptor_model.add(Activation(activate))
descriptor_model.add(Conv2D(128, 3, padding='same', use_bias = True, kernel_initializer=init_weights))
descriptor_model.add(BatchNormalization(axis = -1))
descriptor_model.add(Activation(activate))
descriptor_model.add(Dropout(0.5))
descriptor_model.add(Conv2D(128, 8, padding='valid', use_bias = True, kernel_initializer=init_weights))
# Final descriptor reshape
descriptor_model.add(Reshape((128,)))
return descriptor_model
def triplet_loss(x):
output_dim = 128
a, p, n = x
_alpha = 1.0
positive_distance = K.mean(K.square(a - p), axis=-1)
negative_distance = K.mean(K.square(a - n), axis=-1)
return K.expand_dims(K.maximum(0.0, positive_distance - negative_distance + _alpha), axis = 1)
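The Keras loss above is easiest to sanity-check outside the graph; a NumPy re-statement of the same formula (margin alpha = 1.0, as in `triplet_loss`):

```python
import numpy as np

def triplet_loss_np(a, p, n, alpha=1.0):
    # Mean squared distance to the positive minus distance to the negative,
    # clamped at zero once the negative is beyond the margin.
    pos = np.mean((a - p) ** 2, axis=-1)
    neg = np.mean((a - n) ** 2, axis=-1)
    return np.maximum(0.0, pos - neg + alpha)

a = np.zeros((1, 128))
p = np.zeros((1, 128))           # identical to the anchor -> distance 0
n = np.full((1, 128), 2.0)       # far from the anchor -> distance 4
print(triplet_loss_np(a, p, n))  # [0.]  (negative already beyond the margin)
```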
# + [markdown] colab_type="text" id="RlS5zcV7EJgp"
# ## Denoising Image Patches
#
# + colab={"base_uri": "https://localhost:8080/", "height": 88} colab_type="code" executionInfo={"elapsed": 1293, "status": "ok", "timestamp": 1552773922597, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "15717585103977390251"}, "user_tz": 0} id="-eUSba93Dttj" outputId="9888dc14-c015-469b-f13c-d1576f667cbb"
from keras.layers import LeakyReLU
shape = (32, 32, 1)
denoise_model = keras.models.load_model('./denoise_modified.h5')
# + [markdown] colab_type="text" id="SyABaCvkEPDR"
# ## Training a Descriptor Network
# In the last section we trained a model that, given a noisy patch, outputs a denoised version of it. We hope that by doing so, we will improve the performance of the second part, which is training a network that outputs the descriptor. As we mentioned, a descriptor is a numerical vector that represents the small images we have. The dataset consists of a large number of small images, which are cropped patches from other larger images. Hence, they represent some local part of a scene. That is why there are no objects represented, only corners or textures. Each of these patches is related to a subset of other patches of the dataset by some kind of geometric transformation (e.g. rotation). For a given patch, we want the network to output a vector that is close to the vectors of the patches that represent the same local part of a scene, while being far from patches that do not represent that local part of a scene.
#
# To do so, we will build a convolutional neural network that takes the input of $32\times32$ and outputs a descriptor of size $128$. For the loss, we use the triplet loss, which takes an anchor patch, a negative patch and a positive patch. The idea is to train the network so the descriptors from the anchor and positive patch have a low distance between them, and the negative and anchor patch have a large distance between them.
#
# In this cell we generate a triplet network, which is a network formed by three copies of the same network. That means that the descriptor model will compute the descriptor for the input `'a'` (anchor), the same descriptor model (with the same weights) will compute the descriptor for the input `'p'` (positive), and again the same model will compute the descriptor for the input `'n'` (negative).
#
# **Updated explanation**: Due to the way Keras handles the compile method, it needs a loss as an argument in that compile method. However, our loss is computed in the lambda layer, so we want to minimize the output of that layer. As we want to minimize the output of the Lambda function (in this case the triplet loss), we output as the label in the training_generator a vector of zeros and we compute the mean absolute error of the triplet loss and this vector of zeros. To give you an intuition, what we aim to minimize is
# $$ |\text{triplet_loss} - 0| = |\text{triplet_loss}| = \text{triplet_loss} $$
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 88} colab_type="code" id="DVmDZIRTHPDa" outputId="94bda2ad-e40e-4c44-e74f-be30d956eed6"
from keras.layers import Lambda
shape = (32, 32, 1)
xa = Input(shape=shape, name='a')
xp = Input(shape=shape, name='p')
xn = Input(shape=shape, name='n')
descriptor_model = get_descriptor_model(shape)
ea = descriptor_model(xa)
ep = descriptor_model(xp)
en = descriptor_model(xn)
loss = Lambda(triplet_loss)([ea, ep, en])
adam_amsgrad = keras.optimizers.Adam(lr=1e-4, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=True)
sgd = keras.optimizers.SGD(lr=0.00001, momentum=0.9, nesterov=True)
adadelta = keras.optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=None, decay=0.0)
nadam = keras.optimizers.Nadam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=None, schedule_decay=0.004)
adam = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
descriptor_model_trip_adam_amsgrad = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_sgd = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_adadelta = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_nadam = Model(inputs=[xa, xp, xn], outputs=loss)
descriptor_model_trip_adam = Model(inputs=[xa, xp, xn], outputs=loss)
# Pair each model with the optimizer its name promises. NB: all five Model
# objects wrap the same descriptor layers, so the runs below share (and keep
# updating) one set of weights; rebuild the network per optimizer for a
# truly independent comparison.
descriptor_model_trip_adam_amsgrad.compile(loss='mean_absolute_error', optimizer=adam_amsgrad)
descriptor_model_trip_sgd.compile(loss='mean_absolute_error', optimizer=sgd)
descriptor_model_trip_adadelta.compile(loss='mean_absolute_error', optimizer=adadelta)
descriptor_model_trip_nadam.compile(loss='mean_absolute_error', optimizer=nadam)
descriptor_model_trip_adam.compile(loss='mean_absolute_error', optimizer=adam)
# + colab={"base_uri": "https://localhost:8080/", "height": 187} colab_type="code" id="YIR1cH4fDwKj" outputId="b17c0b36-e90e-4df0-e969-0ab13136786d"
### Descriptor loading and training
# Loading images
hPatches = HPatches(train_fnames=train_fnames, test_fnames=test_fnames,
denoise_model=denoise_model, use_clean=False)
# Creating training generator
training_generator = DataGeneratorDesc(*hPatches.read_image_file(hpatches_dir, train=1), num_triplets=100000)
# Creating validation generator
val_generator = DataGeneratorDesc(*hPatches.read_image_file(hpatches_dir, train=0), num_triplets=10000)
# + colab={"base_uri": "https://localhost:8080/", "height": 183} colab_type="code" id="3RQmOMU92csu" outputId="648c64db-a797-4bb6-c7de-5eb45d27da25"
plot_triplet(training_generator)
# + colab={"base_uri": "https://localhost:8080/", "height": 122} colab_type="code" id="QPyc8as42WTQ" outputId="69ea09ee-8198-42d5-a654-2bd3e339e5b2"
#epochs = 1
### As with the denoising model, we could loop over epochs and save the
### weights externally after each one in case Colab stops. Note the epoch
### counter does not reset, so e.g. calling fit(epochs=1) five times
### behaves like fit(epochs=5).
### If you have a model saved from a previous training session
### Load it in the next line
# descriptor_model_trip.set_weights(keras.models.load_model('./descriptor.h5').get_weights())
# descriptor_model_trip.optimizer = keras.models.load_model('./descriptor.h5').optimizer
#for e in range(epochs):
descriptor_history_adam_amsgrad = descriptor_model_trip_adam_amsgrad.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
descriptor_history_sgd = descriptor_model_trip_sgd.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
descriptor_history_adadelta = descriptor_model_trip_adadelta.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
descriptor_history_nadam = descriptor_model_trip_nadam.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
descriptor_history_adam = descriptor_model_trip_adam.fit_generator(generator=training_generator, epochs=5, verbose=1, validation_data=val_generator)
# + colab={} colab_type="code" id="XFA_8uN4Eb3B"
import matplotlib.pyplot as plt
# +
def plot_history(history, history2, history3, history4, history5, metric = None):
# Plots the loss history of training and validation (if existing)
# and a given metric
if metric != None:
fig, axes = plt.subplots(2,1, figsize=(8, 10))
#axes[0].plot(history.history[metric])
#axes[0].plot(history2.history[metric])
#axes[0].plot(history3.history[metric])
#axes[0].plot(history4.history[metric])
#axes[0].plot(history5.history[metric])
#axes[0].plot(history6.history[metric])
        # per-activation validation curves disabled for this comparison
        # (a bare try: around only comments is a syntax error, so the
        # try/except wrapper is dropped here):
        #axes[0].plot(history.history['val_'+metric])
        #axes[0].plot(history2.history['val2_'+metric])
        #axes[0].plot(history3.history['val3_'+metric])
        #axes[0].legend(['ReLU', 'ELU', 'SELU', 'Linear', 'LeakyReLU'], loc='upper right')
#axes[0].set_title('MAE Vs. No of Epochs for Various Activation Functions')
#axes[0].set_ylabel('Mean Absolute Error')
#axes[0].set_xlabel('Epoch')
fig.subplots_adjust(hspace=0.5)
axes[1].plot(history.history['loss'])
axes[1].plot(history2.history['loss'])
axes[1].plot(history3.history['loss'])
axes[1].plot(history4.history['loss'])
axes[1].plot(history5.history['loss'])
try:
#axes[1].plot(history.history['val_loss'])
axes[1].legend(['Adam', 'Adam_Amsgrad', 'Adadelta', 'SGD', 'Nadam'], loc='upper right')
except:
pass
axes[1].set_title('MAE Vs. No of Epochs for Various Optimizers')
axes[1].set_ylabel('Mean Absolute Error')
axes[1].set_xlabel('Epoch')
else:
plt.plot(history.history['loss'])
try:
plt.plot(history.history['val_loss'])
plt.legend(['Train', 'Val'])
except:
pass
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plot_history(descriptor_history_adam, descriptor_history_adam_amsgrad, descriptor_history_adadelta, descriptor_history_sgd, descriptor_history_nadam, 'mean_absolute_error')
# +
def plot_val_history(history, history2, history3, history4, history5, metric = None):
# Plots the loss history of training and validation (if existing)
# and a given metric
if metric != None:
fig, axes = plt.subplots(2,1, figsize=(8, 10))
#axes[0].plot(history.history[metric])
#axes[0].plot(history2.history[metric])
#axes[0].plot(history3.history[metric])
        # per-activation validation curves disabled for this comparison
        # (a bare try: around only comments is a syntax error, so the
        # try/except wrapper is dropped here):
        #axes[0].plot(history.history['val_'+metric])
        #axes[0].plot(history2.history['val_'+metric])
        #axes[0].plot(history3.history['val_'+metric])
        #axes[0].plot(history4.history['val_'+metric])
        #axes[0].plot(history5.history['val_'+metric])
        #axes[0].plot(history6.history['val_'+metric])
        #axes[0].legend(['ReLU', 'ELU', 'SELU', 'Linear', 'LeakyReLU'], loc='upper right')
#axes[0].set_title('Validation Loss Vs. No of Epochs for Various Activation Functions')
#axes[0].set_ylabel('Validation Loss')
#axes[0].set_xlabel('Epoch')
fig.subplots_adjust(hspace=0.5)
#axes[1].plot(history.history['loss'])
#axes[1].plot(history2.history['loss'])
#axes[1].plot(history3.history['loss'])
try:
axes[1].plot(history.history['val_loss'])
axes[1].plot(history2.history['val_loss'])
axes[1].plot(history3.history['val_loss'])
axes[1].plot(history4.history['val_loss'])
axes[1].plot(history5.history['val_loss'])
axes[1].legend(['Adam', 'Adam_Amsgrad', 'Adadelta', 'SGD', 'Nadam'], loc='upper right')
except:
pass
axes[1].set_title('Validation Loss Vs. No of Epochs for Various Optimizers')
axes[1].set_ylabel('Validation Loss')
axes[1].set_xlabel('Epoch')
else:
plt.plot(history.history['loss'])
try:
plt.plot(history.history['val_loss'])
plt.legend(['Train', 'Val'])
except:
pass
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plot_val_history(descriptor_history_adam, descriptor_history_adam_amsgrad, descriptor_history_adadelta, descriptor_history_sgd, descriptor_history_nadam, 'mean_absolute_error')
# -
Improved Approach Hyperparameter Tests/OL2-Net_optimizers_hyp.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="sK1RNVHrYnzO"
import tensorflow as tf
import os
import random
import numpy as np
from tqdm import tqdm
import nibabel as nib
from skimage.io import imread, imshow
from skimage.transform import resize
import matplotlib.pyplot as plt
import keras
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense,Dropout,Activation,Flatten,BatchNormalization
from keras.layers import Conv2D,MaxPooling2D
from tensorflow.keras import layers
# + id="yvBLl9vy6Y-a"
num_classes = 2
img_rows,img_cols = 200,200
batch_size = 128
# + id="6eKpcWCE6ebf"
train_data_dir = '/content/drive/MyDrive/train (2)'
validation_data_dir = '/content/drive/MyDrive/val (1)'
test_data_dir = '/content/drive/MyDrive/test (2)'
# + id="gvNi2FIQ6n12"
'''train_datagen = ImageDataGenerator(
horizontal_flip=True,
fill_mode='nearest')'''
train_datagen = ImageDataGenerator(#rotation_range = 180,
validation_split = 0.2, horizontal_flip=True,
)
validation_datagen = ImageDataGenerator(validation_split = 0.2)
# + colab={"base_uri": "https://localhost:8080/"} id="3Zo7uIMO6q-5" outputId="fcdb3d08-be46-47a3-9d00-f2bd017d5489"
train_generator = train_datagen.flow_from_directory(directory = train_data_dir,
target_size = (img_rows,img_cols),
batch_size = 128,
class_mode = "binary",
subset = "training"
)
# + colab={"base_uri": "https://localhost:8080/"} id="wKCTYKmX6yb8" outputId="633b149d-8684-4a12-dfb3-b84f6f420563"
# NB: this draws the 'validation' subset from test_data_dir, not the validation_data_dir defined above
validation_generator = validation_datagen.flow_from_directory( directory = test_data_dir,
target_size = (img_rows,img_cols),
batch_size = 128,
class_mode = "binary",
subset = "validation"
)
# + colab={"base_uri": "https://localhost:8080/", "height": 287} id="AX7D2qNpCe9B" outputId="89834bce-a7aa-4d70-dded-565cd13da947"
import cv2
img=cv2.imread('/content/drive/MyDrive/train (2)/NORMAL/IM-0119-0001.jpeg')
plt.imshow(img)
print(img.shape)# all the images have different shapes so, I have to perform a resize to fit the images
# and to allow the images to be read in my network
# + colab={"base_uri": "https://localhost:8080/", "height": 321} id="uLA93sEbC-hV" outputId="e3fa542e-48d6-4793-bf3f-e558bb619908"
import cv2,pathlib
def resize_fn(fp: str) -> np.ndarray:
    """ Resize an image to 200x200
    Args:
        fp (str): Path argument to image file
    Returns:
        image (np.ndarray): Resized image
    """
    _scale = lambda dim, s: int(dim * s / 100)
    im: np.ndarray = cv2.imread(fp)
    # NB: im.shape is (height, width, channels), so the names below are
    # swapped; since both sides are scaled to 200 the output is still 200x200.
    width, height, channels = im.shape
    scale_1=(200*100)/width
    scale_2=(200*100)/height
    print("{},{},{}".format(width,height,channels))
    new_width: int = _scale(width, scale_1)
    new_height: int = _scale(height, scale_2)
    new_dim: tuple = (new_width, new_height)
    return cv2.resize(src=im, dsize=new_dim, interpolation=cv2.INTER_LINEAR)
resized = resize_fn("/content/drive/MyDrive/train (2)/NORMAL/IM-0119-0001.jpeg")
print(resized.shape)
plt.imshow(resized)
# + id="GY2NtJ1uxBL0"
'''from tqdm import tqdm
def load_normal(norm_path):
norm_files = np.array(os.listdir(norm_path))
norm_labels = np.array(['normal']*len(norm_files))
print(norm_files)
norm_images = []
for image in tqdm(norm_files):
path = pathlib.Path(norm_path+image)
full_path = path.absolute()
my_path = full_path.as_posix()
img=cv2.imread(str(my_path))
plt.imshow(img)
image = resize_fn(str(my_path))
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
norm_images.append(image)
norm_images = np.array(norm_images)
return norm_images, norm_labels
def load_pneumonia(pneu_path):
pneu_files = np.array(os.listdir(pneu_path))
pneu_labels = np.array(['pneumonia']*len(pneu_files))
pneu_images = []
for image in tqdm(pneu_files):
image = resize_fn(norm_path + image)
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
pneu_images.append(image)
pneu_images = np.array(pneu_images)
return pneu_images, pneu_labels'''
# + id="7PQmq8NRx8BF"
'''norm_images, norm_labels = load_normal('/content/drive/MyDrive/train (2)/NORMAL')
pneu_images, pneu_labels = load_pneumonia('/content/drive/MyDrive/train (2)/PNEUMONIA')'''
# + id="6BaLIlszawOj"
'''model = Sequential()
model.add(Conv2D(32 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu' , input_shape = (150,150,1)))
model.add(BatchNormalization())
model.add(MaxPool2D((2,2) , strides = 2 , padding = 'same'))
model.add(Conv2D(64 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu'))
model.add(Dropout(0.1))
model.add(BatchNormalization())
model.add(MaxPool2D((2,2) , strides = 2 , padding = 'same'))
model.add(Conv2D(64 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu'))
model.add(BatchNormalization())
model.add(MaxPool2D((2,2) , strides = 2 , padding = 'same'))
model.add(Conv2D(128 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu'))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(MaxPool2D((2,2) , strides = 2 , padding = 'same'))
model.add(Conv2D(256 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu'))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(MaxPool2D((2,2) , strides = 2 , padding = 'same'))
model.add(Flatten())
model.add(Dense(units = 128 , activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(units = 1 , activation = 'sigmoid'))
model.compile(optimizer = "rmsprop" , loss = 'binary_crossentropy' , metrics = ['accuracy'])
model.summary()'''
# + id="oebEKbHtLDP2"
from keras import regularizers
inputs = tf.keras.layers.Input((200,200,3))
s = tf.keras.layers.Lambda(lambda x: x/ 255)(inputs)
c1=tf.keras.layers.Conv2D(16, (3, 3),activation='relu')(s)
p1 = tf.keras.layers.MaxPooling2D(2, 2)(c1)
c2=tf.keras.layers.Conv2D(32, (3, 3),activation='relu')(p1)
p2 = tf.keras.layers.MaxPooling2D(2,2)(c2)
c3=tf.keras.layers.Conv2D(64, (3, 3),activation='relu')(p2)
p3 = tf.keras.layers.MaxPooling2D(2,2)(c3)
c4=tf.keras.layers.Conv2D(64, (3, 3), activation='relu')(p3)
p4 = tf.keras.layers.MaxPooling2D(2,2)(c4)
c5=tf.keras.layers.Conv2D(128, (3, 3), activation='relu')(p4)
p5 = tf.keras.layers.MaxPooling2D(2,2)(c5)
# NB: the 128-filter block above (c5/p5) is never used -- the head below is
# built from p4, and this line rebinds c5 to the flattened p4 features.
c5=tf.keras.layers.Flatten()(p4)
c5=tf.keras.layers.Dense(512,activation='relu')(c5)
outputs = tf.keras.layers.Dense(1,activation='sigmoid')(c5)
model = tf.keras.Model(inputs=[inputs], outputs=[outputs])
# + colab={"base_uri": "https://localhost:8080/"} id="Kx0z249ndxAW" outputId="41aeb7b0-efc7-4b0a-a2b5-45f847c61718"
from tensorflow.keras.optimizers import RMSprop,Adam
model.compile(optimizer=Adam(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="Yb2q_R527REW" outputId="e12ac3c1-1973-4db4-a046-4b451e49d3ef"
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
checkpoint = ModelCheckpoint('Pneumonia_classification.h5',
monitor='val_loss',
mode='min',
save_best_only=True,
verbose=1)
callbacks = [checkpoint]
epochs=10
history = model.fit(x=train_generator,epochs = epochs,validation_data = validation_generator,callbacks=callbacks)
# + id="zHxvlSClkN6a"
test_datagen = ImageDataGenerator()
# + colab={"base_uri": "https://localhost:8080/"} id="EbPazOgOHcZ0" outputId="bb9a8909-b185-4312-e3e1-9cf50327ad48"
test_generator = test_datagen.flow_from_directory( directory = test_data_dir,
target_size = (img_rows,img_cols),
batch_size = 64,
class_mode = "binary",
)
# + id="_GgDwy57Hm2R" colab={"base_uri": "https://localhost:8080/", "height": 321} outputId="81a27019-a950-4bc0-cef6-b9956fede44a"
img=cv2.imread('/content/drive/MyDrive/test (2)/PNEUMONIA (1)/person112_bacteria_539.jpeg')
print(img.shape)
resized = resize_fn('/content/drive/MyDrive/test (2)/PNEUMONIA (1)/person112_bacteria_539.jpeg')
print(resized.shape)
plt.imshow(resized)
image = resized.reshape((200,200,3))
# + colab={"base_uri": "https://localhost:8080/"} id="1_vaNWysXpKb" outputId="8d0bcd76-2c4e-4edc-e7e0-fd0b14d6b619"
X = np.zeros((1,200,200, 3), dtype=np.float32)
X[0]=image
model.predict(X)
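`model.predict` returns the raw sigmoid output in [0, 1], not a class name. Since `flow_from_directory` assigns class indices alphabetically, 0 should map to NORMAL and 1 to PNEUMONIA here (verify with `train_generator.class_indices`); a minimal decoding helper under that assumption, with a hypothetical 0.5 threshold:

```python
def decode_prediction(prob: float, threshold: float = 0.5) -> str:
    # prob is the single sigmoid activation; >= threshold means the
    # positive (index 1) class, assumed here to be PNEUMONIA.
    return "PNEUMONIA" if prob >= threshold else "NORMAL"

print(decode_prediction(0.92))  # PNEUMONIA
print(decode_prediction(0.08))  # NORMAL
```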
# + colab={"base_uri": "https://localhost:8080/", "height": 321} id="IiQ12ekOX5gm" outputId="824349e9-cc3b-42ab-fcfb-a520d26c2782"
img=cv2.imread('/content/drive/MyDrive/test (2)/NORMAL (1)/IM-0009-0001 (1).jpeg')
print(img.shape)
resized = resize_fn('/content/drive/MyDrive/test (2)/NORMAL (1)/IM-0009-0001 (1).jpeg')
print(resized.shape)
plt.imshow(resized)
image = resized.reshape((200,200,3))
# + colab={"base_uri": "https://localhost:8080/"} id="gzRvnUsMYQXG" outputId="deceef37-45aa-4403-a35e-e6421b4e2cea"
X = np.zeros((1,200,200, 3), dtype=np.float32)
X[0]=image
model.predict(X)
# + id="mIjc0kddYSHM"
model.save('Pneumonia_weights.h5')
# + colab={"base_uri": "https://localhost:8080/", "height": 339} id="IedtgL4CfP87" outputId="9df3f05a-a0bb-4529-eb41-ae347652fd19"
from keras.models import load_model
model_n = load_model('/content/Pneumonia_weights.h5')
img=cv2.imread('/content/drive/MyDrive/test (2)/NORMAL (1)/IM-0009-0001 (1).jpeg')
print(img.shape)
resized = resize_fn('/content/drive/MyDrive/test (2)/NORMAL (1)/IM-0009-0001 (1).jpeg')
print(resized.shape)
plt.imshow(resized)
image = resized.reshape((200,200,3))
X = np.zeros((1,200,200, 3), dtype=np.float32)
X[0]=image
model_n.predict(X)
# + id="EtKm1lL2fkwp" colab={"base_uri": "https://localhost:8080/", "height": 339} outputId="62d1c424-c7ed-4ec9-8770-04b826dc46ce"
from keras.models import load_model
model_n = load_model('/content/Pneumonia_classification.h5')
img=cv2.imread('/content/drive/MyDrive/test (2)/NORMAL (1)/IM-0009-0001 (1).jpeg')
print(img.shape)
resized = resize_fn('/content/drive/MyDrive/test (2)/NORMAL (1)/IM-0009-0001 (1).jpeg')
print(resized.shape)
plt.imshow(resized)
image = resized.reshape((200,200,3))
X = np.zeros((1,200,200, 3), dtype=np.float32)
X[0]=image
model_n.predict(X)
# + id="COztu_Js8AKV"
|
Pneumonia_Classification.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from xyplot import Curve
# read y-axis values (20xlog(CMRR)) until a 0 sentinel is entered
x = float(input(">"))
a = []
while x != 0:
    a.append(x)
    x = float(input(">"))
# -
a
# +
# read x-axis values (frequency in kHz) until a 0 sentinel is entered
value = float(input("> "))
x = []
while value != 0:
    x.append(value)
    value = float(input("> "))
# -
x
curve = Curve(x, a, 1)
curve.ax.plot(x, a)
curve.ax.grid()
curve.fig
curve.set(
xlabel = "Frequency (kHz)",
ylabel = "20xlog(CMRR)",
title = "CMRR vs Frequency with voltage follower"
)
curve.save("CMRRwithvoltagefollower.png")
|
physics-lab/CMRRwithVoltageFollower.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Sub population Analysis
# \[_In case you’re unable to see the atoti visualizations in GitHub, try viewing the notebook in [nbviewer](https://nbviewer.org/github/atoti/notebooks/blob/master/notebooks/sub-population-analysis/main.ipynb)._]
# ### Objective:
#
# In this notebook, we shall be doing a subpopulation analysis on the predictions from an ML model.
#
# We shall be using [atoti](https://www.atoti.io/) to create the data cubes and doing the analysis.
#
# For the uninitiated, atoti is a python library that enables data scientists to explore the predictions of their models and seamlessly share the results with the business stakeholders.
#
# <div style="text-align:center"><a href="https://www.atoti.io/?utm_source=gallery&utm_content=subpopulation" target="_blank" rel="noopener noreferrer"><img src="https://data.atoti.io/notebooks/banners/discover.png" alt="atoti table" /></a></div>
#
# ### Dataset:
#
# **A data set from the insurance domain is used here.**
# An insurance company that has provided Health Insurance to its customers now needs to predict whether the policyholders (customers) from the past year will also be interested in Vehicle Insurance provided by the company. The dataset has been obtained from the Jantahack organized by [analytics vidya.](https://datahack.analyticsvidhya.com/contest/janatahack-cross-sell-prediction/#ProblemStatement)
#
#
# ### Steps:
#
# **1. Preprocessing**
# * Import the datasets and train-val split
#
# **2. Loading Model Predictions**
# * Evaluate model predictions
#
# **3. Goofing up the prediction**
#
# **4. Loading data in atoti**
#
# **5. Further analysis**
# # Step 1: Preprocessing
#
# ## Loading the datasets
# +
# Importing the necessary packages
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, cross_validate, train_test_split
import os
from sklearn.model_selection import train_test_split
import sklearn.metrics as metrics
from sklearn.metrics import f1_score, roc_auc_score, accuracy_score
import matplotlib.pyplot as plt
import random
pd.set_option("display.max_columns", None)
seed_value = 36
os.environ["PYTHONHASHSEED"] = str(seed_value)
random.seed(seed_value)
np.random.seed(seed_value)
import multiprocessing
pd.options.mode.chained_assignment = None
# loading the manually created functions from the _utils.py file
from _utils import process_vehicule_age, change_type
# Importing atoti
import atoti as tt
# -
data = pd.read_csv("https://data.atoti.io/notebooks/insurance/train.csv")
# converting vehicle age to integer and changing datatype for other columns
data = process_vehicule_age(data)
data = change_type(data)
data.head()
# ### We need to reshuffle columns and split the label from the training data
# +
# reshuffling columns
cols = data.columns.tolist()
cols.insert(0, cols.pop(cols.index("Response")))
data = data.reindex(columns=cols)
# splitting the labels from the data
y = data.iloc[:, 0]
X = data.iloc[:, 1:]
# -
# ## Splitting the train and Validation datasets
# +
# split X in X_train and x_test
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1, random_state=42)
# +
# printing the shapes of the new dataframes
print("Train data size: {}\n".format(X_train.shape))
print("Train label size: {}\n".format(y_train.shape))
print("Test data size: {}\n".format(X_val.shape))
print("Test label size: {}\n".format(y_val.shape))
# -
# # Step 2: Load model Predictions
#
# In order to stick to the scope of this notebook, we shall be loading the predictions made on the validation dataset above.
# These are the predictions from a neural network model, which has been uploaded in a different notebook.
#
# +
# Predictions from neural network
model_results_nn = pd.read_csv(
"https://data.atoti.io/notebooks/insurance/prediction_nn_rv.csv"
)
# renaming the result column from the model
model_results_nn.columns = ["predicted_response"]
# +
# Here we are only concerned with the val dataset
master_data = X_val.join(y_val)
master_data = master_data.reset_index(drop=True)
# +
# we shall be using the results from the Deep Learning along with the actual labels
master_data = master_data.join(model_results_nn)
master_data["predicted_response"] = master_data["predicted_response"].astype(int)
# -
master_data.head()
# +
# In order to validate the correctness of these metrics, let us calculate them using sklearn.metrics
print(
"accuracy score: ",
metrics.accuracy_score(master_data["Response"], master_data["predicted_response"]),
)
print(
"precision score: ",
metrics.precision_score(master_data["Response"], master_data["predicted_response"]),
)
print(
"recall score: ",
metrics.recall_score(master_data["Response"], master_data["predicted_response"]),
)
print(
"F1 score: ",
metrics.f1_score(master_data["Response"], master_data["predicted_response"]),
)
print(
"AUC: ",
metrics.roc_auc_score(master_data["Response"], master_data["predicted_response"]),
)
# -
# ## Model Performance:
# Here we can see that the model predictions have an **AUC of 0.79 and F1 score of 0.43**
# # Step 3: Goofing up the predictions
# Now let us try and goof up a bit with the model predictions.
# For the highly valuable customers (the ones with the maximum annual premium), let us see what happens if we simply reverse the model predictions, assuming that the model did not learn anything about those customers.
#
# This could occur in a real-life scenario because of-
#
# * Data drift - The characteristic of the key customers has changed over time
# * Unrepresentative dataset - The model is trained on a non-representative dataset compared to what is found in production
master_data.Annual_Premium.quantile(0.95)
# Only 5% of customers pay an annual premium of more than 55098; let us see how the model performs for this and the other buckets
# +
# filtering out top customers
master_data2 = master_data.loc[master_data["Annual_Premium"] >= 55098]
# -
master_data3 = master_data.loc[master_data["Annual_Premium"] < 55098]
master_data2["predicted_response"] = (
master_data2["predicted_response"].map({0: 1, 1: 0}).astype(int)
)
# DataFrame.append is deprecated in recent pandas; concat does the same job
data_set = pd.concat([master_data3, master_data2], ignore_index=True)
data_set.shape
# Now let us see the overall model performance and the performance on the various subpopulations
# # Step 4: Loading predictions in atoti
session = tt.create_session()
customer = session.read_pandas(
data_set,
table_name="customer",
types={"Policy_Sales_Channel": tt.type.STRING, "Region_Code": tt.type.STRING},
hierarchized_columns=[
"id",
"Response",
"predicted_response",
"Driving_License",
"Gender",
],
keys=["id"],
)
# +
# creating a cube from this store
cube = session.create_cube(customer, "model_cube")
h = cube.hierarchies
m = cube.measures
l = cube.levels
# -
# We create a simple function that helps change the dimension of our hierarchies as we group them into logical categories.
def set_dim(hier_name, dim_name):
h[hier_name].dimension = dim_name
customer.head()
customer_hierarchy = [
"id",
"Response",
"predicted_response",
"Driving_License",
"Gender",
]
[set_dim(hier_name, "Customer") for hier_name in customer_hierarchy]
# ### Defining the various measures in atoti
l
# +
m["true positive"] = tt.agg.sum(
tt.where(((l["predicted_response"] == 1) & (l["Response"] == 1)), 1, 0),
scope=tt.scope.origin(l["id"]),
)
m["true negative"] = tt.agg.sum(
tt.where(((l["predicted_response"] == 0) & (l["Response"] == 0)), 1, 0),
scope=tt.scope.origin(l["id"]),
)
m["false positive"] = tt.agg.sum(
tt.where(((l["predicted_response"] == 1) & (l["Response"] == 0)), 1, 0),
scope=tt.scope.origin(l["id"]),
)
m["false negative"] = tt.agg.sum(
tt.where(((l["predicted_response"] == 0) & (l["Response"] == 1)), 1, 0),
scope=tt.scope.origin(l["id"]),
)
# -
# #### Here we calculate the following metrics
#
# Precision: What proportion of positive identifications was actually correct?
#
# Recall: What proportion of actual positives was identified correctly? It is also known as the True Positive Rate (TPR).
#
# F1 score conveys the balance between the precision and the recall. It is defined as follows:
#
# * __F1 Score = 2*((precision*recall)/(precision+recall))__
#
# Specificity: When the actual value is negative, how often is the prediction correct? It is defined by
#
# * __specificity = TN / (TN + FP)__
#
# False Positive Rate: When the actual value is negative, how often is the prediction incorrect?
#
# * __false_positive_rate = FP / float(TN + FP)__
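# The same formulas can be sanity-checked in plain Python (an illustrative sketch with toy labels, not the notebook's dataset) before defining them as atoti measures:

```python
# Illustrative sketch: recompute the metrics above from raw confusion counts.
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def classification_metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # true positive rate
    return {
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "specificity": tn / (tn + fp),
        "false_positive_rate": fp / (tn + fp),
    }

metrics_example = classification_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0])
```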
# +
m["recall"] = m["true positive"] / (m["true positive"] + m["false negative"])
m["precision"] = m["true positive"] / (m["true positive"] + m["false positive"])
m["accuracy score"] = (m["true positive"] + m["true negative"]) / m[
"contributors.COUNT"
]
m["f1 score"] = 2 * ((m["recall"] * m["precision"]) / (m["recall"] + m["precision"]))
m["false positive rate"] = m["false positive"] / (
m["true negative"] + m["false positive"]
)
m["AUC"] = 0.5 - (m["false positive rate"] / 2) + (m["recall"] / 2)
# -
cube.query(m["accuracy score"], m["precision"], m["recall"], m["f1 score"], m["AUC"])
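# A quick sanity check on the `AUC` measure above (a sketch with toy data, not the notebook's dataset): for hard 0/1 predictions the ROC curve has a single interior point, and the trapezoid area under it reduces to exactly `0.5 - FPR/2 + TPR/2`.

```python
import numpy as np

# toy labels and predictions, purely illustrative
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])

tp = int(np.sum((y_true == 1) & (y_pred == 1)))
fn = int(np.sum((y_true == 1) & (y_pred == 0)))
fp = int(np.sum((y_true == 0) & (y_pred == 1)))
tn = int(np.sum((y_true == 0) & (y_pred == 0)))
tpr = tp / (tp + fn)  # recall
fpr = fp / (fp + tn)  # false positive rate

# trapezoid area under the two-segment ROC curve (0,0) -> (fpr,tpr) -> (1,1)
trapezoid_auc = fpr * tpr / 2 + (1 - fpr) * (tpr + 1) / 2
formula_auc = 0.5 - fpr / 2 + tpr / 2
```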
# +
# annual premium buckets
premium = session.read_pandas(
pd.DataFrame(
data=[("0 - 10030", i) for i in range(10030)]
        + [("10030 - 31560", i) for i in range(10030, 31560)]
+ [("31560 - 55098", i) for i in range(31560, 55098)]
+ [("55098+", i) for i in range(55098, 448156)],
columns=["Annual_Premium_group", "Annual_Premium"],
),
keys=["Annual_Premium"],
table_name="premium_groups",
)
customer.join(premium)
# + atoti={"height": 195, "widget": {"mapping": {"columns": ["[premium_groups].[Annual_Premium_group].[Annual_Premium_group]"], "measures": ["[Measures].[AUC]", "[Measures].[f1 score]"], "rows": ["ALL_MEASURES"]}, "query": {"mdx": "SELECT NON EMPTY {[Measures].[AUC], [Measures].[f1 score]} ON ROWS, NON EMPTY Hierarchize(Descendants({[premium_groups].[Annual_Premium_group].[AllMember]}, 1, SELF_AND_BEFORE)) ON COLUMNS FROM [model_cube]", "updateMode": "once"}, "serverKey": "default", "widgetKey": "pivot-table"}} tags=[]
session.visualize("Slicing Data on Premium")
# -
# So here is the difference in the model performance:
#
#
# | Metric | Before | After manipulation |
# | ------------- |:-------------:| -----:|
# | AUC overall | 0.79 | 0.76|
# | AUC critical subpopulation | 0.79 | **0.23**|
# | F1 overall | 0.43 | 0.41 |
# | F1 critical subpopulation | 0.43 | **0.03**|
#
# From the pivot table above we can clearly see a drop in the F1 and AUC scores of the customers paying an annual premium of more than 55098.
# Hence we were able to figure out a drastic drop in model performance in one of the critical customer section, thanks to subpopulation analysis.
#
# **To mitigate this, we need to retrain the model to accommodate these 'high-value' customers.**
#
# # Step 5: Further Analysis
#
# #### To see the model performance on other features
#
# For the **categorical features**, we can simply look at the model performance in each of the categories.
# For the **numerical features**, we can create buckets and see the breakup of the model performance across them.
# +
# age group buckets
age_groups = session.read_pandas(
pd.DataFrame(
data=[("0-30Y", i) for i in range(30)]
+ [("30Y - 40Y", i) for i in range(30, 40)]
+ [("40Y - 50Y", i) for i in range(40, 50)]
+ [("50Y+", i) for i in range(50, 200)],
columns=["age group", "age"],
),
keys=["age"],
table_name="age_groups",
)
customer.join(age_groups)
# + atoti={"height": 194, "widget": {"mapping": {"columns": ["[age_groups].[age group].[age group]"], "measures": ["[Measures].[AUC]", "[Measures].[f1 score]"], "rows": ["ALL_MEASURES"]}, "query": {"mdx": "SELECT NON EMPTY {[Measures].[AUC], [Measures].[f1 score]} ON ROWS, NON EMPTY Hierarchize(Descendants({[age_groups].[age group].[AllMember]}, 1, SELF_AND_BEFORE)) ON COLUMNS FROM [model_cube]", "updateMode": "once"}, "serverKey": "default", "widgetKey": "pivot-table"}} tags=[]
session.visualize("Slicing Data on Age")
# this slicing in particular, does not seem to make a difference.
# -
# ### Categorical features
# + atoti={"height": 133, "widget": {"mapping": {"columns": ["[Customer].[Gender].[Gender]", "ALL_MEASURES"], "measures": ["[Measures].[f1 score]", "[Measures].[AUC]"], "rows": []}, "query": {"mdx": "SELECT NON EMPTY Crossjoin(Hierarchize(Descendants({[Customer].[Gender].[AllMember]}, 1, SELF_AND_BEFORE)), {[Measures].[f1 score], [Measures].[AUC]}) ON COLUMNS FROM [model_cube] CELL PROPERTIES VALUE, FORMATTED_VALUE, BACK_COLOR, FORE_COLOR, FONT_FLAGS", "updateMode": "once"}, "serverKey": "default", "widgetKey": "pivot-table"}} tags=[]
session.visualize("Slicing Data based on Gender")
# + atoti={"height": 150, "widget": {"mapping": {"columns": ["ALL_MEASURES"], "measures": ["[Measures].[AUC]", "[Measures].[f1 score]"], "rows": ["[Customer].[Driving_License].[Driving_License]"]}, "query": {"mdx": "SELECT NON EMPTY {[Measures].[AUC], [Measures].[f1 score]} ON COLUMNS, NON EMPTY Hierarchize(Descendants({[Customer].[Driving_License].[AllMember]}, 1, SELF_AND_BEFORE)) ON ROWS FROM [model_cube] CELL PROPERTIES VALUE, FORMATTED_VALUE, BACK_COLOR, FORE_COLOR, FONT_FLAGS", "updateMode": "once"}, "serverKey": "default", "widgetKey": "pivot-table"}} tags=[]
session.visualize("Slicing Data based on Driving License")
# -
# <div style="text-align:center"><a href="https://www.atoti.io/?utm_source=gallery&utm_content=subpopulation" target="_blank" rel="noopener noreferrer"><img src="https://data.atoti.io/notebooks/banners/discover-try.png" alt="atoti table" /></a></div>
# # *********** The End **************
|
notebooks/sub-population-analysis/main.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Time Series Forecasting with ARIMA
# _After brushing up my basic knowledge of ARIMA with a lot of articles, I found this one quite clearly explained: https://www.relataly.com/forecasting-beer-sales-with-arima-in-python/2884/_
#
# Also, analysis of tractor sales, http://ucanalytics.com/blogs/wp-content/uploads/2017/08/ARIMA-TimeSeries-Analysis-of-Tractor-Sales.html
#
# And a Kaggle tutorial:
#
# https://timeseriesreasoning.com/contents/regression-with-arima-errors-model/
#
# https://towardsdatascience.com/end-to-end-time-series-analysis-and-forecasting-a-trio-of-sarimax-lstm-and-prophet-part-1-306367e57db8
#
# https://towardsdatascience.com/how-to-forecast-sales-with-python-using-sarima-model-ba600992fa7d
#
# https://towardsdatascience.com/end-to-end-time-series-analysis-and-forecasting-a-trio-of-sarimax-lstm-and-prophet-part-1-306367e57db8
# +
# Libraries
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import numpy as np
import pandas as pd
from datetime import date, datetime, timedelta
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="darkgrid")
plt.figure(figsize=(16, 6))
# %matplotlib inline
import pickle
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
# -
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, acf, pacf
from statsmodels.tsa.arima.model import ARIMA  # tsa.arima_model is deprecated in recent statsmodels
from statsmodels.tsa.seasonal import seasonal_decompose
from sklearn.metrics import mean_squared_error
# %matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15, 6
# ## Introduction
#
# ARIMA (Auto-Regressive Integrated Moving Average) is a powerful statistical modeling technique for time series analysis and forecasting. Compared to machine learning, ARIMA is a classical modeling technique that is particularly powerful when the time series being analyzed follows a clear pattern. For example, when the time series exhibits seasonality. Due to its effectiveness, ARIMA is still frequently used in various economic and scientific applications. Examples are forecasting weather data, predicting order quantities, and sales forecasting.
#
# Table of Contents
# - Introduction to ARIMA Time Series Modelling
# - Concept of Stationarity
# - Testing for Stationarity
# - What is an (S)ARIMA Model?
# - Time Series Forecasting with ARIMA in Python
# - Prerequisites
# - Step #1 Load the Data
# - Step #2 Visualize the Time Series and check it for Stationarity
# - Step #3 Exemplary Differencing and Autocorrelation
# - Step #4 Finding an Optimal Model with Auto-ARIMA
# - Step #5 Simulate the Time Series with in-sample Forecasting
# - Step #6 Generate and Visualize a Sales Forecast
# - Step #7 Measure Prediction Errors
# - Summary
# ### Data
# +
df = pd.read_excel('sales.xlsx')
df.columns = ['week', 'promo', 'units']
df['promo'] = df['promo'].fillna(0)
df['week'] = pd.to_datetime(df['week'])
# black friday
black_friday = ["2018-11-18", "2019-11-24", "2020-11-22", "2021-11-21"]
cyber_monday = ["2018-11-25", "2019-12-1", "2020-11-29", "2021-11-28"]
df['black_friday'] = np.where(df['week'].isin(black_friday), 1, 0)
df['cyber_monday'] = np.where(df['week'].isin(cyber_monday), 1, 0)
df.head()
# -
# ### Train vs Test
# +
indexed_df = df.set_index('week')
#indexed_df.tail()
size = int(len(indexed_df) - 13)
train, test = indexed_df[0:size], indexed_df[size:len(indexed_df)]
#history = [x for x in train]
#predictions = list()
train.tail()
# + active=""
# # test whether or not the 'scale back to original units' is right
# df['units_log'] = np.log(df['units'])
# df['units_log_diff'] = df['units_log'].diff()
#
# # convert it back
# df['convert1'] = df['units_log_diff'].cumsum()
# df['convert2'] = df['units_log'].iloc[0]
# df['convert3'] = df['convert2'].add(df['convert1'], fill_value=0)
# df['units_converted'] = np.exp(df['convert3'])
#
# df.head()
# + tags=[]
#sales_ts = indexed_df['units']
indexed_df = df.set_index('week')
indexed_df.tail()
sales_ts = train['units']
sales_ts.head(5)
# -
plt.plot(sales_ts)
plt.hist(sales_ts)
# ### Log-transformation & differencing
#
# Difference the log-transformed data to make it stationary in both mean and variance
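# The round trip matters later when converting forecasts back to the original units. A minimal sketch (toy values, not the sales series) of differencing a log series and inverting it:

```python
import numpy as np
import pandas as pd

s = pd.Series([100.0, 120.0, 90.0, 150.0])  # toy sales values
log_s = np.log(s)
log_diff = log_s.diff().dropna()

# invert: cumulative-sum the differences, add back the first log value, then exp
recovered = np.exp(log_diff.cumsum() + log_s.iloc[0])
# `recovered` matches the original series from the second point onwards
```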
# +
# resample should not a problem in our case
sales_ts_log = np.log(sales_ts)
# sales_ts_log = np.log10(sales_ts) # np.power(10, x) to convert it back
# ts_week_log_diff = ts_week_log - ts_week_log.shift()
sales_ts_log_diff = sales_ts_log.diff(periods=1)
# drop nan before do ADF test
sales_ts_log_diff.dropna(inplace=True)
# or just difference no log-transformation
sales_ts_diff = sales_ts.diff(periods=1).dropna()
# plt.plot(sales_ts_log_diff)
# +
fig, axes = plt.subplots(1, 2, sharey=False, sharex=False)
fig.set_figwidth(12)
fig.set_figheight(4)
#axes[0].plot(random_df.index, random_df['Sales'])
axes[0].plot(sales_ts)
axes[0].set_xlabel("Years")
axes[0].set_ylabel("Sales")
axes[0].set_title("Original Sales Data")
#axes[1].plot(sales_ts_log_diff)
# how about just do log_transformation, no differencing
# axes[1].plot(sales_ts_log)
axes[1].plot(sales_ts_diff)
axes[1].set_xlabel("Years")
axes[1].set_ylabel("Differenced Sales")
axes[1].set_title("1$^{st}$ Order Differenced Sales") # Log(Sales)
plt.tight_layout()
plt.show()
# -
# ### ADF test
#
# ADF test for stationarity
#
# ADF test, unit root, https://towardsdatascience.com/introduction-to-time-series-forecasting-part-2-arima-models-9f47bf0f476b
#
# <font color=red>Indeed, we need 1st-differencing to pass the ADF test, but I have not figured out why the fitted values are so wrong with ARIMA(2, 1, 2)</font>
# + active=""
# def test_stationarity(timeseries):
# #Determing rolling statistics
# rolmean = timeseries.rolling(window=12,center=False).mean() # window=52
# rolstd = timeseries.rolling(window=12,center=False).std()
# #Plot rolling statistics:
# orig = plt.plot(timeseries, color='blue',label='Original')
# mean = plt.plot(rolmean, color='red', label='Rolling Mean')
# std = plt.plot(rolstd, color='black', label = 'Rolling Std')
# plt.legend(loc='best')
# plt.title('Rolling Mean & Standard Deviation')
# plt.show(block=False)
#
# #Perform Dickey-Fuller test:
# print('Results of Dickey-Fuller Test:')
# dftest = adfuller(timeseries, autolag='AIC')
# dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
# for key,value in dftest[4].items():
# dfoutput['Critical Value (%s)'%key] = value
# print(dfoutput)
# +
# plot is not very helpful
def test_stationarity(timeseries):
#Perform Dickey-Fuller test:
print('Results of Dickey-Fuller Test:')
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print(dfoutput)
# -
test_stationarity(sales_ts)
# +
pd.set_option('display.float_format', lambda x: '%.5f' % x)
#test_stationarity(sales_ts_log_diff)
#test_stationarity(sales_ts_log)
test_stationarity(sales_ts_diff)
# p-value is ~0, so we reject H0: the differenced series is already stationary, and log-transformation is not needed
# -
# ### ACF & PACF
#
# Create autocorrelation factor (ACF) and partial autocorrelation (PACF) plots to identify patterns in the above data which is stationary on both mean and variance. The idea is to identify presence of AR and MA components in the residuals.
#
# PACF only cares about the "direct" impact of y_{t-2} on y_t, and is thus used to determine the AR part. -- the salmon video
# +
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
fig, axes = plt.subplots(1, 2, figsize=(15,4))
#fig = plot_acf(sales_ts_log_diff, lags=30, ax=axes[0]) # when not put fit=, shows 3 plots
#fit = plot_pacf(sales_ts_log_diff, lags=30, ax=axes[1])
fig = plot_acf(sales_ts_diff, lags=30, ax=axes[0]) # when not put fit=, shows 3 plots
fit = plot_pacf(sales_ts_diff, lags=30, ax=axes[1])
plt.tight_layout()
# -
# _(from the tractor sales example) Since, there are enough spikes in the plots outside the insignificant zone (dotted horizontal lines) we can conclude that the residuals are not random. This implies that there is juice or information available in residuals to be extracted by AR and MA models. Also, there is a seasonal component available in the residuals at the lag 12 (represented by spikes at lag 12). This makes sense since we are analyzing monthly data that tends to have seasonality of 12 months because of patterns in tractor sales._
# + active=""
# # another kind of chart
#
# lag_acf = acf(ts_week_log_diff, nlags=10)
# lag_pacf = pacf(ts_week_log_diff, nlags=10, method='ols')
#
# plt.subplot(121)
# plt.plot(lag_acf)
# plt.axhline(y=0,linestyle='--',color='gray')
# plt.axhline(y=-1.96/np.sqrt(len(ts_week_log_diff)), linestyle='--',color='gray')
# plt.axhline(y=1.96/np.sqrt(len(ts_week_log_diff)), linestyle='--',color='gray')
# plt.title('Autocorrelation Function')
# -
# Inference drawn from the ACF and PACF values
#
# Using the plots we can determine the values for p and q respectively:
#
# - p: the lag value where the PACF cuts off (drops to 0) for the first time. So here p = 2.
# - q: the lag value where the ACF crosses the upper confidence interval for the first time. If you look closely, q = 1.
# ### Identification of best fit ARIMA model
#
# In order to fit the time series data with a seasonal ARIMA model, we need to first find the **values of ARIMA(p,d,q)(P,D,Q)s that optimize a metric of interest such as AIC or BIC**. There are many guidelines and best practices to achieve this goal, yet the correct parametrization of ARIMA models can be a painstaking manual process that requires domain expertise and time. The **auto.arima function in the forecast package in R** helps us identify the best fit ARIMA model on the fly, but in Python we will generate combinations of p, d and q to select the optimal parameter values for our ARIMA(p,d,q)(P,D,Q)s time series model. -- _In Python there is also an auto-arima library, but it does not seem to work well here_
#
# This technique is known as **"grid search"** where we iteratively explore different combinations of parameters. For each such combination of parameters, we try to fit a new seasonal ARIMA model with the SARIMAX() function from the statsmodels module and assess its AIC or BIC score. The model with the best score wins, and the parameters for that model are the optimal parameters.
#
# The best fit model is selected based on **Akaike Information Criterion (AIC)**, and **Bayesian Information Criterion (BIC)** values. **The idea is to choose a model with minimum AIC and BIC values**.
#
# **Akaike Information Criterion (AIC) - AIC is an effort to balance the model between goodness-of-fit and number of parameters used in the model.** This is similar to the balancing act between income and cost of a company so that the debts of the company are minimized (Debt = Cost - Income). As a modeler, we care about the maximum goodness of fit (income) with the minimum number of parameters (cost).
#
# $$AIC = 2K - 2\ln(L)$$
#
# For the given model, L in the above formula is the maximized value of the likelihood function representing goodness-of-fit, and K the number of estimated parameters. Like our debts, we want to keep the AIC value at the minimum to choose the best possible model. Bayesian Information Criterion (BIC) is another variant of AIC and is used for the same purpose of best fit model selection. For the best possible model selection, we want to look at AIC, BIC, and AICc (AIC with small-sample correction) and check that all these values are minimal for a given model. As the number of parameters K increases, AIC increases; as the goodness of fit L increases, AIC decreases.
#exog_ts = indexed_df[['promo', 'black_friday', 'cyber_monday']]
exog_ts = train[['promo', 'black_friday', 'cyber_monday']]
exog_ts.tail()
# +
import itertools
# Define the p, d and q parameters to take any value between 0 and 2
p = d = q = range(0, 2)
# Generate all different combinations of p, d and q triplets
pdq = list(itertools.product(p, d, q))
# Generate all different combinations of seasonal p, q and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
# -
# <font color=blue>seasonality with weekly data </font>
#
# Weekly data is difficult to work with because the seasonal period (the number of weeks in a year) is both large and non-integer. The average number of weeks in a year is 52.18. Most of the methods we have considered require the seasonal period to be an integer. Even if we approximate it by 52, most of the methods will not handle such a large seasonal period efficiently.
#
# The simplest approach is to use an STL decomposition along with a non-seasonal method applied to the seasonally adjusted data (as discussed in Chapter 6). Here is an example using weekly data on US finished motor gasoline products supplied (in millions of barrels per day) from February 1991 to May 2005.
#
# https://otexts.com/fpp2/weekly.html
# +
import sys
import warnings
warnings.filterwarnings("ignore") # specify to ignore warning messages
best_aic = np.inf
best_pdq = None
best_seasonal_pdq = None
temp_model = None
for param in pdq:
for param_seasonal in seasonal_pdq:
try:
temp_model = sm.tsa.statespace.SARIMAX(sales_ts,
exog = exog_ts,
order = param,
trend='c',
#seasonal_order = param_seasonal,
enforce_stationarity=True,
enforce_invertibility=True)
results = temp_model.fit()
if results.aic < best_aic:
best_aic = results.aic
best_pdq = param
best_seasonal_pdq = param_seasonal
except:
continue
print("Best SARIMAX{}x{}12 model - AIC:{}".format(best_pdq, best_seasonal_pdq, best_aic))
# -
# ### ARIMA Model
#
# The standalone ARIMA class is deprecated, so SARIMAX is used for all the models
# + active=""
# # manually choose p, d, q
# model = ARIMA(sales_ts, order=(2, 1, 1)) # sales_ts_log
# results = model.fit() # disp=-1, or disp=False
# print(results.summary())
# + active=""
# # manually choose p, d, q
# model = sm.tsa.statespace.SARIMAX(sales_ts, order=(2, 1, 1))
# results = model.fit() # disp=-1, or disp=False
# print(results.summary())
#
# # 1) sigma squared represents the variance of the residual values.
# # This value is used to test the normality of residuals against the alternative of non-normality.
# # 2) and no "const" as in ARIMA model
# # 3) items now become significant even though we use the same order
# + active=""
# model = sm.tsa.statespace.SARIMAX(sales_ts_log, order=(0, 1, 1), seasonal_order=(1,0,0,12))
# # seasonality seasonal_order=(1, 1, 1, 4) for quarterly, (1, 1, 1, 12) for monthly
# results = model.fit()
# print(results.summary())
# + active=""
# # or use "grid research" result
#
# model = sm.tsa.statespace.SARIMAX(sales_ts, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12))
# # seasonality seasonal_order=(1, 1, 1, 4) for quarterly, (1, 1, 1, 12) for monthly
# results = model.fit()
# print(results.summary())
# -
# ### ARIMAX - promo + holiday
#
# -- put exog_ts in the grid research
#exog_ts = indexed_df[['promo', 'black_friday', 'cyber_monday']]
exog_ts = train[['promo', 'black_friday', 'cyber_monday']]
exog_ts.tail()
# + active=""
# # manually choose p, d, q
# model = sm.tsa.statespace.SARIMAX(sales_ts, exog=exog_ts, order=(2, 1, 1))
# results = model.fit() # disp=-1, or disp=False
# print(results.summary())
# +
# exog_ts = sm.add_constant(exog_ts) # constant seems not important, but makes promo coefficient much smaller
model = sm.tsa.statespace.SARIMAX(sales_ts, exog=exog_ts, order=(1, 1, 1), trend='c') # seasonal_order=(0, 1, 1, 12),
results = model.fit()
print(results.summary())
# -
# <font color=blue>How to read the regression result?</font>
#
# - The number to look at first in the training summary is the **Ljung-Box test’s statistic and its p-value**. The Ljung-Box helps us determine **if the residual errors of regression are auto-correlated in a statistically significant way**. _In this case, the p value is 0.4 which is significantly higher than 0.05 (95% confidence threshold). So we accept the Null hypothesis of the Ljung-Box test that the residual errors are not auto-correlated._
#
# - Two other things to note in the result: **The Jarque-Bera test of normality** has yielded a vanishingly small p-value implying a rejection of the Null hypothesis at a > 99.99% confidence level. The Null hypothesis being that the regression errors are normally distributed. This is probably because the errors are highly kurtotic (note that the Kurtosis=7.49 as against the 3.0 that it should have been for a normal distribution). Note also that the errors are not at all skewed (skewness=0.04 as against 0.0 for normally distributed errors).
#
# - The regression errors are also heteroskedastic i.e. they have non-constant variance. This is evidenced by the vanishingly small p-value of the H-test, but there are some things you can do to fix that.
# + active=""
# # with only promo
# exog_ts_2 = train['promo']
#
# model = sm.tsa.statespace.SARIMAX(sales_ts, exog=exog_ts_2, order=(0, 1, 1), seasonal_order=(1, 1, 1, 12))
# results = model.fit()
# print(results.summary())
# -
# ### Residual diagnostics
#
# Plot ACF and PACF for residuals of ARIMA model to ensure no more information is left for extraction
results.plot_diagnostics(lags=30, figsize=(16,12))
#plt.show()
# plots displayed twice
# We need to ensure that the **residuals of our model are uncorrelated and normally distributed with zero mean** -- basically **white noise**. If they are not, it signifies that the model can be further improved, and we repeat the process on the residuals.
#
# In this case, our model diagnostics suggests that the model residuals are normally distributed based on the following:
#
# - The KDE plot of the residuals on the top right closely follows the normal distribution.
# - The qq-plot on the bottom left shows that the ordered distribution of residuals (blue dots) follows the linear trend of the samples taken from a standard normal distribution with N(0, 1). Again, this is a strong indication that the residuals are normally distributed.
# - The residuals over time (top left plot) don't display any obvious seasonality and appear to be white noise. This is confirmed by the autocorrelation (i.e. correlogram) plot on the bottom right, which shows that the time series residuals have low correlation with lagged versions of itself.
#
# Those observations, coupled with the fact that there are no spikes outside the insignificant zone in either the ACF or PACF plot, lead us to conclude that the residuals are random, with no information left in them, and that our model produces a satisfactory fit that can help us understand our time series data and forecast future values. It seems that our ARIMA model is working fine.
# ### Predictions - fittedvalues
#
# Predict sales on in-sample dates using the best-fit ARIMA model, and find out how close the model's prediction on the in-sample data is to the actual truth.
#
# In-sample, out-sample prediction, https://stackoverflow.com/questions/62819758/in-sample-prediction-interval-for-arima-in-python
#
# ```
# from statsmodels.tsa.api import SARIMAX
# model = SARIMAX(timeseries, order=(1, 0, 6))
# results = model.fit()
#
# pred = results.get_prediction(start=..., end=...)
# print(pred.predicted_mean) # prediction
# print(pred.conf_int(alpha=0.05)) # confidence interval
#
# fcast = results.get_forecast(steps=...)
# print(fcast.predicted_mean) # forecast
# print(fcast.conf_int(alpha=0.05)) # confidence interval
# ```
# + active=""
# predictions_log = pd.Series(results.fittedvalues)
# # predictions = np.power(10, predictions_log)
# predictions.head()
#
# # the 1st week is so wrong, why?
# + active=""
# #plt.plot(sales_ts, label='Observed') # the blue one
# #plt.plot(predictions)
# # plt.title('RMSE: %.4f'% np.sqrt(sum((predictions - sales_ts)**2)/len(ts_week)))
# -
predictions = results.get_prediction(start='2019-01-06', end='2021-08-29', exog=exog_ts, alpha=0.05)
predictions = predictions.summary_frame()
predictions
axis = sales_ts.plot(label='Observed', figsize=(15, 6))
# predictions
predictions['mean'].plot(ax=axis, label='Forecast') # , alpha=0.7
#axis.fill_between(pred_dynamic_ci.index, pred_dynamic_ci.iloc[:, 0], pred_dynamic_ci.iloc[:, 1], color='k', alpha=.25)
#axis.fill_betweenx(axis.get_ylim(), pd.to_datetime('2019-01-06'), sales_ts.index[-1], alpha=.1, zorder=-1)
axis.set_xlabel('Week')
axis.set_ylabel('Units')
axis.set_title('Printer Sales vs Forecasts')
plt.legend(loc='best')
plt.show()
plt.close()
# + active=""
# # scale back to original units - for 1st differencing log-transformed
#
# predictions_diff_cumsum = predictions_diff.cumsum()
# predictions_log = pd.Series(sales_ts_log.iloc[0], index=sales_ts_log.index)
# predictions_log = predictions_log.add(predictions_diff_cumsum, fill_value=0)
# # predictions = np.exp(predictions_log)
# predictions = np.power(10, predictions_log)
# predictions.head()
#
# # insanely wrong
# -
# ### Forecast
#
# Forecast sales for the next half year using the best-fit ARIMA model.
# - the `forecast` method gives only point forecasts
# - the `get_forecast` method is more general, and also allows constructing confidence intervals
# + active=""
# # try to create a new DataFrame when I did not save some weeks for test
# n_steps = 26
# weeks = pd.date_range(start='2021-12-5', periods=n_steps, freq='W')
# exog_future = pd.DataFrame({'promo': [0]*n_steps,
# 'black_friday': [0]*n_steps,
# 'cyber_monday': [0]*n_steps},
# index= weeks)
# exog_future.head()
# -
exog_future = test[['promo', 'black_friday', 'cyber_monday']]
# Get forecast for the length of the test period (weekly steps, 26 weeks = half a year) ahead in future
n_steps = len(exog_future)
pred_uc_95 = results.get_forecast(steps=n_steps, exog=exog_future, alpha=0.05)
pred_uc_95 = pred_uc_95.summary_frame()
pred_uc_95
# +
# plot the forecast along with the confidence band
axis = indexed_df['units'].plot(label='Observed', figsize=(15, 6))
predictions['mean'].plot(ax=axis, label='FittedValues', alpha=0.7)
pred_uc_95['mean'].plot(ax=axis, label='Forecast', alpha=0.7)
axis.fill_between(pred_uc_95.index, pred_uc_95['mean_ci_lower'], pred_uc_95['mean_ci_upper'], color='k', alpha=.25)
#axis.fill_between(fc_all.index, fc_all['lower_ci_99'], fc_all['upper_ci_99'], color='k', alpha=.75)
axis.set_xlabel('Years')
axis.set_ylabel('Units')
axis.set_title('Printer Sales vs Forecasts')
plt.legend(loc='best')
plt.show()
# +
from sklearn.metrics import mean_squared_error
from numpy import sqrt
mse = mean_squared_error(test['units'], pred_uc_95['mean'])
print('\n')
print('Printing Mean Squared Error of Predictions...')
print('Test MSE: %.6f' % mse)
# Root Mean Squared Error
rmse = sqrt(mse)
print('Test RMSE: %.2f' % rmse)
# -
mse = mean_squared_error(test['units'], pred_uc_95['mean'])
rmse = sqrt(mse)
print('Printing Root Mean Squared Error of Predictions...')
print('Test RMSE: %.2f' % rmse)
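# Alongside RMSE, a scale-independent error such as MAPE is often reported so that forecasts on different series can be compared; a minimal numpy sketch (the arrays below are illustrative, not the notebook's data):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent (assumes no zero actuals)."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100.0

actual = np.array([100.0, 120.0, 80.0])
forecast = np.array([110.0, 114.0, 84.0])
print('MAPE: %.2f%%' % mape(actual, forecast))  # (0.10 + 0.05 + 0.05) / 3 * 100 ≈ 6.67%
```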
# + active=""
# #Plotting forecasted vs Observed values
#
# fig, ax = plt.subplots()
# # ax.set(title='Spot Exchange Rate, Euro into USD', xlabel='Date', ylabel='Euro into USD')
#
# ax.plot(ts_week, label='observed') # ts_week[-60:], 'o',
# # ax.plot(np.exp(predictions_series), 'g', label='rolling one-step out-of-sample forecast')
# ax.plot(fc_all['forecast'], 'g', label='forecast')
#
# legend = ax.legend(loc='upper left')
# legend.get_frame().set_facecolor('w')
# -
pred_uc_95['actuals'] = test['units']
pred_uc_95
res = sm.tsa.seasonal_decompose(sales_ts)
resplot = res.plot()
|
ARIMA2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: veroku-env
# language: python
# name: veroku-env
# ---
# # Slip on Grass Example
# This notebook contains a toy example model for reasoning about the probability of slipping and getting injured on grass.
# We first create the prior and conditional factors and then use these factors to construct a cluster graph. We then use this graph to perform
# inference.
import sys
sys.path.append('../')
from veroku.factors.categorical import Categorical, CategoricalTemplate
from veroku.cluster_graph import ClusterGraph
import numpy as np
all_factors = []
# ### 1. Factor Specification
# In this section we specify the prior and conditional distributions that will define our graph (and joint distribution).
# #### 1.1 Priors
# +
from veroku.factors.sparse_categorical import SparseCategorical
var_names = ['rain']
probs_table = {(0,):0.8, # no rain
(1,):0.2} # rain
var_cardinalities = [2]
SparseCategorical(probs_table=probs_table,
var_names=var_names,
cardinalities=var_cardinalities)
# +
var_names = ['rain']
probs_table = {(0,):0.8, # no rain
               (1,):0.2} # rain
var_cardinalities = [2]
rain_prior_pd = Categorical(probs_table=probs_table,
                            var_names=var_names,
                            cardinalities=var_cardinalities)
all_factors.append(rain_prior_pd)
rain_prior_pd.show()
# +
var_names = ['sprinklers']
probs_table = {(0,):0.2, # sprinklers off
               (1,):0.8} # sprinklers on
var_cardinalities = [2]
sprinklers_prior_pd = Categorical(probs_table=probs_table,
                                  var_names=var_names,
                                  cardinalities=var_cardinalities)
all_factors.append(sprinklers_prior_pd)
sprinklers_prior_pd.show()
# -
rain_prior_pd.absorb(sprinklers_prior_pd).show()
# #### 1.2 Conditional Probability Distributions
# +
var_names = ['rain', 'sprinklers', 'wet_grass']
probs_table = {(0,0,0):0.90, # grass dry given no rain and sprinklers off
(0,0,1):0.10, # grass wet given "
(0,1,0):0.05, # grass dry given no rain and sprinklers on
(0,1,1):0.95, # grass wet given "
(1,0,0):0.20, # grass dry given rain and sprinklers off
(1,0,1):0.80, # grass wet given "
(1,1,0):0.00, # grass dry given rain and sprinklers on
(1,1,1):1.00} # grass wet given "
var_cardinalities = [2,2,2]
wet_grass_given_rain_and_sprinklers_cpd = Categorical(probs_table=probs_table,
var_names=var_names,
cardinalities=var_cardinalities)
all_factors.append(wet_grass_given_rain_and_sprinklers_cpd)
wet_grass_given_rain_and_sprinklers_cpd.show()
# +
var_names = ['wet_grass','slip']
probs_table = {(0,0):0.2, #no slip given no wet_grass
(0,1):0.8, # slip given no wet_grass
(1,0):0.6, #no slip given wet_grass
(1,1):0.4} # slip given wet_grass
var_cardinalities = [2,2]
slip_given_rain_cpd = Categorical(probs_table=probs_table,
var_names=var_names,
cardinalities=var_cardinalities)
all_factors.append(slip_given_rain_cpd)
slip_given_rain_cpd.show()
# +
var_names = ['slip','injury']
probs_table = {(0,0):0.9, #no injury given no slip
(0,1):0.1, # injury given no slip
(1,0):0.2, #no injury given slip
(1,1):0.8} # injury given slip
var_cardinalities = [2,2]
injury_given_slip_cpd = Categorical(probs_table=probs_table,
var_names=var_names,
cardinalities=var_cardinalities)
all_factors.append(injury_given_slip_cpd)
injury_given_slip_cpd.show()
# -
# ### 2. Cluster Graph Inference
# In this section we use the distributions specified in the previous section to build a cluster graph and run message passing inference in this graph to get the posterior marginal distribution over the event of slipping on the grass.
observed_evidence = {'rain':1}
marginal_vrs = ['slip']
# + pycharm={"is_executing": true}
cluster_graph = ClusterGraph(all_factors, observed_evidence)
cluster_graph.debug = True
cluster_graph.show()
# -
cluster_graph._clusters[1]._factor.show()
cluster_graph.process_graph(tol=0.01, max_iter=10)
slip_posterior = cluster_graph.get_marginal(vrs=marginal_vrs)
slip_posterior.normalize().show()
# ### 3.0 Without a Cluster Graph
# We can of course also compute marginal distributions without using a cluster graph, by simply multiplying all the factors together, observing variables and reducing the joint distribution (with the given evidence), and then marginalising
# out the variables that we are not interested in. While this approach works well enough for small examples, it often becomes intractable when the number of factors grows large - since this causes a combinatoric increase in the parameters of the joint distribution.
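# The combinatoric increase mentioned above is easy to quantify: a joint table over n binary variables has 2^n entries, while a factored representation only stores each small factor's table. A rough sketch, with illustrative numbers:

```python
# Parameter counts for n binary variables:
for n in [5, 10, 20, 30]:
    joint_entries = 2 ** n       # entries in the full joint distribution table
    factored_entries = n * 4     # upper bound if each factor touches <= 2 binary variables
    print(n, joint_entries, factored_entries)
```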
# +
joint_distribution = all_factors[0]
for factor in all_factors[1:]:
joint_distribution = joint_distribution.absorb(factor)
reduced_distribution = joint_distribution.reduce(vrs=list(observed_evidence.keys()),
values=list(observed_evidence.values()))
marginal_posterior = reduced_distribution.marginalize(vrs=marginal_vrs, keep=True)#.normalize()
marginal_posterior.normalize().show()
# -
assert(slip_posterior.equals(marginal_posterior))
|
examples/slip_on_grass.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # CCL feature demo
# **SLAC 2018 DESC meeting**
# In this demo, we use CCL to set up a cosmology and show how to get different quantities of interest.
import numpy as np
import matplotlib.pyplot as plt
import pyccl as ccl
# We start by setting up a cosmology object. This holds the cosmological parameters and metadata. The cosmology object is needed as input for many other functions.
#
# We set three of these to demonstrate the different options which are available.
# +
# Basic cosmology with mostly default parameters and calculation settings.
cosmo = ccl.Cosmology(Omega_c=0.27, Omega_b=0.045, h=0.67, A_s=2.1e-9, n_s=0.96,
Neff=3.046, Omega_k=0.)
# Cosmology which incorporates baryonic correction terms in the power spectrum.
cosmo_baryons = ccl.Cosmology(Omega_c=0.27, Omega_b=0.045, h=0.67, A_s=2.1e-9, n_s=0.96,
Neff=3.046, Omega_k=0., baryons_power_spectrum='bcm',
bcm_log10Mc=14.079181246047625, bcm_etab=0.5, bcm_ks=55.0)
# Cosmology where the power spectrum will be computed with an emulator.
cosmo_emu = ccl.Cosmology(Omega_c=0.27, Omega_b=0.05, h=0.67, sigma8=0.83, n_s=0.96,
Neff=3.04, Omega_k=0., transfer_function='emulator',
matter_power_spectrum="emu")
# -
# ## Background quantities
# We can calculate a variety of background-type quantities. We set up a vector of scale factors at which to compute them.
z = np.linspace(0.0001, 5., 100)
a = 1. / (1.+z)
# Compute **distances**:
# +
chi_rad = ccl.comoving_radial_distance(cosmo, a)
chi_ang = ccl.comoving_angular_distance(cosmo,a)
lum_dist = ccl.luminosity_distance(cosmo, a)
dist_mod = ccl.distance_modulus(cosmo, a)
# Plot the comoving radial distance as a function of redshift, as an example.
plt.figure()
plt.plot(z, chi_rad, 'k', linewidth=2)
plt.xlabel('$z$', fontsize=20)
plt.ylabel('Comoving distance, Mpc', fontsize=15)
plt.tick_params(labelsize=13)
plt.show()
# -
# Compute **growth quantities**:
# +
D = ccl.growth_factor(cosmo, a)
f = ccl.growth_rate(cosmo, a)
plt.figure()
plt.plot(z, D, 'k', linewidth=2, label='Growth factor')
plt.plot(z, f, 'g', linewidth=2, label='Growth rate')
plt.xlabel('$z$', fontsize=20)
plt.tick_params(labelsize=13)
plt.legend(loc='lower left')
plt.show()
# -
# The ratio of the **Hubble parameter** at scale factor a to H0:
# +
H_over_H0 = ccl.h_over_h0(cosmo, a)
plt.figure()
plt.plot(z, H_over_H0, 'k', linewidth=2)
plt.xlabel('$z$', fontsize=20)
plt.ylabel('$H / H_0$', fontsize=15)
plt.tick_params(labelsize=13)
plt.show()
# -
# For each component of the matter / energy budget, we can get $\Omega_{\rm x}(z)$, the **fractional energy density** at $z \ne 0$.
OmM_z = ccl.omega_x(cosmo, a, 'matter')
OmL_z = ccl.omega_x(cosmo, a, 'dark_energy')
OmR_z = ccl.omega_x(cosmo, a, 'radiation')
OmK_z = ccl.omega_x(cosmo, a, 'curvature')
OmNuRel_z = ccl.omega_x(cosmo, a, 'neutrinos_rel')
OmNuMass_z = ccl.omega_x(cosmo, a, 'neutrinos_massive')
plt.figure()
plt.plot(z, OmM_z, 'k', linewidth=2, label='$\Omega_{\\rm M}(z)$')
plt.plot(z, OmL_z, 'g', linewidth=2, label='$\Omega_{\Lambda}(z)$')
plt.plot(z, OmR_z, 'b', linewidth=2, label='$\Omega_{\\rm R}(z)$')
plt.plot(z, OmNuRel_z, 'm', linewidth=2, label='$\Omega_{\\nu}^{\\rm rel}(z)$')
plt.xlabel('$z$',fontsize=20)
plt.ylabel('$\Omega_{\\rm x}(z)$', fontsize= 20)
plt.tick_params(labelsize=13)
plt.legend(loc='upper right')
plt.show()
# ## Matter power spectra and related quantities
# To compute the matter power spectrum, we define a vector of k values, and use the same z values as above.
k = np.logspace(-3, 2, 100)
# The first power spectrum call for a given cosmology will take a few seconds to run, because we are computing $P(k)$ with CLASS and initializing splines. Further calls will be much quicker because they just access the precomputed splined values.
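# This compute-once-then-reuse behaviour is essentially memoization; a minimal stdlib analogy (the `expensive_spline_setup` function below is a hypothetical stand-in for the CLASS call, not part of CCL):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_spline_setup(param):
    time.sleep(0.05)  # stand-in for running CLASS and building the P(k) splines
    return 2.0 * param

t0 = time.perf_counter(); expensive_spline_setup(1.0); first_call = time.perf_counter() - t0
t0 = time.perf_counter(); expensive_spline_setup(1.0); second_call = time.perf_counter() - t0
print('first: %.3fs, cached: %.6fs' % (first_call, second_call))
```

# The second call skips the expensive setup entirely, which is why repeated `linear_matter_power` calls on the same cosmology return quickly.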
# +
z_Pk = 0.2
a_Pk = 1. / (1.+z_Pk)
Pk_lin = ccl.linear_matter_power(cosmo, k, a_Pk)
Pk_nonlin = ccl.nonlin_matter_power(cosmo, k, a_Pk)
Pk_baryon = ccl.nonlin_matter_power(cosmo_baryons, k, a_Pk)
Pk_emu = ccl.nonlin_matter_power(cosmo_emu, k, a_Pk)
# -
plt.figure()
plt.loglog(k, Pk_lin, 'k', linewidth=2, label='Linear')
plt.loglog(k, Pk_nonlin, 'g', linewidth=2, label='Non-linear (halofit)')
plt.loglog(k, Pk_baryon, 'm', linewidth=2, linestyle=':', label='With baryonic correction')
plt.loglog(k, Pk_emu, 'b', linewidth=2, linestyle = '--', label='CosmicEmu')
plt.xlabel('$k, \\frac{1}{\\rm Mpc}$', fontsize=20)
plt.ylabel('$P(k), {\\rm Mpc^3}$', fontsize=20)
plt.xlim(0.001, 50)
plt.ylim(0.01, 10**6)
plt.tick_params(labelsize=13)
plt.legend(loc='lower left')
plt.show()
# We can also compute $\sigma_{\rm R}$, the RMS variance in a top-hat of radius R Mpc, as well as the special case of $\sigma_{8}$.
# +
R = np.linspace(5, 20, 15)
sigmaR = ccl.sigmaR(cosmo, R)
sigma8 = ccl.sigma8(cosmo)
print("sigma8 =", sigma8)
# -
# ## $C_\ell$ spectra
# We can compute $C_\ell$ for galaxy counts, galaxy lensing, and CMB lensing, for autocorrelations or any cross-correlation.
# The first step to getting $C_\ell$'s involving galaxy counts or lensing is to define a photo-z probability function and a galaxy redshift distribution. CCL allows you to flexibly design your own photo-z function, but for the purposes of demonstration we use the included Gaussian function.
z_pz = np.linspace(0.3, 3., 3) # Define the edges of the photo-z bins.
pz = ccl.PhotoZGaussian(sigma_z0=0.05)
# We get the galaxy redshift distribution for each tomographic bin, for galaxy counts and galaxy lensing.
# +
def dndz(z,args) :
return z**2*np.exp(-(z/0.5)**1.5)
redshift_dist=ccl.dNdzFunction(dndz)
dNdz_nc = [ccl.dNdz_tomog(z=z, zmin=z_pz[zi], zmax=z_pz[zi+1], pz_func=pz, dNdz_func=redshift_dist)
for zi in range(0, len(z_pz)-1)]
dNdz_len = [ccl.dNdz_tomog(z=z, zmin=z_pz[zi], zmax=z_pz[zi+1], pz_func=pz, dNdz_func=redshift_dist)
for zi in range(0, len(z_pz)-1)]
# -
# Let's assume a toy linear galaxy bias for our galaxy-count tracer.
bias = 2.*np.ones(len(z))
# We can now set up tracer objects for CMB lensing and for each tomographic bin of galaxy counts and galaxy lensing.
# +
gal_counts = ([ccl.NumberCountsTracer(cosmo, has_rsd=False,
dndz=(z, dNdz_nc[zi]), bias=(z, bias)) for zi in range(0, len(z_pz)-1)])
gal_lens = ([ccl.WeakLensingTracer(cosmo, dndz=(z, dNdz_len[zi])) for zi in range(0, len(z_pz)-1)])
cmb_lens = [ccl.CMBLensingTracer(cosmo, z_source=1089.)]
all_tracers = gal_counts + gal_lens + cmb_lens
# -
# With these tracer objects, we can now get $C_\ell$'s.
# +
ell = np.linspace(1, 2000, 2000)
n_tracer = len(all_tracers)
c_ells = ([[ccl.angular_cl(cosmo, all_tracers[ni], all_tracers[nj], ell)
for ni in range(0, n_tracer)] for nj in range(0, n_tracer)])
# -
# We can plot a couple of examples:
plt.figure()
plt.loglog(ell, c_ells[0][0], 'k', linewidth=2, label='gg bin 1 auto')
plt.loglog(ell, c_ells[0][3], 'g', linewidth=2, label='g1 x src2')
plt.loglog(ell, c_ells[4][4], 'm', linewidth=2, label='CMB lensing auto')
plt.xlabel('$\ell$', fontsize=20)
plt.ylabel('$C_\ell$', fontsize=20)
plt.xlim(1, 1000)
plt.tick_params(labelsize=13)
plt.legend(loc='lower left')
plt.show()
# # Correlation functions
# From the $C_\ell$s, we can then get correlation functions. Let's do an example of each type.
# +
theta_deg = np.logspace(-1, np.log10(5.), 20) # Theta is in degrees
xi_plus = ccl.correlation(cosmo, ell, c_ells[2][2], theta_deg, corr_type='L+', method='FFTLog')
xi_minus = ccl.correlation(cosmo, ell, c_ells[2][2], theta_deg, corr_type='L-', method='FFTLog')
xi_gg = ccl.correlation(cosmo, ell, c_ells[0][0], theta_deg, corr_type='GG', method='FFTLog')
# +
plt.figure()
plt.loglog(theta_deg, xi_plus, '+k', label='+')
plt.loglog(theta_deg, xi_minus, 'ob', label='-')
plt.xlabel('$\\theta$, deg', fontsize=20)
plt.ylabel('$\\xi_{+ / -}$', fontsize=20)
plt.xlim(0.1, 5)
plt.ylim(10**(-7), 10**(-4))
plt.tick_params(labelsize=13)
plt.legend(loc='lower left')
plt.show()
plt.figure()
plt.loglog(theta_deg, xi_gg, 'mo', linewidth=2)
plt.xlabel('$\\theta$, deg', fontsize=20)
plt.ylabel('$\\xi_{gg}$', fontsize=20)
plt.xlim(0.1, 5)
plt.ylim(4*10**(-5), 0.05)
plt.tick_params(labelsize=13)
plt.show()
# -
# # Halo Mass Function & Halo Bias
# We can compute the halo bias and halo mass function from Tinker et al.
# +
halo_mass = np.logspace(10, 16, 200)
hmf = ccl.massfunc(cosmo, halo_mass, a=1., overdensity=200)
# -
plt.figure()
plt.loglog(halo_mass, hmf, 'k', linewidth=2)
plt.xlabel('Halo mass, $M_\odot$', fontsize=20)
plt.ylabel('$\\frac{dn}{dlog_{10}M}$', fontsize=20)
plt.tick_params(labelsize=13)
plt.show()
# +
halo_bias = ccl.halo_bias(cosmo, halo_mass, a=1., overdensity=200)
plt.figure()
plt.loglog(halo_mass, halo_bias, 'k', linewidth=2)
plt.xlabel('Halo mass, $M_\odot$', fontsize=20)
plt.ylabel('$b_h$', fontsize=20)
plt.tick_params(labelsize=13)
plt.show()
# -
|
examples/SLAC Feb2018 Demo.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import requests
import os
import json
from datetime import datetime
import uuid
consumer_key= 'p5CzEB9O7j6uS4pgoWGpG28oe'
consumer_secret= '<KEY>'
access_token= '<KEY>'
access_token_secret= '<KEY>'
bearer_token = '<PASSWORD>Hs<PASSWORD>Vku<PASSWORD>ftFN1DlKV5HNLKmKB7KFwEfiPONZqx9oRt8AfAtGkyjV48bsN1'
# ### Scenario 1: Real-time streaming
#
# ***Note:*** The run has been terminated after extracting two tweets, since this is a real-time continuous run
# +
import requests
import os
import json
df=pd.DataFrame()
def bearer_oauth(r):
"""
Method required by bearer token authentication.
"""
r.headers["Authorization"] = f"Bearer {bearer_token}"
r.headers["User-Agent"] = "v2FilteredStreamPython"
return r
def get_rules():
response = requests.get(
"https://api.twitter.com/2/tweets/search/stream/rules", auth=bearer_oauth
)
if response.status_code != 200:
raise Exception(
"Cannot get rules (HTTP {}): {}".format(response.status_code, response.text)
)
print(json.dumps(response.json()))
return response.json()
def delete_all_rules(rules):
if rules is None or "data" not in rules:
return None
ids = list(map(lambda rule: rule["id"], rules["data"]))
payload = {"delete": {"ids": ids}}
response = requests.post(
"https://api.twitter.com/2/tweets/search/stream/rules",
auth=bearer_oauth,
json=payload
)
if response.status_code != 200:
raise Exception(
"Cannot delete rules (HTTP {}): {}".format(
response.status_code, response.text
)
)
print(json.dumps(response.json()))
def set_rules(delete):
# You can adjust the rules if needed
sample_rules = [
{"value": "<NAME>", "tag": "music"}
]
payload = {"add": sample_rules}
response = requests.post(
"https://api.twitter.com/2/tweets/search/stream/rules",
auth=bearer_oauth,
json=payload,
)
if response.status_code != 201:
raise Exception(
"Cannot add rules (HTTP {}): {}".format(response.status_code, response.text)
)
print(json.dumps(response.json()))
def get_stream(set):
response = requests.get(
"https://api.twitter.com/2/tweets/search/stream", auth=bearer_oauth, stream=True
)
print(response.status_code)
if response.status_code != 200:
raise Exception(
"Cannot get stream (HTTP {}): {}".format(
response.status_code, response.text
)
)
for response_line in response.iter_lines():
if response_line:
json_response = json.loads(response_line)
#Loading to Bigquery
print(json.dumps(json_response, indent=4, sort_keys=True))
df['id']=[json_response['data']['id']]
df['text']=[json_response['data']['text']]
#Extracting current timestamp
currentdt=datetime.now()
df['load_date_time']=[currentdt]
#Creating unique batch id
id = uuid.uuid1()
df['BatchId']=[id.int]
df.to_gbq('myproject-330700.twitter_data.tweet_data', project_id='myproject-330700',credentials=credentials,
if_exists='append')
def main():
rules = get_rules()
delete = delete_all_rules(rules)
set = set_rules(delete)
get_stream(set)
if __name__ == "__main__":
main()
# -
# ### Loading Stream into Bigquery
#
# 
#
#
# ***Since this is a real-time load into Bigquery, we cannot capture the unique count/total count. Those were captured as part of the Batch Load scenario below***
# ### Scenario 2: Batch Load
#
# ***Note:*** The free account which has been utilized for this exercise will extract all tweets related to the keyword for the past 7 days from a given date
import tweepy
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, wait_on_rate_limit=True)
word='<NAME> #music'
tweets = tweepy.Cursor(api.search_tweets,
q=word,
lang="en",
until="2021-10-30").items()
tweets
data=[[tweet.id,tweet.created_at,tweet.text] for tweet in tweets]
data
# +
df=pd.DataFrame()
ids=[data[i][0] for i in range(0,len(data))]
text=[data[i][1] for i in range(0,len(data))]
df['id']=ids
df['text']=text
#count of all tweets extracted
twcount=len(df)
#count of unique tweets extracted
twuncount=len(df.id.unique())
#Dropping duplicates
df.drop_duplicates(subset ="id",keep = False, inplace = True)
#Extracting current timestamp
currentdt=datetime.now()
df['load_date_time']=currentdt
#Creating unique batch id
id = uuid.uuid1()
df['BatchId']=id.int
#Creating log table
log_df=pd.DataFrame()
log_df['BatchId']=[id.int]
log_df['tweet_count']=[twcount]
log_df['uniq_tweet_count']=[twuncount]
log_df['load_date_time']=[currentdt]
# -
# ### Loading to Google Bigquery
from google.oauth2 import service_account
service_account_info = json.load(open('/Users/prudhvichandra/Documents/twitter_gcp_scacc.json'))
credentials = service_account.Credentials.from_service_account_info(
service_account_info)
df.to_gbq('myproject-330700.twitter_data.tweet_data', project_id='myproject-330700',credentials=credentials,
if_exists='append')
log_df.to_gbq('myproject-330700.twitter_data.tweeet_log', project_id='myproject-330700',credentials=credentials,
if_exists='append')
# ### Link to access tweet data on bigquery
#
# 
#
# ### Link to access log table of tweet Load
#
# 
#
# ### Questions:
#
# ***What are the risks involved in building such a pipeline?***
#
# - Need to have efficient failure-handling mechanisms, otherwise there is a high chance of missing data or having duplicated data
# - The API tokens, keys and URLs have to be closely monitored and updated when there is a change
# - Error traceability is an important aspect of such pipelines
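# The failure-handling risk above is often addressed with retries and exponential backoff around the API calls; a minimal stdlib-only sketch (the function names are illustrative, not part of the pipeline above):

```python
import time

def with_retries(fn, max_attempts=3, base_delay=0.01):
    """Call fn(), retrying with exponential backoff on transient errors."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # attempts exhausted: surface the error for traceability
            time.sleep(base_delay * 2 ** attempt)

calls = {'n': 0}
def flaky_request():
    # Simulates an API call that fails twice before succeeding
    calls['n'] += 1
    if calls['n'] < 3:
        raise ConnectionError('transient network error')
    return 'ok'

result = with_retries(flaky_request)
print(result, calls['n'])  # 'ok' after 3 attempts
```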
#
# ***How would you roll out the pipeline going from proof-of-concept to a production-ready solution?***
#
# In order to roll out a production-ready solution we need to determine the following:
#
# - Agree upon how frequently we need this data to be stored (real-time/batch) based on business requirements.
# - Once agreed upon we have to collaborate with API teams and make sure we have all the production ready urls and keys required
# - Then we need to design a Data Architecture and define what components or services are needed to accomplish this
# - API ----> App running on Google Cloud Run/VM to stream data from api -----> Load into Bigquery(Real-time)
# - API ----> Python code on Google Cloud Functions that can be scheduled using pubsub & Cloud Scheduler -------> Loading into Bigquery(Batch)
# - Once we have data architecture finalised we define table structures/schemas and access controls etc.
# - Then we need to parameterise the Python code, for example storing the keys, tokens, table names and URLs in config files/env variables and reading them from there instead of hard-coding; add shell scripting, etc.
# - Then we make sure we enable each service required and build an end-to-end pipeline on dev
# - Testing the pipeline
# - UAT approval
# - Moving elements to production
#
#
# ***What would a production-ready solution entail that a POC wouldn't?***
#
# - No defined Architecture
# - Everything is hard-coded in a POC: no config files, no shell scripts, no env variables
# - No enabling systems to run the code
# - No users are involved in POC stage
#
#
# ***What is the level of effort required to deliver each phase of the solution?***
#
# - Initial Business Analysis with users/stakeholders to define product owners and agree upon the requirements
# - Data Modelling/Architecture to design an architecture and other enabling elements
# - Cloud Infrastructure setup and code finalisation
# - Engineering data pipeline and making sure we have the business use case justified
# - Testing the pipeline
# - User acceptance and approval
# - Deploying the pipeline into production
#
#
# ***What is your estimated timeline for delivery for a production-ready solution?***
#
# - Business Analysis and complete requirement gathering: 1 week
# - Data Modelling/Architecture: 1 week
# - Cloud Infrastructure setup and code enablement: 2 weeks
# - Testing: 1 week
# - UAT & Deployment: 1 Week
#
#
|
Walmart/Walmart_Prudhvi.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import nltk
from collections import defaultdict
# ### Import Scripts
episode_scripts_df = pd.read_csv("../05_Data/02_Clean/scripts_by_law_order_half.csv",index_col=0)
len(episode_scripts_df) #Number of Scripts should be 456
episode_scripts_df.head(3)
# ### Truncate Script Half
# Although the episodes are split in half between Law and Order, the cutoff may not be exactly at the 50% mark, so this section lets you adjust the slice boundaries if it helps with the clustering
def reduce_script(row,start,end):
text = row.split()
start_pos = int(len(text)*start)
end_pos = int(len(text)*end)
text = text[start_pos:end_pos]
return " ".join(text)
episode_scripts_df["first"] = episode_scripts_df["first"].apply(reduce_script,start=0.15,end=.75)
episode_scripts_df.head()
# ### Process Text for NLP
# #### 1. Removing punctuation - Removed by the previous notebook (04_Clean_Scripts_by_Episode.ipynb) already
# #### 2. Make Lowercase - Even though CountVectorizer and TFIDF already do this, why not do it manually here
episode_scripts_df["first"] = episode_scripts_df["first"].apply(lambda x: x.lower())
episode_scripts_df["first"][0][4500:4750]
# #### 3. Remove Numbers
from string import digits
remove_digits = str.maketrans('', '', digits)
episode_scripts_df["first"] = episode_scripts_df["first"].apply(lambda x: x.translate(remove_digits))
episode_scripts_df["first"][0][4500:4750]
# #### 4. Import Stopwords and Remove Stopwords
law_order_stopwords = list(pd.read_csv("../05_Data/02_Clean/all_stopwords.csv",index_col=0)['0'])
law_order_stopwords[:5]
relationship_stopwords = ["mother","sister","father","brother","neighbor","cop","witness","customer","daughter",
"dad","attorney","baby","body","boyfriend","captain","clerk","cousin","fbi","girlfriend",
"killer","lady","lieutenant","men","nurse","officer","parent","person","shooter","victim",
"warrant","witness","judge","client","miss","mom","teacher","professor","student",
"child","childrens","employee","uncle","doctor","patient","owner", "driver",
"writer", "psychiatrist", "suspect","book","horse","dog","animal","chinese","drug"]
# Combine Stopword Lists
law_order_stopwords = law_order_stopwords + relationship_stopwords
# Remove Stopwords
# +
from nltk.tokenize import word_tokenize
def remove_stopwords(row):
tokens = word_tokenize(row)
tokens_clean = [word for word in tokens if not word in law_order_stopwords]
text_clean = " ".join(tokens_clean)
return text_clean.strip()
# -
episode_scripts_df["first"] = episode_scripts_df["first"].apply(remove_stopwords)
episode_scripts_df["first"][0][4500:4750]
# #### 5. Lemmatize Text
replace_dict = {"abused":"abuse", "adopted":"adoption", "bombing":"bomb", "fighting":"fight",
"gangster":"gang","jew":"jewish","marriage":"married","medication":"medicine",
"pregnancy":"pregnant","senate":"senator","shoot":"shooter","stabbed":"stab",
"stabbing":"stab","suicidal":"suicide","russia":"russian","raped":"rape"}
# +
from nltk.stem import WordNetLemmatizer
def lemma_my_text(row):
tokens = word_tokenize(row)
lemmatizer = WordNetLemmatizer()
tokens_lemma = [lemmatizer.lemmatize(y) for y in tokens]
tokens_lemma = [x if x not in replace_dict else replace_dict[x] for x in tokens_lemma] #remap words in dict
text_lemma = " ".join(tokens_lemma)
return text_lemma.strip()
# -
episode_scripts_df["first"] = episode_scripts_df["first"].apply(lemma_my_text)
episode_scripts_df["first"][0][2500:2750]
# ### Vectorize
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
# Stopwords are included again here: if lemmatization produced new variants of a stopword, they are caught at this stage
vectorizer = TfidfVectorizer(ngram_range=(1,1),
stop_words=law_order_stopwords,
token_pattern="\\b[a-zA-Z][a-zA-Z][a-zA-Z]+\\b", #words with >= 3 alpha chars
max_df=.45,
min_df=.02)
X = vectorizer.fit_transform(episode_scripts_df["first"])
X.shape
n_grams = list(vectorizer.get_feature_names())
counts = list(sum(X.toarray()))
word_counts = list(zip(n_grams, counts))
pd.DataFrame(word_counts).sort_values(by=1,ascending=False).head(5)
X_df = pd.DataFrame(X.todense(), columns=vectorizer.get_feature_names())
X_df.head()
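# A toy illustration (not from this notebook's data) of what the `max_df=.45` / `min_df=.02` settings above do: any term appearing in more than 45% of documents, or in fewer than 2%, is dropped from the vocabulary. The corpus below is made up for demonstration.

```python
docs = [
    "murder case detective",
    "murder trial jury",
    "murder detective arrest",
    "jury verdict appeal",
]
vocab = sorted({w for d in docs for w in d.split()})
# document frequency: fraction of docs containing each term
df = {w: sum(w in d.split() for d in docs) / len(docs) for w in vocab}
kept = [w for w in vocab if 0.02 <= df[w] <= 0.45]
print(kept)  # ['appeal', 'arrest', 'case', 'trial', 'verdict']
```

# "murder" (df=0.75), "detective" and "jury" (df=0.5) are too common and get filtered, which is exactly why the thresholds above remove franchise-wide vocabulary before topic modeling.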
# ### NMF Model
# +
from sklearn.decomposition import NMF
nmf = NMF(n_components=16)
nmf_vecs = nmf.fit_transform(X_df)
# -
len(nmf_vecs)
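# As a rough illustration of what the NMF fit above computes, here is a hand-rolled Lee-Seung multiplicative-update sketch on a small random matrix (not sklearn's solver; all sizes and names here are illustrative stand-ins for `X_df` and `n_components=16`). It factors a nonnegative matrix V into nonnegative W (episode-topic weights, analogous to `nmf_vecs`) and H (topic-term weights, analogous to `nmf.components_`).

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 30))        # stand-in for the TF-IDF matrix
k = 4                           # stand-in for n_components
W = rng.random((20, k)) + 1e-3  # episode-topic weights
H = rng.random((k, 30)) + 1e-3  # topic-term weights
eps = 1e-9
for _ in range(200):
    # multiplicative updates keep W and H nonnegative throughout
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(rel_err, 3))  # relative reconstruction error of the rank-k factorization
```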
# ### Explore Topics
# #### How many Episodes in each Topic?
topic, episode_count = np.unique(nmf_vecs.argmax(axis = 1), return_counts=True) # Num of eps in each topic
topic_count_df = pd.DataFrame(list(zip(topic,episode_count)))
topic_count_df = topic_count_df.drop([0],axis=1)
topic_count_df = topic_count_df.rename({1:"num_eps"},axis=1)
topic_count_df
# #### Print Out Each Topic
def print_top_words(model, feature_names, n_top_words):
for index, topic in enumerate(model.components_):
message = "Topic #%d: " % index
message += " ".join([feature_names[i] for i in topic.argsort()[:-n_top_words - 1:-1]])
print(message, "\n")
words = list(X_df.columns.values)
print_top_words(model=nmf,feature_names=words,n_top_words=40)
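# The slice used inside `print_top_words` above deserves a quick check: `argsort()` sorts ascending, and `[:-n_top_words - 1:-1]` walks backwards over the last n entries, i.e. it yields the indices of the n largest weights, largest first. A tiny standalone demo:

```python
import numpy as np

weights = np.array([0.1, 0.7, 0.3, 0.9])
n = 2
top = weights.argsort()[:-n - 1:-1]  # indices of the n largest values, descending
print(top.tolist())  # [3, 1]
```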
# #### Visualize Topics
import matplotlib.pyplot as plt
eps_df = pd.DataFrame(nmf_vecs)
eps1_df = eps_df
plt.pcolor(eps1_df)
plt.yticks(np.arange(0.5, len(eps1_df.index), 1), eps1_df.index)
plt.xticks(np.arange(0.5, len(eps1_df.columns), 1), eps1_df.columns)
plt.show()
# ### Create Words in Topics DataFrame
def get_nmf_topics(model, num_topics):
    # the word ids obtained need to be reverse-mapped to the words so we can print the topic names
    feat_names = vectorizer.get_feature_names()
    word_dict = {}
    for i in range(num_topics):
        # for each topic, obtain the ids of the 20 largest values and map them back to words
        words_ids = model.components_[i].argsort()[:-20 - 1:-1]
        words = [feat_names[key] for key in words_ids]
        word_dict[i] = words
    return pd.DataFrame(word_dict).transpose()
topics_df = get_nmf_topics(model=nmf,num_topics=16)
topics_df
# ### Create Topics in Episode DataFrame
# Find Top Topic per Episode
max_topic = nmf_vecs.argmax(axis = 1)
topic_df = pd.DataFrame(max_topic)
topic_df = topic_df.rename({0:"topic"},axis=1)
topic_df.head()
# Find Top Topic Score Per Episode
max_topic_score = nmf_vecs.max(axis = 1)
topic_score_df = pd.DataFrame(max_topic_score)
topic_score_df = topic_score_df.rename({0:"score"},axis=1)
topic_score_df.head()
# Big Episode Topic Matrix
episode_topics_df = pd.DataFrame(nmf_vecs)
episode_topics_df.head()
# Merge Episode DataFrames Together
eps_topic_comb_df = episode_scripts_df.merge(topic_df,how='outer', left_index=True, right_index=True)
eps_topic_comb_df = eps_topic_comb_df.merge(topic_score_df,how='outer', left_index=True, right_index=True)
eps_topic_comb_df = eps_topic_comb_df.merge(episode_topics_df,how='outer', left_index=True, right_index=True)
eps_topic_comb_df = eps_topic_comb_df.drop(["first","second"],axis=1)
eps_topic_comb_df.head()
# ### Name Topics
top_eps_per_topic = eps_topic_comb_df.sort_values(['topic','score'],ascending=False)[["season_ep","topic","score"]].groupby('topic').head(5)
top_eps_per_topic[["season_ep","topic","score"]][top_eps_per_topic["topic"]==15]
# cycle through each topic and read its top-scoring scripts to choose a name
topic_dict = {0:"Bar/Drinking",1:"Shooting/Guns",2:"Sexual Assault",3:"Vehicle",4:"School",5:"Hospital",
6:"Email/Phone/Mail",7:"Tourist/Visitor",8:"Russian Mob",9:"Party/Nightclub",10:"Apartment Building",
11:"Bombing",12:"Credit Card Purchases",13:"Arson",14:"Money",15:"Romantic Relationship"}
topic_names_df = eps_topic_comb_df.rename(topic_dict,axis=1)
topic_names_df["topic"] = topic_names_df["topic"].map(topic_dict)
# ### Merge Topic Names with Episode Table & Export
topic_names_df.head()
topic_names_df.to_csv("../05_Data/04_Model_Output/1st_half_episodes_by_topic.csv")
# ### Merge Topic Names with Word Table & Export
topic_words_df = topic_count_df.merge(topics_df,how='outer', left_index=True, right_index=True)
topic_words_df = topic_words_df.rename(topic_dict,axis=0)
topic_words_df.head(10)
topic_words_df.to_csv("../05_Data/04_Model_Output/1st_half_topic_words.csv")
# ### Pickle The Model
# +
import pickle
with open('1st_half_episode_model.pkl', 'wb') as picklefile:
pickle.dump(nmf_vecs, picklefile)
# -
Model_1st_Half.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MobileNet - PyTorch
# # Step 1: Prepare data
# +
# MobileNet-Pytorch
import argparse
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR
from torchvision import datasets, transforms
from torch.autograd import Variable
from torch.utils.data.sampler import SubsetRandomSampler
from sklearn.metrics import accuracy_score
from mobilenets import mobilenet
use_cuda = torch.cuda.is_available()
dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
# +
# Train, Validate, Test. Heavily inspired by Kevinzakka https://github.com/kevinzakka/DenseNet/blob/master/data_loader.py
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
valid_size=0.1
# define transforms
valid_transform = transforms.Compose([
transforms.ToTensor(),
normalize
])
train_transform = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize
])
# load the dataset
train_dataset = datasets.CIFAR10(root="data", train=True,
download=True, transform=train_transform)
valid_dataset = datasets.CIFAR10(root="data", train=True,
download=True, transform=valid_transform)
num_train = len(train_dataset)
indices = list(range(num_train))
split = int(np.floor(valid_size * num_train))  # hold out 10% of the 50k training images for validation
np.random.seed(42)
np.random.shuffle(indices)  # randomly shuffle [0, 1, ..., 49999]
train_idx, valid_idx = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_idx)  # samples only from the given indices, in random order
valid_sampler = SubsetRandomSampler(valid_idx)
# ------------------------- batch-size-dependent settings ------------------------------------
show_step=10  # with larger batches, log less often
max_epoch=60  # maximum number of training epochs
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=256, sampler=train_sampler)
valid_loader = torch.utils.data.DataLoader(valid_dataset,
batch_size=256, sampler=valid_sampler)
test_transform = transforms.Compose([
transforms.ToTensor(), normalize
])
test_dataset = datasets.CIFAR10(root="data",
train=False,
download=True,transform=test_transform)
test_loader = torch.utils.data.DataLoader(test_dataset,
batch_size=64,
shuffle=True)
# -
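# The index split above can be sanity-checked in isolation. This standalone sketch repeats the same logic and verifies that the train and validation index sets are disjoint and together cover all 50k training images.

```python
import numpy as np

num_train, valid_size = 50000, 0.1
indices = list(range(num_train))
split = int(np.floor(valid_size * num_train))
np.random.seed(42)
np.random.shuffle(indices)
train_idx, valid_idx = indices[split:], indices[:split]
print(len(train_idx), len(valid_idx))        # 45000 5000
print(set(train_idx).isdisjoint(valid_idx))  # True: no image leaks into both sets
```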
# # Step 2: Model Config
# +
# the 32x32 input is downsampled 5 times (by the stride-2 convs) to 1x1 with 1024 channels
# From https://github.com/kuangliu/pytorch-cifar
import torch
import torch.nn as nn
import torch.nn.functional as F
class Block(nn.Module):
'''Depthwise conv + Pointwise conv'''
def __init__(self, in_planes, out_planes, stride=1):
super(Block, self).__init__()
        # depthwise convolution: number of groups = number of input channels
self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, padding=0, bias=False)
self.bn2 = nn.BatchNorm2d(out_planes)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = F.relu(self.bn2(self.conv2(out)))
return out
class MobileNet(nn.Module):
# (128,2) means conv planes=128, conv stride=2, by default conv stride=1
cfg = [64, (128,2), 128, (256,2), 256, (512,2), 512, 512, 512, 512, 512, (1024,2), 1024]
def __init__(self, num_classes=10):
super(MobileNet, self).__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(32)
        self.layers = self._make_layers(in_planes=32)  # build the layer stack automatically from cfg
self.linear = nn.Linear(1024, num_classes)
def _make_layers(self, in_planes):
layers = []
for x in self.cfg:
out_planes = x if isinstance(x, int) else x[0]
stride = 1 if isinstance(x, int) else x[1]
layers.append(Block(in_planes, out_planes, stride))
in_planes = out_planes
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layers(out)
out = F.avg_pool2d(out, 2)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
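# Why the depthwise + pointwise `Block` above is cheap: a quick parameter-count comparison in pure arithmetic, matching the weight shapes of the convolutions defined in `Block` (depthwise kxk with one filter per channel, then a 1x1 pointwise mixing step).

```python
def standard_conv_params(c_in, c_out, k=3):
    return c_in * c_out * k * k         # one kxk filter per (input, output) channel pair

def depthwise_separable_params(c_in, c_out, k=3):
    return c_in * k * k + c_in * c_out  # depthwise kxk + pointwise 1x1

print(standard_conv_params(32, 64))        # 18432
print(depthwise_separable_params(32, 64))  # 2336, roughly 8x fewer parameters
```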
# +
# From https://github.com/Z0m6ie/CIFAR-10_PyTorch
#model = mobilenet(num_classes=10, large_img=False)
# From https://github.com/kuangliu/pytorch-cifar
model=MobileNet(10).cuda()
optimizer = optim.Adam(model.parameters(), lr=0.01)
scheduler = StepLR(optimizer, step_size=10, gamma=0.5)
criterion = nn.CrossEntropyLoss()
# -
# Implement validation
def train(epoch):
model.train()
#writer = SummaryWriter()
for batch_idx, (data, target) in enumerate(train_loader):
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
optimizer.zero_grad()
output = model(data)
correct = 0
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).sum()
loss = criterion(output, target)
loss.backward()
accuracy = 100. * (correct.cpu().numpy()/ len(output))
optimizer.step()
if batch_idx % show_step == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}, Accuracy: {:.2f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item(), accuracy))
#writer.add_scalar('Loss/Loss', loss.item(), epoch)
#writer.add_scalar('Accuracy/Accuracy', accuracy, epoch)
scheduler.step()
def validate(epoch):
model.eval()
#writer = SummaryWriter()
valid_loss = 0
correct = 0
for data, target in valid_loader:
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
output = model(data)
        valid_loss += F.cross_entropy(output, target, reduction='sum').item()  # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).sum()
valid_loss /= len(valid_idx)
accuracy = 100. * correct.cpu().numpy() / len(valid_idx)
    print('\nValidation set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
        valid_loss, correct, len(valid_idx), accuracy))
#writer.add_scalar('Loss/Validation_Loss', valid_loss, epoch)
#writer.add_scalar('Accuracy/Validation_Accuracy', accuracy, epoch)
return valid_loss, accuracy
# +
# Fix best model
def test(epoch):
model.eval()
test_loss = 0
correct = 0
for data, target in test_loader:
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
output = model(data)
        test_loss += F.cross_entropy(output, target, reduction='sum').item()  # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).cpu().sum()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct.cpu().numpy() / len(test_loader.dataset)))
# -
def save_best(loss, accuracy, best_loss, best_acc):
if best_loss == None:
best_loss = loss
best_acc = accuracy
file = 'saved_models/best_save_model.p'
torch.save(model.state_dict(), file)
elif loss < best_loss and accuracy > best_acc:
best_loss = loss
best_acc = accuracy
file = 'saved_models/best_save_model.p'
torch.save(model.state_dict(), file)
return best_loss, best_acc
# +
# Fantastic logger for tensorboard and pytorch,
# run tensorboard by opening a new terminal and run "tensorboard --logdir runs"
# open tensorboard at http://localhost:6006/
from tensorboardX import SummaryWriter
best_loss = None
best_acc = None
import time
SINCE=time.time()
for epoch in range(max_epoch):
train(epoch)
loss, accuracy = validate(epoch)
best_loss, best_acc = save_best(loss, accuracy, best_loss, best_acc)
NOW=time.time()
DURINGS=NOW-SINCE
SINCE=NOW
print("the time of this epoch:[{} s]".format(DURINGS))
#writer = SummaryWriter()
#writer.export_scalars_to_json("./all_scalars.json")
#writer.close()
#---------------------------- Test ------------------------------
test(epoch)
# -
# # Step 3: Test
test(epoch)
Main_scripts_04.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # General Information
#
# This notebook demonstrates how the `fastai_sparse` library can be used in semantic segmentation tasks using the example of the [ShapeNet Core55](https://shapenet.cs.stanford.edu/iccv17/) 3D semantic segmentation solution presented in [SparseConvNet example](https://github.com/facebookresearch/SparseConvNet/tree/master/examples/3d_segmentation).
#
#
# <img src="https://camo.githubusercontent.com/a94ad53ba6adc857323bd9ba3050805fa16d8aab/687474703a2f2f6d73617676612e6769746875622e696f2f66696c65732f73686170656e65742e706e67" width="480" />
#
# Initial data is a subset of ShapeNetCore containing about 17,000 models from 16 shape categories. Each category is annotated with 2 to 6 parts, and there are 50 different parts annotated in total. 3D shapes are represented as point clouds uniformly sampled from 3D surfaces.
#
# Evaluation metric: weighted IoU (see [https://arxiv.org/pdf/1711.10275.pdf](https://arxiv.org/pdf/1711.10275.pdf))
#
# First, the initial data must be downloaded and prepared. See [examples/shapenet_iccv17](examples/shapenet_iccv17)
#
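# The per-shape IoU that underlies the weighted metric can be sketched as follows. This is my own minimal version for illustration, not the evaluation code used here: for each part label present in the prediction or the ground truth, IoU is |intersection| / |union| over the points, and the shape score is the mean over its parts (the benchmark then weights shapes per category).

```python
import numpy as np

def shape_iou(pred, gt):
    # all part labels that occur in either labeling
    parts = np.union1d(np.unique(pred), np.unique(gt))
    ious = []
    for p in parts:
        inter = np.sum((pred == p) & (gt == p))
        union = np.sum((pred == p) | (gt == p))
        ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([0, 0, 1, 1, 1])
gt   = np.array([0, 1, 1, 1, 1])
print(shape_iou(pred, gt))  # mean of 1/2 (part 0) and 3/4 (part 1) = 0.625
```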
# # Imports
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
# +
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import sparseconvnet as scn
import time
import os, sys
import math
import numpy as np
import pandas as pd
import datetime
import glob
from IPython.display import display, HTML, FileLink
from os.path import join, exists, basename, splitext
from pathlib import Path
from matplotlib import pyplot as plt
from matplotlib import cm
#from tensorboardX import SummaryWriter
from joblib import cpu_count
from tqdm import tqdm
#from tqdm import tqdm_notebook as tqdm
#import fastai
# autoreload python modules on the fly when its source is changed
# %load_ext autoreload
# %autoreload 2
# +
from fastai_sparse import utils
from fastai_sparse.utils import log, log_dict, print_random_states
from fastai_sparse.datasets import find_files
from fastai_sparse.datasets import PointsDataset
#, SparseDataBunch
from fastai_sparse import visualize
from datasets import DataSourceConfig, reader_fn
import transform as T
#from data import merge_fn
# -
assert torch.cuda.is_available()
# # Experiment environment / system metrics
experiment_name = 'unet_24_detailed'
utils.watermark()
# ## Notebook options
utils.wide_notebook()
# This block switches off interactive visualisation and saves screenshots instead:
# For rendering, run this command in a terminal: `chromium-browser --remote-debugging-port=9222`
if True:
visualize.options.interactive = False
visualize.options.save_images = True
visualize.options.verbose = True
visualize.options.filename_pattern_image = Path('images', experiment_name, 'fig_{fig_number}')
# # Source
# ## Create DataFrames
# +
SOURCE_DIR = Path('data').expanduser()
#SOURCE_DIR = Path('/home/ssd/shapenet_data').expanduser()
assert SOURCE_DIR.exists()
DIR_TRAIN_VAL = SOURCE_DIR / 'train_val'
assert DIR_TRAIN_VAL.exists(), "Hint: run `download_and_split_data.sh` then `convert_to_numpy.ipynb`"
print(SOURCE_DIR)
print(DIR_TRAIN_VAL)
# +
categories = [
"02691156", "02773838", "02954340", "02958343", "03001627", "03261776",
"03467517", "03624134", "03636649", "03642806", "03790512", "03797390",
"03948459", "04099429", "04225987", "04379243"
]
classes = [
'Airplane', 'Bag', 'Cap', 'Car', 'Chair', 'Earphone', 'Guitar', 'Knife',
'Lamp', 'Laptop', 'Motorbike', 'Mug', 'Pistol', 'Rocket', 'Skateboard',
'Table'
]
num_classes_by_category = [4, 2, 2, 4, 4, 3, 3, 2, 4, 2, 6, 2, 3, 3, 3, 3]
assert len(categories) == len(classes)
print("Number of categories:", len(categories))
# -
df_train = find_files(path=SOURCE_DIR / 'npy' / 'train', ext='.points.npy', ext_labels='.labels.npy', categories=categories)
df_valid = find_files(path=SOURCE_DIR / 'npy' / 'valid', ext='.points.npy', ext_labels='.labels.npy', categories=categories)
print(len(df_train))
df_train.head()
print(len(df_valid))
df_valid.head()
# # DataSets config
# A PointsDataset can be created from a DataSourceConfig.
train_source_config = DataSourceConfig(
root_dir=SOURCE_DIR / 'npy' / 'train',
df=df_train,
batch_size=16,
num_workers=12,
num_classes=50,
num_classes_by_category=num_classes_by_category,
)
train_source_config
valid_source_config = DataSourceConfig(
root_dir=SOURCE_DIR / 'npy' / 'valid',
df=df_valid,
batch_size=16,
num_workers=12,
num_classes=50,
num_classes_by_category=num_classes_by_category,
init_numpy_random_seed=False,
)
valid_source_config
train_source_config.check_accordance(valid_source_config)
# ## Datasets
train_items = PointsDataset.from_source_config(train_source_config, reader_fn=reader_fn)
valid_items = PointsDataset.from_source_config(valid_source_config, reader_fn=reader_fn)
train_items.check()
valid_items.check()
#train_items.check_num_classes(max_num_examples=100)
train_items
o = train_items.get(5)
o
# Let's see what we've done with one example.
o.describe()
o.show()
# # Transforms
# ## Define transforms
#
# In order to reproduce the [example of SparseConvNet](https://github.com/facebookresearch/SparseConvNet/tree/master/examples/3d_segmentation), the same transformations have been redone, but in the manner of fast.ai transformations.
#
# The following cells define the transformations: preprocessing (PRE_TFMS), augmentation (AUGS_TRAIN / AUGS_VALID), and the conversion of the point cloud to a sparse representation (SPARSE_TFMS). The sparse representation is the input format for the SparseConvNet model and contains a list of voxels and their features.
# +
resolution = 24
PRE_TFMS = [
T.fit_to_sphere(center=False),
T.shift_labels(offset=-1)
]
AUGS_TRAIN = [
T.rotate(),
T.flip_x(p=0.5),
]
AUGS_VALID = [
T.rotate(),
T.flip_x(p=0.5),
]
SPARSE_TFMS = [
T.translate(offset=2), # segment [-1, 1] ---> segment [1, 3]
T.scale(scale=resolution),
T.merge_features(ones=True),
T.to_sparse_voxels(),
]
# -
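# A rough sketch of what the sparse conversion above amounts to (my assumption for illustration, not the actual `to_sparse_voxels` implementation): after `translate(offset=2)` and `scale(scale=resolution)` the points lie in a positive range, flooring gives integer voxel coordinates, and duplicate voxels are merged so each occupied voxel appears once with a constant feature of 1 (as `merge_features(ones=True)` suggests).

```python
import numpy as np

points = np.array([[-0.5,  0.0, 0.9],
                   [-0.5,  0.0, 0.9],   # duplicate point -> same voxel
                   [ 0.3, -0.2, 0.1]])
# translate by +2 and scale by 24, then quantize to integer voxel coordinates
coords = np.floor((points + 2) * 24).astype(np.int64)
voxels = np.unique(coords, axis=0)       # deduplicate occupied voxels
features = np.ones((len(voxels), 1))     # one constant feature per voxel
print(voxels.shape, features.shape)  # (2, 3) (2, 1)
```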
# Let's see what the training transformations produce for one example.
#
# Initial representation:
o = train_items[-1]
o.describe()
o.show()
# transformed:
np.random.seed(42)
b = o.copy().apply_tfms(PRE_TFMS + AUGS_TRAIN + SPARSE_TFMS)
b.describe()
b.show()
# ## Apply transforms to datasets
# +
tfms = (
PRE_TFMS + AUGS_TRAIN + SPARSE_TFMS,
PRE_TFMS + AUGS_VALID + SPARSE_TFMS,
)
train_items.transform(tfms[0])
pass
valid_items.transform(tfms[1])
pass
# -
# # DataBunch
# In fast.ai, the data is represented by a DataBunch, which contains the train, valid, and optionally test data loaders.
from fastai_sparse.data import SparseDataBunch
from data import merge_fn
# +
data = SparseDataBunch.create(train_ds=train_items,
valid_ds=valid_items,
collate_fn=merge_fn,)
data.describe()
# -
# ## Dataloader idle run speed measurement
# +
from fastai_sparse.core import num_cpus
print("num_cpus:", num_cpus())
# !lscpu | grep "Model"
print()
#data.describe()
# -
# train
t = tqdm(enumerate(data.train_dl), total=len(data.train_dl))
for i, batch in t:
pass
# valid
t = tqdm(enumerate(data.valid_dl), total=len(data.valid_dl))
for i, batch in t:
pass
# # Model
# U-Net SparseConvNet implementation ([link](https://github.com/facebookresearch/SparseConvNet/blob/master/examples/3d_segmentation/unet.py)):
# +
from fastai_sparse.learner import SparseModelConfig
model_config = SparseModelConfig(spatial_size=24 * 8, num_input_features=1)
model_config.check_accordance(data.train_ds.source_config, sparse_item=data.train_ds[0])
model_config
# +
class Model(nn.Module):
def __init__(self, cfg):
nn.Module.__init__(self)
spatial_size = torch.LongTensor([cfg.spatial_size]*3)
self.sparseModel = scn.Sequential(
scn.InputLayer(cfg.dimension, spatial_size, mode=cfg.mode),
scn.SubmanifoldConvolution(cfg.dimension, nIn=cfg.num_input_features, nOut=cfg.m, filter_size=3, bias=cfg.bias),
scn.UNet(cfg.dimension, cfg.block_reps, cfg.num_planes, residual_blocks=cfg.residual_blocks, downsample=cfg.downsample),
scn.BatchNormReLU(cfg.m),
scn.OutputLayer(cfg.dimension),
)
self.linear = nn.Linear(cfg.m, cfg.num_classes)
def forward(self, xb):
coords = xb['coords']
features = xb['features']
x = [coords, features]
x = self.sparseModel(x)
x = self.linear(x)
return x
model = Model(model_config)
# -
model
utils.print_trainable_parameters(model)
# ## Learner creation
# Learner is the core fast.ai class: it holds the model architecture, the DataBunch and the optimizer options, and implements the training loop and prediction.
# +
from torch import optim
from functools import partial
from fastai.callbacks.general_sched import TrainingPhase, GeneralScheduler
#from fastai.callbacks.csv_logger import CSVLogger
#from fastai.callbacks.tracker import SaveModelCallback
from fastai.callback import annealing_exp
from fastai_sparse.learner import Learner
from fastai_sparse.callbacks import TimeLogger, SaveModelCallback, CSVLogger, CSVLoggerIouByCategory
from metrics import IouByCategories
# +
learn = Learner(data, model,
opt_func=partial(optim.SGD, momentum=0.9),
wd=1e-4,
true_wd=False,
path=str(Path('results', experiment_name)))
# -
# ## Learning Rate finder
# We use the Learning Rate Finder provided by the fast.ai library to find a good learning rate.
# %%time
learn.lr_find(start_lr=1e-5, end_lr=100)
learn.recorder.plot()
# # Train
# To visualize the learning process, we specify some additional callbacks.
# +
learn.callbacks = []
cb_iou = IouByCategories(learn, len(categories))
learn.callbacks.append(cb_iou)
learn.callbacks.append(TimeLogger(learn))
learn.callbacks.append(CSVLogger(learn))
learn.callbacks.append(CSVLoggerIouByCategory(learn, cb_iou, categories_names=classes))
learn.callbacks.append(SaveModelCallback(learn, every='epoch', name='weights', overwrite=True))
# -
learn.fit_annealing_exp(3, lr=0.1, lr_decay=4e-2, momentum=0.9)
# +
# learn.fit_annealing_exp(100, lr=0.1, lr_decay=4e-2, momentum=0.9)
# -
learn.find_callback('CSVLoggerIouByCategory').read_logged_file().tail()
learn.find_callback('CSVLogger').read_logged_file().tail()
learn.recorder.plot_losses()
learn.recorder.plot_lr()
learn.recorder.plot_metrics()
learn.recorder.plot()
examples/shapenet_iccv17/unet_24_detailed.ipynb