# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div class="alert alert-success">
# <b>Author</b>:
#
# <NAME>
# <EMAIL>
#
# </div>
#
# # [Click here to see class lecture](https://photos.app.goo.gl/pLHza84vT2HViZJz9)
# # Today's agenda
#
# 
#
# 
#
# ## Agile Model [Read More](https://www.tutorialspoint.com/sdlc/sdlc_agile_model.htm)
#
# The Agile model is built around customer satisfaction. It is a combination of iterative and incremental processes. It is iterative because the project improves its output with each iteration, and every iteration contains a full SDLC life cycle. It is also incremental because the project starts at a small scale, implementing only the core features, and grows and improves with each iteration as more features are added. It supports rapid change in requirements: if a customer wants to change something, the change can be made in the current iteration or in the next one, which is its main benefit compared to other models. Each iteration is given a fixed amount of time, the same for every iteration. The Agile model can be applied to making mobile phones, for example, where each phone model would have its own Agile process. After the final iteration the product is deployed, and if the customer wants a test deployment after each iteration, that can also be done, since the Agile model's goal is to satisfy the customer. The report produced after each iteration is called a release. The Agile model requires an experienced team.
#
# 
#
# 
#
# 
#
# 
#
# 
#
# 
#
# We'll only study eXtreme Programming (XP) as a concrete example of an Agile method. **Very Important**: these 5 methods can appear as MCQs, and XP can appear in the written part.
#
# 
# ## eXtreme programming (XP)
#
# It's a concept, not a programming language; it's a development methodology. It is used when customers frequently change their requirements.
#
# 
#
# 
#
# 
#
# Even testing needs some planning. For example, to test a camera we first plan how we will test it: in portrait or landscape mode, or with automation that exercises all of the camera's functionality. Based on the plan, we design the tests and write code for unit tests. We then run that testing code in the test phase, where the product is actually tested. Suppose the code to test the camera flash reveals that the flash isn't working: the bug report is sent to the coding phase, the code is refactored, and the flash is tested again, so testing and refactoring happen in parallel. If the product passes testing it is accepted; if there's a bug, the bug report goes back to the coding phase, and if the product fails the tests outright, we can go back to planning on the assumption that the plan itself was flawed. Testing happens in every iteration.
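The plan → test → refactor loop described above can be sketched as a tiny unit test; the `Camera` class here is entirely hypothetical, invented only to illustrate the flow:

```python
# Hypothetical camera under test (illustration only, not a real API)
class Camera:
    def __init__(self):
        self.flash_on = False

    def toggle_flash(self):
        # If refactoring broke this line, the test below would catch it
        self.flash_on = not self.flash_on

def test_flash_toggles_on():
    cam = Camera()
    cam.toggle_flash()
    # A failure here is the "bug report" sent back to the coding phase
    assert cam.flash_on

test_flash_toggles_on()
print('flash test passed')
```

If the assertion fails, the bug goes back to coding; once it passes, the feature is accepted for this iteration.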
#
# 
#
# This is the concept of XP.
#
# Uses of the Agile model:
#
# 
#
# **Important**
#
# 
#
# Scenario: Team A uses the waterfall model and Team B uses the Agile model. Team A calls in the requirements team first and spends 1.5 months analyzing user requirements, while Team B calls in all the teams, starts the project at a small scale, and advances with each iteration, doing requirement analysis in every iteration. After three months, Team A is in the system design phase and Team B is in the middle of an iteration. Now the customer arrives and changes a requirement that wasn't mentioned previously. Let's look at each team's response.
#
# Team A won't be able to update the requirements: they spent 1.5 months just gathering and analyzing them, and the requirements team is now working on another project, since it was only booked for those 1.5 months. If the customer then asks for a demo or some code, Team A has nothing to present because it is still in the design phase. Team B, however, can update the requirements and also present a coding demo, because all teams are available throughout the project and each iteration produces something demonstrable.
#
# That's why the Agile model is becoming more popular day by day.
#
# 
#
# 
# ## Big bang model
#
# It's very flexible for developers. Developers don't need to maintain documentation, and they can build the project by whatever means they want, e.g. in Java or Python. This model can be used for school projects.
#
# 
# ## Prototyping Model
#
# 
#
# When a prototype is rejected, it enters the rapid throwaway prototyping phase: the current prototype is discarded, the final SRS (documentation) from the first loop is developed/updated, and the remaining phases of rapid throwaway prototyping are carried out. The result is then submitted for acceptance again; if accepted, it moves on to evolutionary prototyping, otherwise the process repeats.
#
# 
#
# We can use this model when the requirements aren't clear. For example, before construction of the Padma Bridge began, engineers from CSE, CE, and EEE came together to develop a prototype and simulate it, because it was a huge project where errors would not be tolerated. The engineers didn't know the full requirements, such as how the soil would react to the pillars and what should be done about it, so they used prototyping to discover those requirements.
#
#
# 
#
# 
# ## Very Very Important
#
# 
# ## System Evaluation
#
# 
# ## RUP
#
# 
#
# 
#
# 
# ## CASE
#
# 
#
# 
#
# This figure's explanation may come in the mid/final exam. The y-axis lists tools and the x-axis lists phases. In the specification phase we don't need engineering, testing, debugging, or programming tools, because we are only gathering the system's specifications and requirements. We do need method support, prototyping, and documentation tools, since these are used in requirement analysis, e.g. deciding which method to use to develop the system or which prototyping tools to apply.
#
# 
#
# 
#
# 
# # That's all for this lecture!
| CSE_321_Software Engineering/Lecture_7_20.07.2020.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Minimal LineageOT demo
#
# This notebook shows a minimal working example of the LineageOT pipeline.
import anndata
import lineageot
import numpy as np
rng = np.random.default_rng()
# The anndata object requires three kinds of information:
#
# - Cell state measurements (in adata.X or adata.obsm)
# - Cell sampling times, relative to the root of the lineage tree, i.e. fertilization (in adata.obs)
# - Cell lineage barcodes (in adata.obsm)
#
# The barcodes should be encoded as row vectors where each entry corresponds to a possibly-mutated site. A positive number indicates an observed mutation, zero indicates no mutation, and -1 indicates the site was not observed.
#
# For example, if row `i` of `adata.obsm['barcodes']` is
# ```
# [0, 0, 13, -1]
# ```
# that means that, out of four possible sites for mutations, cell `i` was observed to have no mutations in the first two sites and mutation 13 in the third site. The last site was not observed for cell `i`.
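As a sketch, the example barcode above could sit alongside two more cells in a NumPy array; the values are made up purely for illustration:

```python
import numpy as np

# One row per cell, one column per mutation site:
#   0 = no mutation, -1 = site not observed, positive = observed mutation id
barcodes = np.array([
    [ 0,  0, 13, -1],   # cell 0: mutation 13 at site 2; site 3 unobserved
    [ 7,  0,  0,  0],   # cell 1: mutation 7 at site 0
    [-1, -1,  2,  2],   # cell 2: only sites 2 and 3 observed
])
print(barcodes.shape)  # (3, 4): three cells, four sites
```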
# +
# Creating a minimal fake AnnData object to run LineageOT on
t1 = 5
t2 = 10
n_cells_1 = 5
n_cells_2 = 10
n_cells = n_cells_1 + n_cells_2
n_genes = 5
barcode_length = 10
adata = anndata.AnnData(X = np.random.rand(n_cells, n_genes),
obs = {"time" : np.concatenate([t1*np.ones(n_cells_1), t2*np.ones(n_cells_2)])},
obsm = {"barcodes" : rng.integers(low = -1, high = 10, size = (n_cells, barcode_length))}
)
# +
# Running LineageOT
lineage_tree_t2 = lineageot.fit_tree(adata[adata.obs['time'] == t2], t2)
coupling = lineageot.fit_lineage_coupling(adata, t1, t2, lineage_tree_t2)
# -
# Saving the fitted coupling in the format Waddington-OT expects
lineageot.save_coupling_as_tmap(coupling, t1, t2, './tmaps/example')
| examples/pipeline_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Combine data
#
# ### criteria
#
# - has data since 2012-03-01
import pandas as pd
import numpy as np
import os
from pathlib import Path
src_path='data/raw_price'
ticker_file = 'data/544tickers.csv'
benchmark_tickers = {1:'^GSPC', 2:'^DJI', 3:'^IXIC', 4:'^RUT',
5:'CL=F', 6:'GC=F', 7:'^TNX'}
benchmark_tickers
min_date = '2012-03-01'
df_ticker = pd.read_csv(ticker_file)
df_ticker.head()
df_list = []
for _, row in df_ticker.iterrows():
    ticker = row['Symbol']
    stock_id = row['stock_id']
    raw_file = Path(src_path).joinpath(f'{ticker}.csv')
    if raw_file.exists():
        df_ = pd.read_csv(raw_file, index_col=0, sep='|')
        if df_.index.min() < min_date:
            df_['ticker'] = ticker
            df_['stock_id'] = stock_id
            df_list.append(df_[df_.index >= min_date])
        else:
            print(f'not enough data {ticker}: {df_.index.min()}')
            continue
len(df_list)
for stock_id, ticker in benchmark_tickers.items():
    raw_file = Path(src_path).joinpath(f'{ticker}.csv')
    if raw_file.exists():
        df_ = pd.read_csv(raw_file, index_col=0, sep='|')
        if df_.index.min() < min_date:
            df_['ticker'] = ticker
            df_['stock_id'] = stock_id
            df_list.append(df_[df_.index >= min_date])
        else:
            print(f'not enough data {ticker}: {df_.index.min()}')
            continue
len(df_list)
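The stock loop and the benchmark loop above share the same load-and-filter logic; as a sketch (assuming the same pipe-separated CSV layout under `src_path`), both could call one helper:

```python
import pandas as pd
from pathlib import Path

def load_since(src_path, ticker, stock_id, min_date):
    """Load one ticker's pipe-separated CSV and keep rows from min_date on.

    Returns None when the file is missing or its history starts too late.
    """
    raw_file = Path(src_path).joinpath(f'{ticker}.csv')
    if not raw_file.exists():
        return None
    df_ = pd.read_csv(raw_file, index_col=0, sep='|')
    if df_.index.min() >= min_date:
        print(f'not enough data {ticker}: {df_.index.min()}')
        return None
    df_['ticker'] = ticker
    df_['stock_id'] = stock_id
    return df_[df_.index >= min_date]
```

Each loop body then reduces to one call plus an `if result is not None: df_list.append(result)`.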
df_all = pd.concat(df_list)
df_all.shape
df_all.index.min(), df_all.index.max()
dest_file = f"data/min_date_{min_date.replace('-', '')}_{len(df_list)-7}stocks.csv"
dest_file
df_all.to_csv(dest_file, index=True, sep='|', compression='bz2')
| pharma/2_combine_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="fdYPZliXZ_Mv" colab_type="code" colab={}
import tensorflow as tf
import numpy as np
# + id="4RTpjPj9aEBi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="25a8f821-5bab-4d53-cc25-2cdc977d0e8f"
print(tf.__version__)
# + id="m6CCectM4aWi" colab_type="code" colab={}
from tensorflow.keras.applications.inception_v3 import InceptionV3
# + id="7sboTYFs4aaV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="51d38add-0415-4f7e-f2cc-27453aee5efa"
model_inception = InceptionV3(weights='imagenet')
# + id="1x-sDrvF4hcq" colab_type="code" colab={}
model_inception.save('inception_imagenet_weights.h5')
# + id="tTNNzJXKaFgz" colab_type="code" colab={}
from tensorflow.keras.applications.vgg19 import VGG19
# + id="KqCmC27YaYpg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="1a33f174-00b0-4e32-8af4-bc28c5974619"
model = VGG19(weights='imagenet')
# + id="s8ynyniLadg-" colab_type="code" colab={}
model.save('vgg19_imagenet_weights.h5')
# + id="D1naBbbJakXT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="c377dae8-81d3-4df6-b267-994a042ff184"
from google.colab import drive
drive.mount('/content/gdrive/')
# + id="0eFG4wV_azE1" colab_type="code" colab={}
# !mv "/content/inception_imagenet_weights.h5" "/content/gdrive/My Drive/Artificial Intelligence/"
# + id="k9q_5YWEbLeg" colab_type="code" colab={}
# Finally download the weights file from google drive and use it in your project
| imagenet_weights.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from pptx import Presentation
from pptx.util import Inches
from pptx import Presentation
from pptx.chart.data import ChartData
from pptx.enum.chart import XL_CHART_TYPE
from pptx.util import Cm, Pt  # Inches is the alternative unit; Pt is used for font sizes below
from pptx.enum.chart import XL_LEGEND_POSITION
from pptx.dml.color import RGBColor
import pandas as pd
# + jupyter={"source_hidden": true}
if __name__ == '__main__':
# open ppt with cover
prs = Presentation('cover.pptx')
title_only_slide_layout = prs.slide_layouts[0]
slide = prs.slides.add_slide(title_only_slide_layout)
shapes = slide.shapes
# define the table data
name_objects = ["PR 69.SP", "PR 69.OP", "MS% PR69.SP", "MS% PR69.OP"]
name_AIs = ["PR 69.SP", "PR 69.OP", "MS% PR69.SP", "MS% PR69.OP"]
val_AI1 = (4620952.33,4855459.236,5201840.644,5414324.468)
val_AI2 = (4451751, 4687641,4859008,4809117)
val_AI3 = (0.204, 0.211, 0.21, 0.205)
val_AI4 = (0.205, 0.213, 0.216, 0.206)
val_AIs = [val_AI1, val_AI2, val_AI3, val_AI4]
# define the table style
rows = 5
cols = 5
top = Cm(12.5)
left = Cm(3.5) #Inches(2.0)
width = Cm(24) # Inches(6.0)
height = Cm(6) # Inches(0.8)
# Add the table to the slide --------------------
table = shapes.add_table(rows, cols, left, top, width, height).table
# Set column widths
table.columns[0].width = Cm(6)# Inches(2.0)
table.columns[1].width = Cm(6)
table.columns[2].width = Cm(6)
table.columns[3].width = Cm(6)
# Set the header row
table.cell(0, 1).text = name_objects[0]
table.cell(0, 2).text = name_objects[1]
table.cell(0, 3).text = name_objects[2]
table.cell(0, 4).text = name_objects[3]
# Fill in the data
table.cell(1, 0).text = name_AIs[0]
table.cell(1, 1).text = str(val_AI1[0])
table.cell(1, 2).text = str(val_AI1[1])
table.cell(1, 3).text = str(val_AI1[2])
table.cell(1, 4).text = str(val_AI1[3])
table.cell(2, 0).text = name_AIs[1]
table.cell(2, 1).text = str(val_AI2[0])
table.cell(2, 2).text = str(val_AI2[1])
table.cell(2, 3).text = str(val_AI2[2])
table.cell(2, 4).text = str(val_AI2[3])
table.cell(3, 0).text = name_AIs[2]
table.cell(3, 1).text = str(val_AI3[0])
table.cell(3, 2).text = str(val_AI3[1])
table.cell(3, 3).text = str(val_AI3[2])
table.cell(3, 4).text = str(val_AI3[3])
# Define the chart data ---------------------
chart_data = ChartData()
chart_data.categories = name_objects
chart_data.add_series(name_AIs[0], val_AI1)
chart_data.add_series(name_AIs[1], val_AI2)
chart_data.add_series(name_AIs[2], val_AI3)
chart_data.add_series(name_AIs[3], val_AI4)
# Add the chart to the slide --------------------
x, y, cx, cy = Cm(3.5), Cm(4.2), Cm(24), Cm(8)
graphic_frame = slide.shapes.add_chart(
XL_CHART_TYPE.COLUMN_CLUSTERED, x, y, cx, cy, chart_data
)
chart = graphic_frame.chart
chart.has_legend = True
chart.legend.position = XL_LEGEND_POSITION.TOP
chart.legend.include_in_layout = False
value_axis = chart.value_axis
value_axis.maximum_scale = 100.0
value_axis.has_title = True
value_axis.axis_title.has_text_frame = True
value_axis.axis_title.text_frame.text = "False positive"
value_axis.axis_title.text_frame.auto_size
prs.save('template_tmp.pptx')
# +
# Open the pptx file
prs=Presentation(r'template.pptx')
# # First chart: define the chart data ---------------------
data = pd.read_csv('p1.csv')
csv=data.where(data.notnull(), None)
name_AIs = ["PR 69.SP", "PR 69.OP", "MS% PR69.SP", "MS% PR69.OP"]
name_objects = csv.columns[1:].tolist() #["2020", "2021", "2022", "2023"]
val_AI1 = [tuple(x) for x in csv.iloc[0:1,1:].values][0] #(4620952.33,4855459.236,5201840.644,5414324.468)
val_AI2 = [tuple(x) for x in csv.iloc[1:2,1:].values][0] #(4451751, 4687641,4859008,4809117)
val_AI3 = [tuple(x) for x in csv.iloc[2:3,1:].values][0] #(0.204, 0.211, 0.21, 0.205)
val_AI4 = [tuple(x) for x in csv.iloc[3:4,1:].values][0] #(0.205, 0.213, 0.216, 0.206)
val_AIs = [val_AI1, val_AI2, val_AI3, val_AI4]
# Define the chart data ---------------------
chart_data = ChartData()
chart_data.categories = name_objects
chart_data.add_series(name_AIs[0], val_AI1)
chart_data.add_series(name_AIs[1], val_AI2)
chart_data.add_series(name_AIs[2], val_AI3)
chart_data.add_series(name_AIs[3], val_AI4)
slide = prs.slides[1]
for shape in slide.shapes:
if shape.has_chart:
c_chart = shape.chart
c_chart.replace_data(chart_data)
# Second chart
# Define the chart data ---------------------
data = pd.read_csv('p2.csv')
p2_csv=data.where(data.notnull(), None)
p2_name_AIs = ["PR1", "PR2"]
p2_name_objects = p2_csv.columns[1:].tolist()
p2_val_AI1 = [tuple(x) for x in p2_csv.iloc[0:1,1:].values][0]
p2_val_AI2 = [tuple(x) for x in p2_csv.iloc[1:2,1:].values][0]
p2_chart_data = ChartData()
p2_chart_data.categories = p2_name_objects
p2_chart_data.add_series(p2_name_AIs[0], p2_val_AI1)
p2_chart_data.add_series(p2_name_AIs[1], p2_val_AI2)
p2_slide = prs.slides[2]
# Define the table data ---------------------
data = pd.read_csv('p2_t.csv',header=None)
p2t_csv=data.where(data.notnull(), None)
for shape in p2_slide.shapes:
if shape.has_chart:
p2_c_chart = shape.chart
p2_c_chart.replace_data(p2_chart_data)
if shape.has_table:
p2_t_table = shape.table
for index_r, row in p2t_csv.iterrows():
i_col = 0
for col in row:
# p2_t_table.cell(index_r, i_col).text = cell
cell = p2_t_table.cell(index_r, i_col)
tf = cell.text_frame
# cell.text_frame.paragraphs[0].text= col
p = tf.paragraphs[0]
p.font.size = Pt(10)
p.font.color.rgb = RGBColor(0xFF, 0x00, 0x00)
p.text = col
i_col = i_col + 1
prs.save('result.pptx')
# -
import pandas as pd
data = pd.read_csv('p2.csv')
data = data.where(data.notnull(), None)
print(data)
| documents/demo code/.ipynb_checkpoints/ppt-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + tags=["remove-input"]
import numpy as np
np.set_printoptions(threshold=50)
path_data = '../../assets/data/'
# -
# # Tables
#
# Tables are a fundamental object type for representing data sets. A table can be viewed in two ways:
# * a sequence of named columns that each describe a single aspect of all entries in a data set, or
# * a sequence of rows that each contain all information about a single entry in a data set.
#
# In order to use tables, import all of the module called `datascience`, a module created for this text.
from datascience import *
# Empty tables can be created using the `Table` function. An empty table is useful because it can be extended to contain new rows and columns.
Table()
# The `with_columns` method on a table constructs a new table with additional labeled columns. Each column of a table is an array. To add one new column to a table, call `with_columns` with a label and an array. (The `with_column` method can be used with the same effect.)
#
# Below, we begin each example with an empty table that has no columns.
Table().with_columns('Number of petals', make_array(8, 34, 5))
# To add two (or more) new columns, provide the label and array for each column. All columns must have the same length, or an error will occur.
Table().with_columns(
'Number of petals', make_array(8, 34, 5),
'Name', make_array('lotus', 'sunflower', 'rose')
)
# We can give this table a name, and then extend the table with another column.
# +
flowers = Table().with_columns(
'Number of petals', make_array(8, 34, 5),
'Name', make_array('lotus', 'sunflower', 'rose')
)
flowers.with_columns(
'Color', make_array('pink', 'yellow', 'red')
)
# -
# The `with_columns` method creates a new table each time it is called, so the original table is not affected. For example, the table `flowers` still has only the two columns that it had when it was created.
flowers
# Creating tables in this way involves a lot of typing. If the data have already been entered somewhere, it is usually possible to use Python to read it into a table, instead of typing it all in cell by cell.
#
# Often, tables are created from files that contain comma-separated values. Such files are called CSV files.
#
# Below, we use the Table method `read_table` to read a CSV file that contains some of the data used by Minard in his graphic about Napoleon's Russian campaign. The data are placed in a table named `minard`.
minard = Table.read_table(path_data + 'minard.csv')
minard
# We will use this small table to demonstrate some useful Table methods. We will then use those same methods, and develop other methods, on much larger tables of data.
# <h2>The Size of the Table</h2>
#
# The method `num_columns` gives the number of columns in the table, and `num_rows` the number of rows.
minard.num_columns
minard.num_rows
# <h2>Column Labels</h2>
#
# The method `labels` can be used to list the labels of all the columns. With `minard` we don't gain much by this, but it can be very useful for tables that are so large that not all columns are visible on the screen.
minard.labels
# We can change column labels using the `relabeled` method. This creates a new table and leaves `minard` unchanged.
minard.relabeled('City', 'City Name')
# However, this method does not change the original table.
minard
# A common pattern is to assign the original name `minard` to the new table, so that all future uses of `minard` will refer to the relabeled table.
minard = minard.relabeled('City', 'City Name')
minard
# <h2>Accessing the Data in a Column</h2>
#
# We can use a column's label to access the array of data in the column.
minard.column('Survivors')
# The 5 columns are indexed 0, 1, 2, 3, and 4. The column `Survivors` can also be accessed by using its column index.
minard.column(4)
# The 8 items in the array are indexed 0, 1, 2, and so on, up to 7. The items in the column can be accessed using `item`, as with any array.
minard.column(4).item(0)
minard.column(4).item(5)
# <h2>Working with the Data in a Column</h2>
#
# Because columns are arrays, we can use array operations on them to discover new information. For example, we can create a new column that contains the percent of all survivors at each city after Smolensk.
initial = minard.column('Survivors').item(0)
minard = minard.with_columns(
'Percent Surviving', minard.column('Survivors')/initial
)
minard
# To make the proportions in the new columns appear as percents, we can use the method `set_format` with the option `PercentFormatter`. The `set_format` method takes `Formatter` objects, which exist for dates (`DateFormatter`), currencies (`CurrencyFormatter`), numbers, and percentages.
minard.set_format('Percent Surviving', PercentFormatter)
# <h2>Choosing Sets of Columns</h2>
#
# The method `select` creates a new table that contains only the specified columns.
minard.select('Longitude', 'Latitude')
# The same selection can be made using column indices instead of labels.
minard.select(0, 1)
# The result of using `select` is a new table, even when you select just one column.
minard.select('Survivors')
# Notice that the result is a table, unlike the result of `column`, which is an array.
minard.column('Survivors')
# Another way to create a new table consisting of a set of columns is to `drop` the columns you don't want.
minard.drop('Longitude', 'Latitude', 'Direction')
# Neither `select` nor `drop` change the original table. Instead, they create new smaller tables that share the same data. The fact that the original table is preserved is useful! You can generate multiple different tables that only consider certain columns without worrying that one analysis will affect the other.
minard
# All of the methods that we have used above can be applied to any table.
| Mathematics/Statistics/Statistics and Probability Python Notebooks/Computational and Inferential Thinking - The Foundations of Data Science (book)/Notebooks - by chapter/6. Python Tables/6.0.0 Tables.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Functions
#
# - Functions let you define reusable code, and they organize and simplify it
# - In real development, a function usually implements one small piece of functionality
# - while a class implements one large piece
# - Likewise, a function should be no longer than one screen
# ## Defining a function
#
# def function_name(list of parameters):
#
# do something
# 
# - The random, range, and print we used before are actually functions or classes
# ## Calling a function
# - functionName()
# - the "()" is what performs the call
def panduan():
    i = eval(input('>>'))
    if i % 2 == 0:
        print('even')
    else:
        print('odd')
def fun1():
    num_ = eval(input('>>'))
    return num_
a = fun1()
print(a ** 3)
# 
# ## Functions with and without return values
# - return hands back a value
# - return can hand back multiple values
# - Generally, when several functions cooperate to complete one task, they will have return values
# 
#
# - Of course, you can also explicitly return None
# ## EP:
# 
# ## Parameter types and keyword arguments
# - Ordinary parameters
# - Multiple parameters
# - Default-value parameters
# - Variable-length parameters
# +
def hanshu(x):
    y = x**2
    print(y)
# -
hanshu(x=2)
def y(x):
    return x**2
y_ = y(100)
print(y_)
def san(num):
    return num**3
def liang(num):
    return num**2
def input_():
    num = eval(input('>>'))
    res3 = san(num)
    res2 = liang(num)
    print(res3 - res2)
# ## Ordinary parameters
input_()
# ## Multiple parameters
import os
def kuajiang(name1, name2, name3):
    os.system('say {} {} {} haha'.format(name1, name2, name3))
kuajiang(name1='张学友',name2='hh',name3='kk')
# ## Default-value parameters
account = '54745'
password = '<PASSWORD>'
is_ok_and_y = False
def login(account_login, password_login):
    if account_login == account and password_login == password:
        print('Login successful')
    else:
        print('Wrong account or password')
login(account_login='54745', password_login='<PASSWORD>')
def qidong():
    global is_ok_and_y
    if is_ok_and_y == False:
        print('Stay logged in for seven days? y/n')
        res = input('>>')
        account_login = input('Enter account: ')
        password_login = input('Enter password: ')
        if res == 'y':
            login(account_login, password_login)
            is_ok_and_y = True
        else:
            login(account_login, password_login)
    else:
        print('Login successful')
# ## Forced naming (keyword-only arguments)
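A minimal sketch of forced naming: parameters declared after a bare `*` must be passed by keyword (the `resize` function here is just an illustration):

```python
def resize(image, *, width, height):
    # width and height can only be passed as keyword arguments
    return (image, width, height)

resize('photo.png', width=640, height=480)   # OK
# resize('photo.png', 640, 480)              # TypeError: positional not allowed
```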
# ## Variable-length parameters
# - \*args
# > - Variable length: it collects however many arguments are passed, including none
# - The collected values arrive as a tuple
# - The name args can be changed; it is simply the convention
# - \**kwargs
# > - The collected values arrive as a dict
# - The inputs must be key-value expressions (keyword arguments)
# - In name, \*args, name2, \**kwargs, arguments after \*args must use parameter names
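The bullet points above can be seen directly in a small sketch:

```python
def show(name, *args, **kwargs):
    # *args collects any extra positional arguments into a tuple
    # **kwargs collects any extra keyword arguments into a dict
    return name, args, kwargs

print(show('a', 1, 2, 3, x=10, y=20))
# ('a', (1, 2, 3), {'x': 10, 'y': 20})
```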
# ## Variable scope
# - Local variables: local
# - Global variables: global
# - The globals() function returns a dict of global variables, including everything imported
# - The locals() function returns a dict of all local variables at the current position
# ## Note:
# - global: it must be declared before you assign to the variable
# - Official explanation: This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope.
# - 
# # Homework
# - 1
# 
def getPentagonalNumber(n):
    # print the first n pentagonal numbers, 10 per line
    count = 0
    for i in range(1, n + 1):
        num = i * (3 * i - 1) // 2
        count += 1
        print(num, end=' ')
        if count % 10 == 0:
            print()
getPentagonalNumber(100)
# - 2
# 
def sumDigits():
    n1 = 0
    n = eval(input('An integer >> '))
    while n > 0:
        n1 += n % 10
        n = n // 10
    print('Sum of all digits of this integer:', n1)
sumDigits()
# - 3
# 
def displaySortedNumbers(num1, num2, num3):
    if num1 > num2 and num1 > num3:
        print(str(num1), end=' ')
        if num2 > num3:
            print(str(num2) + ' ' + str(num3))
        elif num3 > num2:
            print(str(num3) + ' ' + str(num2))
    elif num2 > num1 and num2 > num3:
        print(str(num2), end=' ')
        if num1 > num3:
            print(str(num1) + ' ' + str(num3))
        elif num3 > num1:
            print(str(num3) + ' ' + str(num1))
    elif num3 > num1 and num3 > num2:
        print(str(num3), end=' ')
        if num2 > num1:
            print(str(num2) + ' ' + str(num1))
        elif num1 > num2:
            print(str(num1) + ' ' + str(num2))
# +
displaySortedNumbers(4, 9, 78)
# -
# - 4
# 
# - 5
# 
# - 6
# 
# - 7
# 
# - 8
# 
# - 9
# 
# 
# - 10
# 
# - 11
# ### Look up online how to send email with Python code
| 9.14.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:adventofcode]
# language: python
# name: conda-env-adventofcode-py
# ---
# # Checksum
#
# ## Input
# +
import pandas as pd
f_input = 'input.txt'
table = pd.read_csv(f_input, sep='\t', header=None)
# -
# ## Solution: part 1
sum(table.apply(max, axis=1) - table.apply(min, axis=1))
# ## Solution: part 2
def evenly_divide(u):
    """
    Finds the quotient of the only pair of entries which evenly divide.
    u: array-like
    """
    sorted_u = sorted(u)
    for i, n in enumerate(sorted_u):
        for m in sorted_u[i+1:]:
            if m % n == 0:
                return m // n
sum(table.apply(evenly_divide, axis=1))
| 2017/ferran/day02/day02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbsphinx="hidden"
# This notebook is part of the `nbsphinx` documentation: http://nbsphinx.readthedocs.io/.
# -
# # Explicitly Dis-/Enabling Notebook Execution
#
# If you want to include a notebook without outputs and yet don't want `nbsphinx` to execute it for you, you can explicitly disable this feature.
#
# You can do this globally by setting the following option in [conf.py](conf.py):
#
# ```python
# nbsphinx_execute = 'never'
# ```
#
# Or on a per-notebook basis by adding this to the notebook's JSON metadata:
#
# ```json
# "nbsphinx": {
# "execute": "never"
# },
# ```
#
# There are three possible settings, `"always"`, `"auto"` and `"never"`.
# By default (= `"auto"`), notebooks with no outputs are executed and notebooks with at least one output are not.
# As always, per-notebook settings take precedence over the settings in `conf.py`.
#
# This very notebook has its metadata set to `"never"`, therefore the following cell is not executed:
6 * 7
| doc/never-execute.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="_Jlz8sR53AgC" executionInfo={"status": "ok", "timestamp": 1623226881467, "user_tz": -540, "elapsed": 16905, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="32bfd1c6-3b73-4d4b-d042-12883298a38e"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="zAhivo6h3Dxp" executionInfo={"status": "ok", "timestamp": 1623226886992, "user_tz": -540, "elapsed": 290, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="ff11be47-b6c7-4883-c8b1-6f28e9db81fe"
# cd /content/drive/MyDrive/dataset
# + id="ZhjzEKl73NG_" executionInfo={"status": "ok", "timestamp": 1623226906749, "user_tz": -540, "elapsed": 893, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}}
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# + id="Uy8wzj0w3Ki5" executionInfo={"status": "ok", "timestamp": 1623226907140, "user_tz": -540, "elapsed": 398, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}}
df = pd.read_csv('./heart_failure_clinical_records_dataset.csv')
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="5MajofMG3PTK" executionInfo={"status": "ok", "timestamp": 1623226910108, "user_tz": -540, "elapsed": 262, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="509d774f-938c-4509-b56f-e56f9e4694f5"
df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="8sa2IteS3QGy" executionInfo={"status": "ok", "timestamp": 1623226912482, "user_tz": -540, "elapsed": 3, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="d8f06edd-e7e9-45d0-bb91-b0b1d29b2f12"
df.info()
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="7hoqR8Na3Qum" executionInfo={"status": "ok", "timestamp": 1623226973533, "user_tz": -540, "elapsed": 266, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="4d62e1c1-34cf-46a0-b16c-c313913f2139"
df.describe()
# + colab={"base_uri": "https://localhost:8080/"} id="AOCdDlix3foK" executionInfo={"status": "ok", "timestamp": 1623226990146, "user_tz": -540, "elapsed": 262, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="0f266dcf-48cb-4d38-b752-93d2900d37a3"
df.isna().sum()  # there are no missing values
# + colab={"base_uri": "https://localhost:8080/"} id="tDDBwSVR3jvR" executionInfo={"status": "ok", "timestamp": 1623227013002, "user_tz": -540, "elapsed": 256, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="d1915556-1f1d-4ee5-c679-584b118d9fcd"
df.isnull().sum() # isna() == isnull()
# + colab={"base_uri": "https://localhost:8080/", "height": 625} id="EoTJY0M03pUa" executionInfo={"status": "ok", "timestamp": 1623227162113, "user_tz": -540, "elapsed": 1683, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="2c1ba7b7-0ef4-4459-da8c-f3775a8b6912"
# heatmap of feature correlations
plt.figure(figsize=(10,8))
sns.heatmap(df.corr(), annot=True)
# + id="qYutWkWc38dG" executionInfo={"status": "ok", "timestamp": 1623227350261, "user_tz": -540, "elapsed": 279, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}}
# The picture will change once outliers are removed, so check again after preprocessing.
# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="t14JILA147pj" executionInfo={"status": "ok", "timestamp": 1623227417840, "user_tz": -540, "elapsed": 562, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="827ce126-5998-4918-a84e-008538f532e4"
sns.histplot(x='age', data=df, hue='DEATH_EVENT', kde=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 343} id="yalF_Fmp5Hic" executionInfo={"status": "ok", "timestamp": 1623227418139, "user_tz": -540, "elapsed": 306, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="caeae8c4-9a2f-4ac5-e264-ef44b08f9f6c"
sns.distplot(x=df['age']) # distribution only (easy to read the mean); outside a classification task this view can yield more insight. Note: distplot is deprecated in newer seaborn in favor of histplot/displot.
# + colab={"base_uri": "https://localhost:8080/"} id="LK8chUiy5dTv" executionInfo={"status": "ok", "timestamp": 1623227491840, "user_tz": -540, "elapsed": 266, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="d0d9e644-262f-46d5-906d-07d30279e86c"
df.columns
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="FLhIDjZ65MIN" executionInfo={"status": "ok", "timestamp": 1623227504582, "user_tz": -540, "elapsed": 412, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="286189ee-341e-483b-94d6-e323b5876a29"
sns.kdeplot(
    data=df['creatinine_phosphokinase'],
    shade=True
)
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="luGcOHhK5f3l" executionInfo={"status": "ok", "timestamp": 1623227560405, "user_tz": -540, "elapsed": 705, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="4ea1c005-cd66-4baa-ae6e-79faf6efb046"
sns.kdeplot(
    data=df,
    x='creatinine_phosphokinase',
    shade=True,
    hue='DEATH_EVENT'
)
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="bNYPAHij5u2d" executionInfo={"status": "ok", "timestamp": 1623227682110, "user_tz": -540, "elapsed": 829, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="8b97df06-62d7-4023-84e2-13c52ec49513"
sns.kdeplot(
    data=df, x='creatinine_phosphokinase', hue='DEATH_EVENT',
    fill=True,
    palette='crest',
    linewidth=0,
    alpha=.5
)
# + colab={"base_uri": "https://localhost:8080/"} id="lcKZTgTw6B9R" executionInfo={"status": "ok", "timestamp": 1623227862061, "user_tz": -540, "elapsed": 272, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="c8439ff8-3705-4070-fa8e-1427b74835b0"
# A distribution can be heavily skewed to one side.
# Skewness
from scipy.stats import skew
print(skew(df['age']))
print(skew(df['serum_sodium'])) # bad (skewed)
print(skew(df['serum_creatinine'])) # bad (skewed)
print(skew(df['platelets'])) # bad (skewed)
print(skew(df['time']))
print(skew(df['creatinine_phosphokinase'])) # bad (skewed)
print(skew(df['ejection_fraction']))
# Rule of thumb for skewness:
# 0: good
# between -1 and 1: acceptable
# below -1 or above 1: bad (the distribution itself is distorted)
# + colab={"base_uri": "https://localhost:8080/"} id="PPM-zNUG626W" executionInfo={"status": "ok", "timestamp": 1623228039164, "user_tz": -540, "elapsed": 252, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="21417bda-dc00-47ad-91ac-422360dc564d"
# Right-skewed columns: serum_sodium, serum_creatinine, platelets, creatinine_phosphokinase
df['serum_creatinine'] = np.log(df['serum_creatinine'])
print(skew(df['serum_creatinine']))
# When a distribution is strongly skewed to the right, take the log (or square root)
# to pull it toward a normal shape; for left-skewed data, use an exponential transform instead.
# Sometimes 1 is added (or subtracted) first so the log never receives a 0.
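# The add-one trick above is built into `np.log1p` (log(1 + x)). The sketch below is
# illustrative, not part of this notebook's pipeline: `sample_skewness` is a small
# helper reproducing what `scipy.stats.skew` computes, applied to synthetic data.

```python
import numpy as np

def sample_skewness(x):
    # Fisher-Pearson skewness, the same quantity scipy.stats.skew reports by default
    x = np.asarray(x, dtype=float)
    m = x.mean()
    return np.mean((x - m) ** 3) / np.std(x) ** 3

rng = np.random.default_rng(0)
raw = rng.lognormal(mean=0.0, sigma=1.0, size=5000)  # strongly right-skewed
logged = np.log1p(raw)                               # log(1 + x): safe even when x == 0

print(sample_skewness(raw), sample_skewness(logged))
```

# The same transform applies column-wise, e.g. `df['platelets'] = np.log1p(df['platelets'])`,
# after which the skewness check above should be repeated.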
# + colab={"base_uri": "https://localhost:8080/", "height": 353} id="gmRkKYS37j0u" executionInfo={"status": "ok", "timestamp": 1623228341060, "user_tz": -540, "elapsed": 290, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="fd8232df-ff59-4a60-dc63-4171ec885105"
# For categorical columns, compare the size of each class.
sns.countplot(x='DEATH_EVENT', data=df)
# + colab={"base_uri": "https://localhost:8080/", "height": 401} id="lRhOh43X8tio" executionInfo={"status": "ok", "timestamp": 1623228389821, "user_tz": -540, "elapsed": 746, "user": {"displayName": "\uc774\ud6a8\uc8fc", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gis33vDTc8zSFzhouOl5TXYWcj3Dg7sLxY9Xo7A6A=s64", "userId": "07320265785617279809"}} outputId="9dd07142-5ed0-434f-8a23-0965b6570d4f"
sns.catplot(x='diabetes', y='age', hue='DEATH_EVENT', kind='box', data=df)
# + id="Bmvj0ge985T5"
| kaggle_practice_210609_02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exploring the effects of filtering on Radiomics features
# In this notebook, we will explore how different filters change the radiomics features.
# +
# Radiomics package
from radiomics import featureextractor
import six, numpy as np
# -
# ## Setting up data
#
# Here we use `SimpleITK` (referenced as `sitk`, see http://www.simpleitk.org/ for details) to load an image and the corresponding segmentation label map.
# +
import os
import SimpleITK as sitk
from radiomics import getTestCase
# repositoryRoot points to the root of the repository. The following line gets that location if this Notebook is run
# from its default location in \pyradiomics\examples\Notebooks
repositoryRoot = os.path.abspath(os.path.join(os.getcwd(), ".."))
imagepath, labelpath = getTestCase('brain1', repositoryRoot)
image = sitk.ReadImage(imagepath)
label = sitk.ReadImage(labelpath)
# -
# ## Show the images
#
# Using `matplotlib.pyplot` (referenced as `plt`), display the images in grayscale and labels in color.
# +
# Display the images
# %matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(20,20))
# First image
plt.subplot(1,2,1)
plt.imshow(sitk.GetArrayFromImage(image)[12,:,:], cmap="gray")
plt.title("Brain")
plt.subplot(1,2,2)
plt.imshow(sitk.GetArrayFromImage(label)[12,:,:])
plt.title("Segmentation")
plt.show()
# -
# ## Extract the features
#
# Using the `radiomics` package, first construct an `extractor` object from the parameters set in `Params.yaml`. We will then generate a baseline set of features. Comparing the features after running `SimpleITK` filters will show which features are less sensitive.
# +
import os
# Instantiate the extractor
params = os.path.join(os.getcwd(), '..', 'examples', 'exampleSettings', 'Params.yaml')
extractor = featureextractor.RadiomicsFeatureExtractor(params)
extractor.enableFeatureClassByName('shape', enabled=False) # disable shape as it is independent of gray value
# Construct a set of SimpleITK filter objects
filters = {
    "AdditiveGaussianNoise": sitk.AdditiveGaussianNoiseImageFilter(),
    "Bilateral": sitk.BilateralImageFilter(),
    "BinomialBlur": sitk.BinomialBlurImageFilter(),
    "BoxMean": sitk.BoxMeanImageFilter(),
    "BoxSigmaImageFilter": sitk.BoxSigmaImageFilter(),
    "CurvatureFlow": sitk.CurvatureFlowImageFilter(),
    "DiscreteGaussian": sitk.DiscreteGaussianImageFilter(),
    "LaplacianSharpening": sitk.LaplacianSharpeningImageFilter(),
    "Mean": sitk.MeanImageFilter(),
    "Median": sitk.MedianImageFilter(),
    "Normalize": sitk.NormalizeImageFilter(),
    "RecursiveGaussian": sitk.RecursiveGaussianImageFilter(),
    "ShotNoise": sitk.ShotNoiseImageFilter(),
    "SmoothingRecursiveGaussian": sitk.SmoothingRecursiveGaussianImageFilter(),
    "SpeckleNoise": sitk.SpeckleNoiseImageFilter(),
}
# +
# Filter
results = {}
results["baseline"] = extractor.execute(image, label)
for key, value in six.iteritems(filters):
    print("filtering with " + key)
    filtered_image = value.Execute(image)
    results[key] = extractor.execute(filtered_image, label)
# -
# ## Prepare for analysis
#
# Determine which features had the highest variance.
# Keep an index of filters and features
filter_index = list(sorted(filters.keys()))
feature_names = list(sorted(filter(lambda k: k.startswith("original_"), results[filter_index[0]])))
# ## Look at the features with highest and lowest coefficient of variation
#
# The [coefficient of variation](https://en.wikipedia.org/wiki/Coefficient_of_variation) gives a standardized measure of dispersion in a set of data. Here we look at the effect of filtering on the different features.
#
# **Spoiler alert** As might be expected, the grey level based features, e.g. `ClusterShade`, `LargeAreaEmphasis`, etc. are most affected by filtering, and shape metrics (based on label mask only) are the least affected.
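# Concretely, the coefficient of variation is the standard deviation divided by the
# mean. A tiny standalone illustration (matching what `scipy.stats.variation` computes
# below, with its default population standard deviation):

```python
import numpy as np

def coefficient_of_variation(a):
    # CV = std / mean (population std, ddof=0), scipy.stats.variation's default
    a = np.asarray(a, dtype=float)
    return a.std() / a.mean()

values = np.array([10.0, 12.0, 9.0, 11.0])
print(coefficient_of_variation(values))
```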
# +
# Pull in scipy to help find cv
import scipy.stats
features = {}
cv = {}
for key in feature_names:
    a = np.array([])
    for f in filter_index:
        a = np.append(a, results[f][key])
    features[key] = a
    cv[key] = scipy.stats.variation(a)
# a sorted view of cv
cv_sorted = sorted(cv, key=cv.get, reverse=True)
# Print the top 10
print("\n")
print("Top 10 features with largest coefficient of variation")
for i in range(0, 10):
    print("Feature: {:<50} CV: {}".format(cv_sorted[i], cv[cv_sorted[i]]))
print("\n")
print("Bottom 10 features with _smallest_ coefficient of variation")
for i in range(-10, 0):  # the ten smallest, including the very last entry
    print("Feature: {:<50} CV: {}".format(cv_sorted[i], cv[cv_sorted[i]]))
| pyradiomics/notebooks/3.0 Feature Analysis/FilteringEffects USEIT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/anasuya3/PYTHON/blob/main/Project_Euler_Problem.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="AOFyfxgFzH_y"
# ## Multiples of 3 or 5
# Problem 1
#
# If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
#
# Find the sum of all the multiples of 3 or 5 below 1000.
# + id="IMtA7eQRzVD3"
def isDivisible(n: int) -> bool:
    return n % 3 == 0 or n % 5 == 0

def sumOfMultiples(n: int) -> int:
    return sum([i for i in range(1, n) if isDivisible(i)])
# + colab={"base_uri": "https://localhost:8080/"} id="tgYxhR0I0v1O" outputId="a3d6407a-724d-4ea2-e03d-c67da5019884"
print(sumOfMultiples(10))
print(sumOfMultiples(1000))
# + colab={"base_uri": "https://localhost:8080/"} id="_Yn6OBY-bYTo" outputId="d7ebcbc8-4689-452c-b580-2e5e4b6a1b0f"
def sumOfMultiples1(n: int) -> int:
    # inclusion-exclusion over arithmetic series: multiples of 3 plus multiples
    # of 5, minus multiples of 15 (counted twice)
    return sum(range(3, n, 3)) + sum(range(5, n, 5)) - sum(range(15, n, 15))
sumOfMultiples1(1000)
# + colab={"base_uri": "https://localhost:8080/"} id="0olTZdiBcJoJ" outputId="41e2f2f4-2d18-4593-a9f0-af80e06f542e"
def sumOfMultiples2(n: int) -> int:
    # the union of the two ranges removes duplicates such as 15, 30, ...
    return sum(set(range(3, n, 3)) | set(range(5, n, 5)))
sumOfMultiples2(1000)
# + [markdown] id="r9_K3RhE1dZi"
# ## Even Fibonacci numbers
#
# Problem - 2
#
# + [markdown] id="X79WVV7Y5gOc"
# METHOD-1
# + id="Cyz_jhLN1vTO"
def fib_sequence(n: int) -> list:
    a, b = 0, 1
    seq = []
    while a < n:
        seq.append(a)
        a, b = b, a + b
    return seq
#fib_sequence(10)
# + id="sUicQtaj2-OC"
def evenFibSum(n: int) -> int:
    seq = fib_sequence(n)
    return sum([num for num in seq if num % 2 == 0])
# + colab={"base_uri": "https://localhost:8080/"} id="f3ScPgzB3uXa" outputId="d7a87ee6-4eac-4855-eee9-ddd995a5da5b"
print(evenFibSum(10))
print(evenFibSum(4000000))
# + [markdown] id="EhyR1Z7s5bPG"
# METHOD-2
# + id="9ko5YR8w5KMS"
def even_fib(limit):
    a, b = 0, 1
    while a < limit:
        if not a % 2:
            #print(a)
            yield a
        a, b = b, a + b
# + colab={"base_uri": "https://localhost:8080/"} id="1fz9Q18m5Lk0" outputId="131c03b1-e6d7-4b28-90e0-e8a552a1ac4a"
print(sum(even_fib(4000000)))
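# METHOD-3 (a sketch, not in the original notebook): every third Fibonacci number is
# even, and those even terms satisfy E(k) = 4*E(k-1) + E(k-2), so they can be
# generated without touching the odd terms at all.

```python
def even_fib_sum_direct(limit):
    # 2, 8, 34, 144, ... each even Fibonacci number is 4 times the previous one
    # plus the one before that (follows from applying F(n) = F(n-1) + F(n-2) three times)
    a, b = 2, 8
    total = 0
    while a < limit:
        total += a
        a, b = b, 4 * b + a
    return total

print(even_fib_sum_direct(4000000))  # 4613732, matching the methods above
```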
# + [markdown] id="VrFlkQ198hUE"
# ## Largest prime factor
#
# Problem 3
# + colab={"base_uri": "https://localhost:8080/"} id="sNDRIiw3z5Ms" outputId="670f8891-6468-48c2-a14c-5d5c8013ea7c"
def prime_factors(n):
    i = 2
    factors = []
    while i * i <= n:
        if n % i:
            i += 1
        else:
            n //= i
            factors.append(i)
    if n > 1:
        factors.append(n)
    return factors
p = prime_factors(600851475143)
print(p[-1])
# + [markdown] id="cCHa4QtHkSBt"
# ## Largest palindrome product
#
# Problem 4
# + id="aT7dE2GhClSt"
def checkPalindrome(num: int) -> bool:
    return str(num) == str(num)[::-1]
# + colab={"base_uri": "https://localhost:8080/"} id="dcsy6PMPCLbZ" outputId="4748b63c-ecb7-43e9-a2ca-b1c86920e354"
def largestPalindrome() -> int:
    return max([i * x for i in range(999, 100, -1) for x in range(999, 100, -1) if checkPalindrome(i * x)])
print(largestPalindrome())
# + [markdown] id="Lt-UsV9G9NSE"
# ## Smallest multiple
#
# Problem 5
# + colab={"base_uri": "https://localhost:8080/"} id="Nwx731CZKyjW" outputId="19018214-4d90-4c2a-b9e2-c94eec08a1f2"
def gcd(x, y): return y and gcd(y, x % y) or x
def lcm(x, y): return x * y // gcd(x, y)  # floor division keeps everything an int
n = 20
for i in range(1, 21):
    n = lcm(n, i)
print(n)
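# For comparison (a sketch): the standard library's `math.gcd` makes the hand-rolled
# helper above unnecessary, and `reduce` folds the lcm over 1..n in one line.

```python
from functools import reduce
from math import gcd

def smallest_multiple(n: int) -> int:
    # lcm(a, b) = a * b // gcd(a, b), folded over 1..n
    return reduce(lambda acc, i: acc * i // gcd(acc, i), range(1, n + 1), 1)

print(smallest_multiple(20))  # 232792560
```

# On Python 3.9+ `math.lcm` accepts multiple arguments, so `math.lcm(*range(1, 21))`
# gives the same result directly.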
# + [markdown] id="QrdYCV2EMOFX"
# ## Sum square difference
#
# Problem 6
# + id="Xq8hPE4jMZel"
def sumOfsquares(n: int) -> int:
    return sum([(i + 1) ** 2 for i in range(n)])
# + id="XnmWx-3xssrj"
def squareOfSum(n: int) -> int:
    return sum([i + 1 for i in range(n)]) ** 2
# + colab={"base_uri": "https://localhost:8080/"} id="5RtoRGfwtEwy" outputId="c29a9765-422b-4441-a754-630d7bcc3f86"
def sumSquareDiff(limit: int) -> int:
    return squareOfSum(limit) - sumOfsquares(limit)
sumSquareDiff(100)
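# The same answer follows in constant time from the closed forms
# sum(1..n) = n(n+1)/2 and sum(k^2, 1..n) = n(n+1)(2n+1)/6; a sketch:

```python
def sum_square_diff_closed_form(n: int) -> int:
    s = n * (n + 1) // 2                 # 1 + 2 + ... + n
    s2 = n * (n + 1) * (2 * n + 1) // 6  # 1^2 + 2^2 + ... + n^2
    return s * s - s2

print(sum_square_diff_closed_form(100))  # 25164150
```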
# + [markdown] id="q-nQjH0HuWzA"
# ## 10001st prime
#
# Problem 7
# + colab={"base_uri": "https://localhost:8080/"} id="X5SbpuNO1Zoe" outputId="8039f8cf-7610-4b5e-e734-c11a04950f10"
def nth_prime(n):
    counter = 2
    count = 0
    for i in range(3, n**2, 2):
        k = 1
        count += 1
        while k*k < i:
            k += 2
            if i % k == 0:
                break
        else:
            counter += 1
        if counter == n:
            return i, count
print(nth_prime(10001))
# + colab={"base_uri": "https://localhost:8080/"} id="HB1nLKadxefv" outputId="f8fd74df-a727-4f4e-d89c-3658b039ca6d"
n = input("enter the nth prime ")
num = 4
p = 2
while p < int(n):
    if all(num % i != 0 for i in range(2, num)):
        p = p + 1
    num = num + 1
print("nTH prime number: ", num - 1)
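# Both approaches above trial-divide every candidate. A sieve of Eratosthenes is much
# faster for large n; the bound below comes from Rosser's theorem (for n >= 6 the nth
# prime is below n*(ln n + ln ln n)). This is a sketch, not part of the original notebook.

```python
from math import log

def nth_prime_sieve(n: int) -> int:
    # sieve of Eratosthenes up to an upper bound on the nth prime
    limit = 15 if n < 6 else int(n * (log(n) + log(log(n)))) + 1
    is_prime = bytearray([1]) * limit
    is_prime[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            # cross off every multiple of i starting at i*i
            is_prime[i * i :: i] = bytearray(len(range(i * i, limit, i)))
    primes = [i for i in range(limit) if is_prime[i]]
    return primes[n - 1]

print(nth_prime_sieve(10001))  # 104743
```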
# + [markdown] id="EzmxE4E11KF_"
#
# + id="CKEfCsfN5uJy"
def hailstone_Sequence(n):
    lst = [n]
    while n > 1:  # "while n >= 1" would never stop: 1 -> 4 -> 2 -> 1 cycles forever
        if n % 2:
            n = n*3 + 1
        else:
            n //= 2
        lst.append(n)
    return lst
print(hailstone_Sequence(7))
# + id="7_J3Eey18ycf"
lst = range(10)
print(list(lst))  # a bare range prints as "range(0, 10)"; list() shows the values
# + id="_fCj5Mm29T1m"
def collatz_sequence(num):
    print(num)
    num = num//2 if num % 2 == 0 else num*3 + 1
    if num == 1:
        print(num)
        return
    collatz_sequence(num)
collatz_sequence(13)
| Project_Euler_Problem.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import torchvision
# # Requirements
torch.__version__
# PyTorch deployment requires using an unstable version of PyTorch (1.0.0+).
#
# In order to install this version, use the "Preview" option when choosing your PyTorch version.
#
# https://pytorch.org/
# Let's create an example model using ResNet-18
model = torchvision.models.resnet18()
model
# Creating a sample input
# It will be passed through the network so tracing can record the shapes involved
sample = torch.rand(size=(1, 3, 224, 224))
# Creating a so-called "traced script module" (TorchScript)
traced_script_module = torch.jit.trace(model, sample)
traced_script_module
# The TracedModule is capable of making predictions
sample_prediction = traced_script_module(torch.ones(size=(1, 3, 224, 224)))
sample_prediction.shape
# Serializing the script module
traced_script_module.save('./models_deployment/model.pt')
# ## The module is ready to be loaded into C++ !
#
# That requires:
# - LibTorch
# - CMake
| 18-11-22-Deep-Learning-with-PyTorch/07-Deploying PyTorch Models/Deploying_PyTorch_models_in_production.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Precise text layout
#
#
# You can precisely layout text in data or axes coordinates. This example shows
# you some of the alignment and rotation specifications for text layout.
#
# +
import matplotlib.pyplot as plt
# Build a rectangle in axes coords
left, width = .25, .5
bottom, height = .25, .5
right = left + width
top = bottom + height
ax = plt.gca()
p = plt.Rectangle((left, bottom), width, height, fill=False)
p.set_transform(ax.transAxes)
p.set_clip_on(False)
ax.add_patch(p)
ax.text(left, bottom, 'left top',
horizontalalignment='left',
verticalalignment='top',
transform=ax.transAxes)
ax.text(left, bottom, 'left bottom',
horizontalalignment='left',
verticalalignment='bottom',
transform=ax.transAxes)
ax.text(right, top, 'right bottom',
horizontalalignment='right',
verticalalignment='bottom',
transform=ax.transAxes)
ax.text(right, top, 'right top',
horizontalalignment='right',
verticalalignment='top',
transform=ax.transAxes)
ax.text(right, bottom, 'center top',
horizontalalignment='center',
verticalalignment='top',
transform=ax.transAxes)
ax.text(left, 0.5 * (bottom + top), 'right center',
horizontalalignment='right',
verticalalignment='center',
rotation='vertical',
transform=ax.transAxes)
ax.text(left, 0.5 * (bottom + top), 'left center',
horizontalalignment='left',
verticalalignment='center',
rotation='vertical',
transform=ax.transAxes)
ax.text(0.5 * (left + right), 0.5 * (bottom + top), 'middle',
horizontalalignment='center',
verticalalignment='center',
transform=ax.transAxes)
ax.text(right, 0.5 * (bottom + top), 'centered',
horizontalalignment='center',
verticalalignment='center',
rotation='vertical',
transform=ax.transAxes)
ax.text(left, top, 'rotated\nwith newlines',
horizontalalignment='center',
verticalalignment='center',
rotation=45,
transform=ax.transAxes)
plt.axis('off')
plt.show()
| matplotlib/gallery_jupyter/text_labels_and_annotations/text_alignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Solo work with Git
# So, we're in our git working directory:
import os
top_dir = os.getcwd()
git_dir = os.path.join(top_dir, 'learning_git')
working_dir=os.path.join(git_dir, 'git_example')
os.chdir(working_dir)
working_dir
# ### A first example file
#
# So let's create an example file, and see how to start to manage a history of changes to it.
# <my editor> index.md # Type some content into the file.
# %%writefile index.md
Mountains in the UK
===================
England is not very mountainous.
But has some tall hills, and maybe a mountain or two depending on your definition.
# + attributes={"classes": [" Bash"], "id": ""}
# cat index.md
# -
# ### Telling Git about the File
#
# So, let's tell Git that `index.md` is a file which is important, and we would like to keep track of its history:
# + attributes={"classes": [" Bash"], "id": ""} language="bash"
# git add index.md
# -
# Don't forget: Any files in repositories which you want to "track" need to be added with `git add` after you create them.
#
# ### Our first commit
#
# Now, we need to tell Git to record the first version of this file in the history of changes:
# + attributes={"classes": [" Bash"], "id": ""} language="bash"
# git commit -m "First commit of discourse on UK topography"
# -
# And note the confirmation from Git.
#
# There's a lot of output there you can ignore for now.
# ### Configuring Git with your editor
#
# If you don't type in the log message directly with -m "Some message", then an editor will pop up, to allow you
# to edit your message on the fly.
# For this to work, you have to tell git where to find your editor.
# + attributes={"classes": [" Bash"], "id": ""} language="bash"
# git config --global core.editor vim
# -
# You can find out what you currently have with:
# + attributes={"classes": [" Bash"], "id": ""} language="bash"
# git config --get core.editor
# -
# To configure Notepad++ on windows you'll need something like the below, ask a demonstrator to help for your machine.
# + [markdown] attributes={"classes": [" Bash"], "id": ""}
# ``` bash
# git config --global core.editor "'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -nosession -noPlugin"
# ```
# -
# I'm going to be using `vim` as my editor, but you can use whatever editor you prefer. (Windows users could use "Notepad++", Mac users could use "textmate" or "sublime text", linux users could use `vim`, `nano` or `emacs`.)
# ### Git log
#
# Git now has one change in its history:
# + attributes={"classes": [" Bash"], "id": ""} language="bash"
# git log
# -
# You can see the commit message, author, and date...
# ### Hash Codes
#
# The commit "hash code", e.g.
#
# `c438f1716b2515563e03e82231acbae7dd4f4656`
#
# is a unique identifier of that particular revision.
#
# (This is a really long code, but whenever you need to use it, you can just use the first few characters, however many characters is long enough to make it unique, `c438` for example. )
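# The hash is not arbitrary: for a blob (a file's contents) git takes the SHA-1 of a
# short header followed by the content. A few lines of Python reproduce what
# `git hash-object` computes (a sketch, using only the standard library):

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    # git stores a file as "blob <size>\0<content>" and names it by the SHA-1 of that
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

print(git_blob_hash(b"hello\n"))  # same id as `echo hello | git hash-object --stdin`
```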
# ### Nothing to see here
#
# Note that git will now tell us that our "working directory" is up-to-date with the repository: there are no changes to the files that aren't recorded in the repository history:
# + attributes={"classes": [" Bash"], "id": ""} language="bash"
# git status
# -
# Let's edit the file again:
#
# vim index.md
# +
# %%writefile index.md
Mountains in the UK
===================
England is not very mountainous.
But has some tall hills, and maybe a mountain or two depending on your definition.
Mount Fictional, in Barsetshire, U.K. is the tallest mountain in the world.
# + attributes={"classes": [" Bash"], "id": ""}
# cat index.md
# -
# ### Unstaged changes
# + attributes={"classes": [" Bash"], "id": ""} language="bash"
# git status
# -
# We can now see that there is a change to "index.md" which is currently "not staged for commit". What does this mean?
#
# If we do a `git commit` now *nothing will happen*.
#
# Git will only commit changes to files that you choose to include in each commit.
#
# This is a difference from other version control systems, where committing will affect all changed files.
# We can see the differences in the file with:
# + language="bash"
# git diff
# -
# Deleted lines are prefixed with a minus, added lines prefixed with a plus.
# ### Staging a file to be included in the next commit
#
# To include the file in the next commit, we have a few choices. This is one of the things to be careful of with git: there are lots of ways to do similar things, and it can be hard to keep track of them all.
# + attributes={"classes": [" Bash"], "id": ""} language="bash"
# git add --update
# -
# This says "include in the next commit, all files which have ever been included before".
#
# Note that `git add` is the command we use to introduce git to a new file, but also the command we use to "stage" a file to be included in the next commit.
# ### The staging area
#
# The "staging area" or "index" is the git jargon for the place which contains the list of changes which will be included in the next commit.
#
# You can include specific changes to specific files with git add, commit them, add some more files, and commit them. (You can even add specific changes within a file to be included in the index.)
# ### Message Sequence Charts
# In order to illustrate the behaviour of Git, it will be useful to be able to generate figures in Python
# of a "message sequence chart" flavour.
# There's a nice online tool to do this, called "Message Sequence Charts".
# Have a look at https://www.websequencediagrams.com
# Instead of just showing you these diagrams, I'm showing you in this notebook how I make them.
# This is part of our "reproducible computing" approach; always generating all our figures from code.
# Here's some quick code in the Notebook to download and display an MSC illustration, using the Web Sequence Diagrams API:
# +
# %%writefile wsd.py
import requests
import re
import IPython
def wsd(code):
    response = requests.post("http://www.websequencediagrams.com/index.php", data={
        'message': code,
        'apiVersion': 1,
    })
    expr = re.compile(r"(\?(img|pdf|png|svg)=[a-zA-Z0-9]+)")
    m = expr.search(response.text)
    if m is None:
        print("Invalid response from server.")
        return False
    image = requests.get("http://www.websequencediagrams.com/" + m.group(0))
    return IPython.core.display.Image(image.content)
# -
from wsd import wsd
# %matplotlib inline
wsd("Sender->Recipient: Hello\n Recipient->Sender: Message received OK")
# ### The Levels of Git
# Let's make ourselves a sequence chart to show the different aspects of Git we've seen so far:
message="""
Working Directory -> Staging Area : git add
Staging Area -> Local Repository : git commit
Working Directory -> Local Repository : git commit -a
"""
wsd(message)
# ### Review of status
# + attributes={"classes": [" Bash"], "id": ""} language="bash"
# git status
# + language="bash"
# git commit -m "Add a lie about a mountain"
# + attributes={"classes": [" Bash"], "id": ""} language="bash"
# git log
# -
# Great, we now have a file which contains a mistake.
# ### Carry on regardless
#
# In a while, we'll use Git to roll back to the last correct version: this is one of the main reasons we wanted to use version control, after all! But for now, let's do just as we would if we were writing code, not notice our mistake and keep working...
# ```bash
# vim index.md
# ```
# +
# %%writefile index.md
Mountains and Hills in the UK
===================
England is not very mountainous.
But has some tall hills, and maybe a mountain or two depending on your definition.
Mount Fictional, in Barsetshire, U.K. is the tallest mountain in the world.
# + attributes={"classes": [" Bash"], "id": ""}
# cat index.md
# -
# ### Commit with a built-in-add
# + language="bash"
# git commit -am "Change title"
# -
# This last command, `git commit -a` automatically adds changes to all tracked files to the staging area, as part of the commit command. So, if you never want to just add changes to some tracked files but not others, you can just use this and forget about the staging area!
# ### Review of changes
# + attributes={"classes": [" Bash"], "id": ""} language="bash"
# git log | head
# -
# We now have three changes in the history:
# + attributes={"classes": [" Bash"], "id": ""} language="bash"
# git log --oneline
# -
# ### Git Solo Workflow
# We can make a diagram that summarises the above story:
message="""
participant "Jim's repo" as R
participant "Jim's index" as I
participant Jim as J
note right of J: vim index.md
note right of J: git init
J->I: create
J->R: create
note right of J: git add index.md
J->I: Add content of index.md
note right of J: git commit
J->R: Commit content of index.md
note right of J: vim index.md
note right of J: git add --update
J->I: Add content of index.md
note right of J: git commit -m "Add a lie"
I->R: Commit change to index.md
note right of J: vim index.md
note right of J: git commit -am "Change title"
J->I: Add content of index.md
J->R: Commit change to index.md
"""
wsd(message)
| ch02git/02Solo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/avani17101/Coursera-GANs-Specialization/blob/main/C2W2_(Optional_Notebook)_Score_Based_Generative_Modeling.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="21v75FhSkfCq"
# # Score-Based Generative Modeling
#
# *Please note that this is an optional notebook meant to introduce more advanced concepts. If you’re up for a challenge, take a look and don’t worry if you can’t follow everything. There is no code to implement—only some cool code for you to learn and run!*
#
# ### Goals
# This is a hitchhiker's guide to score-based generative models, a family of approaches based on [estimating gradients of the data distribution](https://arxiv.org/abs/1907.05600). They have obtained high-quality samples comparable to GANs (like below, figure from [this paper](https://arxiv.org/abs/2006.09011)) without requiring adversarial training, and are considered by some to be [the new contender to GANs](https://ajolicoeur.wordpress.com/the-new-contender-to-gans-score-matching-with-langevin-sampling/).
#
# 
#
#
# + [markdown] id="XCR6m0HjWGVV"
# ## Introduction
#
# ### Score and Score-Based Models
# Given a probability density function $p(\mathbf{x})$, we define the *score* as $$\nabla_\mathbf{x} \log p(\mathbf{x}).$$ As you might guess, score-based generative models are trained to estimate $\nabla_\mathbf{x} \log p(\mathbf{x})$. Unlike likelihood-based models such as flow models or autoregressive models, score-based models do not have to be normalized and are easier to parameterize. For example, consider a non-normalized statistical model $p_\theta(\mathbf{x}) = \frac{e^{-E_\theta(\mathbf{x})}}{Z_\theta}$, where $E_\theta(\mathbf{x}) \in \mathbb{R}$ is called the energy function and $Z_\theta$ is an unknown normalizing constant that makes $p_\theta(\mathbf{x})$ a proper probability density function. The energy function is typically parameterized by a flexible neural network. When training it as a likelihood model, we need to know the normalizing constant $Z_\theta$ by computing complex high-dimensional integrals, which is typically intractable. In contrast, when computing its score, we obtain $\nabla_\mathbf{x} \log p_\theta(\mathbf{x}) = -\nabla_\mathbf{x} E_\theta(\mathbf{x})$ which does not require computing the normalizing constant $Z_\theta$.
#
# In fact, any neural network that maps an input vector $\mathbf{x} \in \mathbb{R}^d$ to an output vector $\mathbf{y} \in \mathbb{R}^d$ can be used as a score-based model, as long as the output and input have the same dimensionality. This yields huge flexibility in choosing model architectures.
#
# ### Perturbing Data with a Diffusion Process
#
# In order to generate samples with score-based models, we need to consider a [diffusion process](https://en.wikipedia.org/wiki/Diffusion_process) that corrupts data slowly into random noise. Scores will arise when we reverse this diffusion process for sample generation. You will see this later in the notebook.
#
# A diffusion process is a [stochastic process](https://en.wikipedia.org/wiki/Stochastic_process#:~:text=A%20stochastic%20or%20random%20process%20can%20be%20defined%20as%20a,an%20element%20in%20the%20set.) similar to [Brownian motion](https://en.wikipedia.org/wiki/Brownian_motion). Its sample paths resemble the trajectory of a particle submerged in a flowing fluid, which moves randomly due to unpredictable collisions with other particles. Let $\{\mathbf{x}(t) \in \mathbb{R}^d \}_{t=0}^T$ be a diffusion process, indexed by the continuous time variable $t\in [0,T]$. A diffusion process is governed by a stochastic differential equation (SDE) of the following form
#
# \begin{align*}
# d \mathbf{x} = \mathbf{f}(\mathbf{x}, t) d t + g(t) d \mathbf{w},
# \end{align*}
#
# where $\mathbf{f}(\cdot, t): \mathbb{R}^d \to \mathbb{R}^d$ is called the *drift coefficient* of the SDE, $g(t) \in \mathbb{R}$ is called the *diffusion coefficient*, and $\mathbf{w}$ represents the standard Brownian motion. You can understand an SDE as a stochastic generalization of ordinary differential equations (ODEs). Particles moving according to an SDE not only follow the deterministic drift $\mathbf{f}(\mathbf{x}, t)$, but are also affected by the random noise coming from $g(t) d\mathbf{w}$.
#
# For score-based generative modeling, we will choose a diffusion process such that $\mathbf{x}(0) \sim p_0$, where we have a dataset of i.i.d. samples, and $\mathbf{x}(T) \sim p_T$, for which we have a tractable form to sample from.
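#
# The forward corruption can be sketched in a few lines (an illustrative toy with zero drift and a constant diffusion coefficient $g$; the notebook's actual SDE appears later). Each Euler-Maruyama step adds Gaussian noise of variance $g^2 \Delta t$, so a point mass diffuses to roughly $\mathcal{N}(0, g^2 T)$:

```python
import math
import torch

# Simulate dx = g dw from x(0) = 0 with Euler-Maruyama; g and T are illustrative.
g, T, n_steps = 2.0, 1.0, 1000
dt = T / n_steps
x = torch.zeros(10000)  # a point-mass "data" distribution
for _ in range(n_steps):
    x = x + g * math.sqrt(dt) * torch.randn_like(x)
print(x.var().item())  # close to g**2 * T = 4.0
```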
#
# ### Reversing the Diffusion Process Yields Score-Based Generative Models
# By starting from a sample from $p_T$ and reversing the diffusion process, we will be able to obtain a sample from $p_\text{data}$. Crucially, the reverse process is a diffusion process running backwards in time. It is given by the following reverse-time SDE
#
# \begin{align}
# d\mathbf{x} = [\mathbf{f}(\mathbf{x}, t) - g^2(t)\nabla_{\mathbf{x}}\log p_t(\mathbf{x})] dt + g(t) d\bar{\mathbf{w}},
# \end{align}
#
# where $\bar{\mathbf{w}}$ is a Brownian motion in the reverse time direction, and $dt$ here represents an infinitesimal negative time step. Here $p_t(\mathbf{x})$ represents the distribution of $\mathbf{x}(t)$. This reverse SDE can be computed once we know the drift and diffusion coefficients of the forward SDE, as well as the score of $p_t(\mathbf{x})$ for each $t\in[0, T]$.
#
# The overall intuition of score-based generative modeling with SDEs can be summarized in the illustration below
# 
#
# ### Score Estimation
#
# Based on the above intuition, we can use the time-dependent score function $\nabla_\mathbf{x} \log p_t(\mathbf{x})$ to construct the reverse-time SDE, and then solve it numerically to obtain samples from $p_0$ using samples from a prior distribution $p_T$. We can train a time-dependent score-based model $s_\theta(\mathbf{x}, t)$ to approximate $\nabla_\mathbf{x} \log p_t(\mathbf{x})$, using the following weighted sum of [denoising score matching](http://www.iro.umontreal.ca/~vincentp/Publications/smdae_techreport.pdf) objectives.
#
# \begin{align}
# \min_\theta \mathbb{E}_{t\sim \mathcal{U}(0, T)} [\lambda(t) \mathbb{E}_{\mathbf{x}(0) \sim p_0(\mathbf{x})}\mathbb{E}_{\mathbf{x}(t) \sim p_{0t}(\mathbf{x}(t) \mid \mathbf{x}(0))}[ \|s_\theta(\mathbf{x}(t), t) - \nabla_{\mathbf{x}(t)}\log p_{0t}(\mathbf{x}(t) \mid \mathbf{x}(0))\|_2^2]],
# \end{align}
# where $\mathcal{U}(0,T)$ is a uniform distribution over $[0, T]$, $p_{0t}(\mathbf{x}(t) \mid \mathbf{x}(0))$ denotes the transition probability from $\mathbf{x}(0)$ to $\mathbf{x}(t)$, and $\lambda(t) \in \mathbb{R}^+$ denotes a continuous weighting function.
#
# In the objective, the expectation over $\mathbf{x}(0)$ can be estimated with empirical means over data samples from $p_0$. The expectation over $\mathbf{x}(t)$ can be estimated by sampling from $p_{0t}(\mathbf{x}(t) \mid \mathbf{x}(0))$, which is efficient when the drift coefficient $\mathbf{f}(\mathbf{x}, t)$ is affine. The weight function $\lambda(t)$ is typically chosen to be inversely proportional to $\mathbb{E}[\|\nabla_{\mathbf{x}}\log p_{0t}(\mathbf{x}(t) \mid \mathbf{x}(0)) \|_2^2]$.
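#
# For Gaussian transition kernels, the target score in this objective has a simple closed form: if $p_{0t}(\mathbf{x}(t) \mid \mathbf{x}(0)) = \mathcal{N}(\mathbf{x}(t); \mathbf{x}(0), \sigma^2\mathbf{I})$ and $\mathbf{x}(t) = \mathbf{x}(0) + \sigma \mathbf{z}$, then $\nabla_{\mathbf{x}(t)}\log p_{0t} = -\mathbf{z}/\sigma$. A quick autograd check (with an illustrative noise level):

```python
import torch

# Verify the closed-form denoising target -z / sigma against autograd.
sigma = 1.5                                 # illustrative noise level
x0 = torch.randn(4)
z = torch.randn(4)
xt = (x0 + sigma * z).requires_grad_(True)  # x(t) = x(0) + sigma * z
log_p = torch.distributions.Normal(x0, sigma).log_prob(xt).sum()
score = torch.autograd.grad(log_p, xt)[0]   # equals -(xt - x0) / sigma^2 = -z / sigma
assert torch.allclose(score, -z / sigma, atol=1e-5)
```

This identity is exactly what the loss function below exploits when it regresses $s_\theta$ against the noise $\mathbf{z}$.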
#
#
# + [markdown] id="GFuMaPov5HlV"
# ### Time-Dependent Score-Based Model
#
# There are no restrictions on the network architecture of time-dependent score-based models, except that their output should have the same dimensionality as the input, and they should be conditioned on time.
#
# Several useful tips on architecture choice:
# * It usually performs well to use the [U-net](https://arxiv.org/abs/1505.04597) architecture as the backbone of the score network $s_\theta(\mathbf{x}, t)$,
#
# * We can incorporate the time information via [Gaussian random features](https://arxiv.org/abs/2006.10739). Specifically, we first sample $\omega \sim \mathcal{N}(\mathbf{0}, s^2\mathbf{I})$ which is subsequently fixed for the model (i.e., not learnable). For a time step $t$, the corresponding Gaussian random feature is defined as
# \begin{align}
# [\sin(2\pi \omega t) ; \cos(2\pi \omega t)],
# \end{align}
# where $[\vec{a} ; \vec{b}]$ denotes the concatenation of vector $\vec{a}$ and $\vec{b}$. This Gaussian random feature can be used as an encoding for time step $t$ so that the score network can condition on $t$ by incorporating this encoding. We will see this further in the code.
#
# * We can rescale the output of the U-net by $1/\sqrt{\mathbb{E}[\|\nabla_{\mathbf{x}}\log p_{0t}(\mathbf{x}(t) \mid \mathbf{x}(0)) \|_2^2]}$. This is because the optimal $s_\theta(\mathbf{x}(t), t)$ has an $\ell_2$-norm close to $\mathbb{E}[\|\nabla_{\mathbf{x}}\log p_{0t}(\mathbf{x}(t) \mid \mathbf{x}(0))\|_2]$, and the rescaling helps capture the norm of the true score. Recall that the training objective contains sums of the form
# \begin{align*}
# \mathbb{E}_{\mathbf{x}(t) \sim p_{0t}(\mathbf{x}(t) \mid \mathbf{x}(0))}[ \|s_\theta(\mathbf{x}(t), t) - \nabla_{\mathbf{x}(t)}\log p_{0t}(\mathbf{x}(t) \mid \mathbf{x}(0))\|_2^2].
# \end{align*}
# Therefore, it is natural to expect that the optimal score model $s_\theta(\mathbf{x}, t) \approx \nabla_{\mathbf{x}(t)} \log p_{0t}(\mathbf{x}(t) \mid \mathbf{x}(0))$.
#
# * Use [exponential moving average](https://discuss.pytorch.org/t/how-to-apply-exponential-moving-average-decay-for-variables/10856/3) (EMA) of weights when sampling. This can greatly improve sample quality, but requires slightly longer training time, and requires more work in implementation. We do not include this in this tutorial, but highly recommend it when you employ score-based generative modeling to tackle more challenging real problems.
# + id="YyQtV7155Nht" cellView="form"
#@title Defining a time-dependent score-based model (double click to expand or collapse)
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

class GaussianFourierProjection(nn.Module):
  """Gaussian random features for encoding time steps."""
  def __init__(self, embed_dim, scale=8.):
    super().__init__()
    # Randomly sample weights during initialization. These weights are fixed
    # during optimization and are not trainable.
    self.W = nn.Parameter(torch.randn(embed_dim // 2) * scale, requires_grad=False)

  def forward(self, x):
    x_proj = x[:, None] * self.W[None, :] * 2 * np.pi
    return torch.cat([torch.sin(x_proj), torch.cos(x_proj)], dim=-1)

class Dense(nn.Module):
  """A fully connected layer that reshapes outputs to feature maps."""
  def __init__(self, input_dim, output_dim):
    super().__init__()
    self.dense = nn.Linear(input_dim, output_dim)

  def forward(self, x):
    return self.dense(x)[..., None, None]

class ScoreNet(nn.Module):
  """A time-dependent score-based model built upon U-Net architecture."""
  def __init__(self, noise_scale, channels=[32, 64, 128, 256], embed_dim=256):
    """
    Initialize a time-dependent score-based network.
    noise_scale:
      a function that takes time t and gives the standard deviation of
      the perturbation kernel p_{0t}(x(t) | x(0)).
    channels:
      the number of channels for feature maps of each resolution.
    embed_dim:
      the dimensionality of Gaussian random feature embeddings.
    """
    super().__init__()
    # Gaussian random feature embedding layer for time
    self.embed = GaussianFourierProjection(embed_dim=embed_dim)
    # Encoding layers where the resolution decreases
    self.conv1 = nn.Conv2d(1, channels[0], 3, stride=1, bias=False)
    self.dense1 = Dense(embed_dim, channels[0])
    self.gnorm1 = nn.GroupNorm(4, num_channels=channels[0])
    self.conv2 = nn.Conv2d(channels[0], channels[1], 3, stride=2, bias=False)
    self.dense2 = Dense(embed_dim, channels[1])
    self.gnorm2 = nn.GroupNorm(32, num_channels=channels[1])
    self.conv3 = nn.Conv2d(channels[1], channels[2], 3, stride=2, bias=False)
    self.dense3 = Dense(embed_dim, channels[2])
    self.gnorm3 = nn.GroupNorm(32, num_channels=channels[2])
    self.conv4 = nn.Conv2d(channels[2], channels[3], 3, stride=2, bias=False)
    self.dense4 = Dense(embed_dim, channels[3])
    self.gnorm4 = nn.GroupNorm(32, num_channels=channels[3])
    # Decoding layers where the resolution increases
    self.tconv4 = nn.ConvTranspose2d(channels[3], channels[2], 3, stride=2, bias=False)
    self.dense5 = Dense(embed_dim, channels[2])
    self.tgnorm4 = nn.GroupNorm(32, num_channels=channels[2])
    self.tconv3 = nn.ConvTranspose2d(channels[2] + channels[2], channels[1], 3, stride=2, bias=False, output_padding=1)
    self.dense6 = Dense(embed_dim, channels[1])
    self.tgnorm3 = nn.GroupNorm(32, num_channels=channels[1])
    self.tconv2 = nn.ConvTranspose2d(channels[1] + channels[1], channels[0], 3, stride=2, bias=False, output_padding=1)
    self.dense7 = Dense(embed_dim, channels[0])
    self.tgnorm2 = nn.GroupNorm(32, num_channels=channels[0])
    self.tconv1 = nn.ConvTranspose2d(channels[0] + channels[0], 1, 3, stride=1)
    # The swish activation function
    self.act = lambda x: x * torch.sigmoid(x)
    self.noise_scale = noise_scale

  def forward(self, x, t):
    # Obtain the Gaussian random feature embedding for t
    embed = self.act(self.embed(t))
    # Encoding path
    h1 = self.conv1(x)
    ## Incorporate information from t
    h1 += self.dense1(embed)
    ## Group normalization
    h1 = self.gnorm1(h1)
    h1 = self.act(h1)
    h2 = self.conv2(h1)
    h2 += self.dense2(embed)
    h2 = self.gnorm2(h2)
    h2 = self.act(h2)
    h3 = self.conv3(h2)
    h3 += self.dense3(embed)
    h3 = self.gnorm3(h3)
    h3 = self.act(h3)
    h4 = self.conv4(h3)
    h4 += self.dense4(embed)
    h4 = self.gnorm4(h4)
    h4 = self.act(h4)
    # Decoding path
    h = self.tconv4(h4)
    h += self.dense5(embed)
    h = self.tgnorm4(h)
    h = self.act(h)
    ## Skip connections from the encoding path
    h = self.tconv3(torch.cat([h, h3], dim=1))
    h += self.dense6(embed)
    h = self.tgnorm3(h)
    h = self.act(h)
    h = self.tconv2(torch.cat([h, h2], dim=1))
    h += self.dense7(embed)
    h = self.tgnorm2(h)
    h = self.act(h)
    h = self.tconv1(torch.cat([h, h1], dim=1))
    # Normalize output based on the standard deviation of perturbation kernels.
    h = h / self.noise_scale(t)[:, None, None, None]
    return h
# + [markdown] id="PpJSwfyY6mJz"
# ## Training with Weighted Sum of Denoising Score Matching Objectives
#
# Now let's get our hands dirty on training. First of all, we need to specify an SDE that perturbs the data distribution $p_0$ to a prior distribution $p_T$. We choose the following SDE
# \begin{align*}
# d \mathbf{x} = \sqrt{\frac{d [\sigma^2(t)]}{dt}} d\mathbf{w},
# \end{align*}
# where $\sigma(t) = \sigma_{\text{min}}(\frac{\sigma_{\text{max}}}{\sigma_{\text{min}}})^t$, $t\in[0,1]$. In this case,
# \begin{align*}
# p_{0t}(\mathbf{x}(t) \mid \mathbf{x}(0)) = \mathcal{N}(\mathbf{x}(t); \mathbf{x}(0), [\sigma^2(t) - \sigma^2(0)]\mathbf{I})
# \end{align*}
# and $\lambda(t) \propto \sigma^2(t) - \sigma^2(0)$.
#
# When $\sigma_{\text{max}}$ is large enough, the prior distribution $p_1$ is
# \begin{align*}
# \int p_0(\mathbf{y})\mathcal{N}(\mathbf{x}; \mathbf{y}, [\sigma_{\text{max}}^2 - \sigma_{\text{min}}^2]\mathbf{I}) d \mathbf{y} \approx \mathcal{N}(\mathbf{x}; \mathbf{0}, [\sigma_{\text{max}}^2 - \sigma_{\text{min}}^2]\mathbf{I}),
# \end{align*}
# which is easy to sample from.
#
# Intuitively, this SDE captures a continuum of Gaussian perturbations with variance function $\sigma(t)^2 - \sigma^2(0)$, where $\sigma(t)$ is a strictly increasing function that grows exponentially fast. This continuum of perturbations allows us to gradually transfer samples from a data distribution $p_0$ to a simple Gaussian distribution $p_1$.
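#
# A quick numerical sanity check of this claim (an illustrative sketch; $\sigma_{\text{min}}$ and $\sigma_{\text{max}}$ here are example values, not necessarily the ones used for training below): simulating the forward SDE from a point mass should yield variance $\sigma^2(t) - \sigma^2(0)$.

```python
import numpy as np

# Simulate dx = sqrt(d[sigma^2(t)]/dt) dw from a point mass with Euler-Maruyama
# and check that Var[x(1)] matches sigma^2(1) - sigma^2(0).
sigma_min, sigma_max = 0.01, 10.0
sigma = lambda t: sigma_min * (sigma_max / sigma_min) ** t
# d[sigma^2(t)]/dt = 2 * sigma^2(t) * log(sigma_max / sigma_min)
dsigma2_dt = lambda t: 2 * sigma(t) ** 2 * np.log(sigma_max / sigma_min)

n_samples, n_steps = 100_000, 1000
dt = 1.0 / n_steps
x = np.zeros(n_samples)
for i in range(n_steps):
    x += np.sqrt(dsigma2_dt(i * dt) * dt) * np.random.randn(n_samples)
print(x.var(), sigma(1.0) ** 2 - sigma(0.0) ** 2)  # both close to 100
```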
# + id="zOsoqPdXHuL5" cellView="form"
#@title Loss function (double click to expand or collapse)
def noise_scale(t, sigma_min=0.01, sigma_max=10, grad=False):
  """
  Compute quantities related to the perturbation kernel p_{0t}(x(t) | x(0)).
  t: a vector of time steps.
  sigma_min: the minimum value of the sigma function.
  sigma_max: the maximum value of the sigma function.
  grad: if False, only return the standard deviation of p_{0t}(x(t) | x(0)).
    Otherwise return both the standard deviation and the gradient of sigma^2(t).
    This gradient will be useful for sample generation.
  """
  noise = sigma_min * (sigma_max / sigma_min)**t
  if not grad:
    return torch.sqrt(noise**2 - sigma_min**2)
  else:
    dnoise_dt = sigma_min * (sigma_max / sigma_min)**t * np.log(sigma_max / sigma_min)
    dnoise2_dt = 2 * noise * dnoise_dt
    return torch.sqrt(noise**2 - sigma_min**2), dnoise2_dt

def loss_func(model, x, noise_scale, eps=1e-3):
  """
  The loss function for training score-based generative models.
  model: a PyTorch model instance that represents a time-dependent score-based model.
  x: a mini-batch of input images.
  noise_scale: a function that computes the standard deviation of perturbation kernels.
  eps: a tolerance value for numerical stability.
  """
  random_t = torch.rand(x.shape[0], device=x.device) * (1. - eps) + eps
  noise_scales = noise_scale(random_t)
  z = torch.randn_like(x)
  perturbed_x = x + z * noise_scales[:, None, None, None]
  score = model(perturbed_x, random_t)
  loss = torch.sum((score * noise_scales[:, None, None, None] + z).reshape(x.shape[0], -1)**2, dim=-1).mean()
  return loss
# + id="8PPsLx4dGCGa" cellView="form"
#@title Training (double click to expand or collapse)
import torch
import functools
from torch.optim import Adam
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
from torchvision.datasets import MNIST
import tqdm.notebook

device = 'cuda' #@param ['cuda', 'cpu'] {'type':'string'}
sigma_min = 0.01 #@param {'type':'number'}
sigma_max = 22 #@param {'type':'number'}
noise_scale_func = functools.partial(noise_scale, sigma_min=sigma_min, sigma_max=sigma_max, grad=False)
score_model = torch.nn.DataParallel(ScoreNet(noise_scale=noise_scale_func))
score_model = score_model.to(device)

n_epochs = 50 #@param {'type':'integer'}
## size of a mini-batch
batch_size = 32 #@param {'type':'integer'}
## learning rate
lr = 1e-4 #@param {'type':'number'}

dataset = MNIST('.', train=True, transform=transforms.ToTensor(), download=True)
data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=4)

optimizer = Adam(score_model.parameters(), lr=lr)
for epoch in range(n_epochs):
  avg_loss = 0.
  num_items = 0
  for x, y in tqdm.notebook.tqdm(data_loader):
    optimizer.zero_grad()
    x = x.to(device)
    loss = loss_func(score_model, x, noise_scale_func)
    loss.backward()
    optimizer.step()
    avg_loss += loss.item() * x.shape[0]
    num_items += x.shape[0]
  # Print the averaged training loss so far.
  print(f'epoch: {epoch}, average loss: {avg_loss / num_items}')
  # Save one checkpoint after each epoch of training.
  torch.save(score_model.state_dict(), 'ckpt.pth')
# + [markdown] id="tldaUHUtHuej"
# ## Sampling with Numerical SDE Solvers
# Recall that for any SDE of the form
# \begin{align*}
# d \mathbf{x} = \mathbf{f}(\mathbf{x}, t) dt + g(t) d\mathbf{w},
# \end{align*}
# the reverse-time SDE is given by
# \begin{align*}
# d \mathbf{x} = [\mathbf{f}(\mathbf{x}, t) - g(t)^2 \nabla_\mathbf{x} \log p_t(\mathbf{x})] dt + g(t) d \bar{\mathbf{w}}.
# \end{align*}
# Since we have chosen the forward SDE to be
# \begin{align*}
# d \mathbf{x} = \sqrt{\frac{d [\sigma^2(t)]}{dt}} d\mathbf{w},
# \end{align*}
# where $\sigma(t) = \sigma_{\text{min}}(\frac{\sigma_{\text{max}}}{\sigma_{\text{min}}})^t$, $t\in[0,1]$, the reverse-time SDE is given by
# \begin{align*}
# d\mathbf{x} = -\frac{d[\sigma^2(t)]}{dt} \nabla_\mathbf{x} \log p_t(\mathbf{x}) dt + \sqrt{\frac{d[\sigma^2(t)]}{d t}} d \bar{\mathbf{w}}.
# \end{align*}
# To sample from our time-dependent score-based model $s_\theta(\mathbf{x}, t)$, we can first draw a sample from $p_1 \approx \mathcal{N}(\mathbf{x}; \mathbf{0}, [\sigma_{\text{max}}^2 - \sigma_{\text{min}}^2]\mathbf{I})$, and then solve the reverse-time SDE with numerical methods.
#
# Specifically, using our time-dependent score-based model, the reverse-time SDE can be approximated by
# \begin{align*}
# d\mathbf{x} = -\frac{d[\sigma^2(t)]}{dt} s_\theta(\mathbf{x}, t) dt + \sqrt{\frac{d[\sigma^2(t)]}{d t}} d \bar{\mathbf{w}}
# \end{align*}
#
# Next, one can use numerical methods to solve the reverse-time SDE, such as the [Euler-Maruyama](https://en.wikipedia.org/wiki/Euler%E2%80%93Maruyama_method) approach. It is based on a simple discretization of the SDE, replacing $dt$ with $\Delta t$ and $d \mathbf{w}$ with a Gaussian increment $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \Delta t \mathbf{I})$. When applied to our reverse-time SDE, we obtain the following iteration rule
# \begin{align}
# \mathbf{x}_{t-\Delta t} = \mathbf{x}_t + \frac{d[\sigma^2(t)]}{dt}s_\theta(\mathbf{x}_t, t)\Delta t + \sqrt{\frac{d[\sigma^2(t)]}{dt}\Delta t} \mathbf{z}_t,
# \end{align}
# where $\mathbf{z}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$.
# + [markdown] id="DC6QVkUQvFyB"
# ## Sampling with Predictor-Corrector Methods
#
# Aside from generic numerical SDE solvers, we can leverage special properties of our reverse-time SDE for better solutions. Since we have an estimate of the score of $p_t(\mathbf{x}(t))$ via the score-based model, i.e., $s_\theta(\mathbf{x}, t) \approx \nabla_{\mathbf{x}(t)} \log p_t(\mathbf{x}(t))$, we can leverage score-based MCMC approaches, such as Langevin MCMC, to correct the solution obtained by numerical SDE solvers.
#
# Score-based MCMC approaches can produce samples from a distribution $p(\mathbf{x})$ once its score $\nabla_\mathbf{x} \log p(\mathbf{x})$ is known. For example, Langevin MCMC operates by running the following iteration rule for $i=1,2,\cdots, N$:
# \begin{align*}
# \mathbf{x}_{i+1} = \mathbf{x}_{i} + \epsilon \nabla_\mathbf{x} \log p(\mathbf{x}_i) + \sqrt{2\epsilon} \mathbf{z}_i,
# \end{align*}
# where $\mathbf{z}_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, $\epsilon > 0$ is the step size, and $\mathbf{x}_1$ is initialized from any prior distribution $\pi(\mathbf{x}_1)$. When $N\to\infty$ and $\epsilon \to 0$, the final value $\mathbf{x}_{N+1}$ becomes a sample from $p(\mathbf{x})$ under some regularity conditions. Therefore, given $s_\theta(\mathbf{x}, t) \approx \nabla_\mathbf{x} \log p_t(\mathbf{x})$, we can get an approximate sample from $p_t(\mathbf{x})$ by running several steps of Langevin MCMC, replacing $\nabla_\mathbf{x} \log p_t(\mathbf{x})$ with $s_\theta(\mathbf{x}, t)$ in the iteration rule.
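#
# A minimal sketch of Langevin MCMC on a toy target whose score is known analytically (illustrative values; the sampler below uses the learned score $s_\theta$ instead):

```python
import torch

# Langevin MCMC targeting N(mu, 1), whose score is d/dx log p(x) = mu - x.
# With a small fixed step size, the iterates approximately sample the target.
mu, eps, n_steps = 3.0, 0.01, 2000
x = torch.zeros(10000)  # initialize all chains from an arbitrary point mass
for _ in range(n_steps):
    score = mu - x                                            # analytic score
    x = x + eps * score + (2 * eps) ** 0.5 * torch.randn_like(x)
print(x.mean().item(), x.var().item())  # close to mu = 3.0 and variance 1.0
```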
#
# Predictor-Corrector samplers combine numerical solvers for the reverse-time SDE with the Langevin MCMC approach. In particular, we first apply one step of the numerical SDE solver to obtain $\mathbf{x}_{t-\Delta t}$ from $\mathbf{x}_t$, which is called the "predictor" step. Next, we apply several steps of Langevin MCMC to refine $\mathbf{x}_{t-\Delta t}$, so that it becomes a more accurate sample from $p_{t-\Delta t}(\mathbf{x})$. This is the "corrector" step, as the MCMC helps reduce the error of the numerical SDE solver.
# + [markdown] id="0PdMMadpUbrj"
# ## Sampling with Numerical ODE Solvers
#
# For any SDE of the form
# \begin{align*}
# d \mathbf{x} = \mathbf{f}(\mathbf{x}, t) d t + g(t) d \mathbf{w},
# \end{align*}
# there exists an associated ordinary differential equation (ODE)
# \begin{align*}
# d \mathbf{x} = \bigg[\mathbf{f}(\mathbf{x}, t) - \frac{1}{2}g(t)^2 \nabla_\mathbf{x} \log p_t(\mathbf{x})\bigg] dt,
# \end{align*}
# such that their trajectories have the same marginal probability density $p_t(\mathbf{x})$. We call this ODE the *probability flow ODE*.
#
# Therefore, we can start from a sample from $p_T$, integrate the ODE in the reverse time direction, and then get a sample from $p_0 = p_\text{data}$. In particular, for our chosen forward SDE, we can integrate the following ODE from $t=T$ to $0$ for sample generation
# \begin{align*}
# d\mathbf{x} = -\frac{1}{2}\frac{d[\sigma^2(t)]}{d t} s_\theta(\mathbf{x}, t) dt.
# \end{align*}
# This can be done using many heavily-optimized black-box ODE solvers provided by packages such as `scipy`.
# + id="6FxBTOSSH2QR" cellView="form"
#@title SDE sampling (double click to expand or collapse)
## The number of sampling steps.
num_steps = 500 #@param {'type':'integer'}

def sde_sampler(score_model, noise_scale, batch_size=64, num_steps=num_steps, device='cuda'):
  """
  Generate samples from score-based models with numerical SDE solvers.
  score_model: a PyTorch model that represents the time-dependent score-based model.
  noise_scale: a function that returns a tuple (the standard deviation of
    p_{0t}(x(t) | x(0)), the gradient of sigma^2(t)), i.e., the grad=True version.
  batch_size: the number of samples to generate by calling this function once.
  num_steps: the number of sampling steps, equivalent to the number of discretized time steps.
  device: 'cuda' for running on GPUs, and 'cpu' for running on CPUs.
  """
  t = torch.ones(batch_size, device=device)
  init_x = torch.randn(batch_size, 1, 28, 28, device=device) * noise_scale(t)[0][:, None, None, None]
  time_steps = np.linspace(1., 1e-3, num_steps)
  step_size = time_steps[0] - time_steps[1]
  x = init_x
  with torch.no_grad():
    for time_step in tqdm.notebook.tqdm(time_steps):
      batch_time_step = torch.ones(batch_size, device=device) * time_step
      # One Euler-Maruyama step of the reverse-time SDE.
      next_x = x + noise_scale(batch_time_step)[1][:, None, None, None] * score_model(x, batch_time_step) * step_size
      next_x = next_x + torch.sqrt(noise_scale(batch_time_step)[1] * step_size)[:, None, None, None] * torch.randn_like(x)
      x = next_x
  return x
# + id="qW1HaPZb9gDM" cellView="form"
#@title PC sampling (double click to expand or collapse)
signal_to_noise_ratio = 0.15 #@param {'type':'number'}
## The number of sampling steps.
num_steps = 500 #@param {'type':'integer'}
def pc_sampler(score_model, noise_scale, batch_size=64, num_steps=num_steps, snr=signal_to_noise_ratio, device='cuda'):
    """
    Generate samples from score-based models with the Predictor-Corrector method.
    score_model: a PyTorch model that represents the time-dependent score-based model.
    noise_scale: a function that returns a tuple: (the standard deviation of p_{0t}(x(t) | x(0)),
        the gradient of sigma^2(t)).
    batch_size: the number of samples to generate by calling this function once.
    num_steps: the number of sampling steps, equivalent to the number of discretized time steps.
    snr: the signal-to-noise ratio that controls the Langevin corrector step size.
    device: 'cuda' for running on GPUs, and 'cpu' for running on CPUs.
    """
    t = torch.ones(batch_size, device=device)
    init_x = torch.randn(batch_size, 1, 28, 28, device=device) * noise_scale(t)[0][:, None, None, None]
    time_steps = np.linspace(1., 1e-3, num_steps)
    step_size = time_steps[0] - time_steps[1]
    x = init_x
    with torch.no_grad():
        for time_step in tqdm.notebook.tqdm(time_steps):
            # Predictor step (Euler-Maruyama discretization of the reverse-time SDE)
            batch_time_step = torch.ones(batch_size, device=device) * time_step
            next_x = x + noise_scale(batch_time_step)[1][:, None, None, None] * score_model(x, batch_time_step) * step_size
            next_x = next_x + torch.sqrt(noise_scale(batch_time_step)[1] * step_size)[:, None, None, None] * torch.randn_like(x)
            x = next_x
            # Corrector step (Langevin MCMC)
            grad = score_model(x, batch_time_step)
            grad_norm = torch.norm(grad.reshape(grad.shape[0], -1), dim=-1).mean()
            noise_norm = np.sqrt(np.prod(x.shape[1:]))
            langevin_step_size = 2 * (snr * noise_norm / grad_norm)**2
            x = x + langevin_step_size * grad + torch.sqrt(2 * langevin_step_size) * torch.randn_like(x)
    return x
# + id="nxrCTFM8CfDN" cellView="form"
#@title ODE sampling (double click to expand or collapse)
from scipy import integrate
## The error tolerance for the black-box ODE solver
error_tolerance = 1e-5 #@param {'type': 'number'}
def ode_sampler(score_model, noise_scale, batch_size=64, atol=error_tolerance, rtol=error_tolerance, device='cuda', z=None):
    """
    Generate samples from score-based models with black-box ODE solvers.
    score_model: a PyTorch model that represents the time-dependent score-based model.
    noise_scale: a function that returns a tuple: (the standard deviation of p_{0t}(x(t) | x(0)),
        the gradient of sigma^2(t)).
    batch_size: the number of samples to generate by calling this function once.
    atol: tolerance of absolute errors.
    rtol: tolerance of relative errors.
    device: 'cuda' for running on GPUs, and 'cpu' for running on CPUs.
    z: the latent code that governs the final sample. If None, we start from p_1;
        otherwise, we start from the given z.
    """
    t = torch.ones(batch_size, device=device)
    # Create the latent code
    if z is None:
        init_x = torch.randn(batch_size, 1, 28, 28, device=device) * noise_scale(t)[0][:, None, None, None]
    else:
        init_x = z
    shape = init_x.shape

    def score_eval_wrapper(sample, time_steps):
        """A wrapper of the score-based model for use by the ODE solver."""
        sample = torch.tensor(sample, device=device, dtype=torch.float32).reshape(shape)
        time_steps = torch.tensor(time_steps, device=device, dtype=torch.float32).reshape((sample.shape[0],))
        with torch.no_grad():
            score = score_model(sample, time_steps)
        return score.cpu().numpy().reshape((-1,)).astype(np.float64)

    def ode_func(t, x):
        """The ODE function for use by the ODE solver."""
        time_steps = np.ones((shape[0],)) * t
        return -0.5 * noise_scale(torch.tensor(t))[1].cpu().numpy() * score_eval_wrapper(x, time_steps)

    # Run the black-box ODE solver.
    res = integrate.solve_ivp(ode_func, (1., 1e-2), init_x.reshape(-1).cpu().numpy(), rtol=rtol, atol=atol, method='RK45')
    print(f"Number of function evaluations: {res.nfev}")
    x = torch.tensor(res.y[:, -1], device=device).reshape(shape)
    return x
# + id="kKoAPnr7Pf2B" cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 406, "referenced_widgets": ["c1e1227cc8d44316821664d029c57882", "7c724f34f51046e9834bb8fb57c96eee", "b3b89343cba94bb6963373cce263bd54", "a4ac82b6a6ee42c8b79e9ad41ae5d3da", "ffb57f2e0fca4d779bc3aa5f9a69f389", "270c5deca80c4f51aa0becf90f6200d6", "6bb4eb1ae9ec46e0affc59ba7bee2480", "e9e4674f63e2417a964bf6c5dd862b90"]} outputId="65a852aa-190a-4ecb-d0cc-fc570bd6ea77"
#@title Sampling (double click to expand or collapse)
from torchvision.utils import make_grid
## Load the pre-trained checkpoint from disk.
device = 'cuda' #@param ['cuda', 'cpu'] {'type':'string'}
ckpt = torch.load('ckpt.pth', map_location=device)
score_model.load_state_dict(ckpt)
noise_scale_func = functools.partial(noise_scale, sigma_min=sigma_min, sigma_max=sigma_max, grad=True)
sample_batch_size = 64 #@param {'type':'integer'}
sampler = pc_sampler #@param ['sde_sampler', 'pc_sampler', 'ode_sampler'] {'type': 'raw'}
## Generate samples using the specified sampler.
samples = sampler(score_model, noise_scale_func, sample_batch_size, device=device)
## Sample visualization.
samples = samples.clamp(0.0, 1.0)
import matplotlib.pyplot as plt
sample_grid = make_grid(samples, nrow=int(np.sqrt(sample_batch_size)))
plt.figure(figsize=(6,6))
plt.axis('off')
plt.imshow(sample_grid.permute(1, 2, 0).cpu(), vmin=0., vmax=1.)
plt.show()
# + [markdown] id="yC49nk6ZXqOS"
# ## Likelihood Computation
#
# A by-product of the probability flow ODE formulation is likelihood computation. Suppose we have a differentiable one-to-one mapping $\mathbf{h}$ that transforms a data sample $\mathbf{x} \sim p_0$ to a prior distribution $\mathbf{h}(\mathbf{x}) \sim p_1$. We can compute the likelihood of $p_0(\mathbf{x})$ via the following [change-of-variable formula](https://en.wikipedia.org/wiki/Probability_density_function#Function_of_random_variables_and_change_of_variables_in_the_probability_density_function)
# \begin{align*}
# p_0(\mathbf{x}) = p_1(\mathbf{h}(\mathbf{x})) |\operatorname{det}(J_\mathbf{h}(\mathbf{x}))|,
# \end{align*}
# where $J_\mathbf{h}(\mathbf{x})$ represents the Jacobian of the mapping $\mathbf{h}$, and we assume it is efficient to evaluate the likelihood of the prior distribution $p_1$.
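# As a one-dimensional sketch of this formula, take $\mathbf{h}(x) = (x - \mu)/\sigma$, which maps $\mathcal{N}(\mu, \sigma^2)$ to the standard normal; the change-of-variable identity then reproduces the density of $\mathcal{N}(\mu, \sigma^2)$ exactly:

```python
from scipy.stats import norm

mu, sigma = 2.0, 1.5
h = lambda x: (x - mu) / sigma   # one-to-one map sending N(mu, sigma^2) to N(0, 1)
det_jac = 1.0 / sigma            # |det J_h(x)| for this scalar map

x = 0.7
p0_direct = norm.pdf(x, loc=mu, scale=sigma)   # p_0(x) evaluated directly
p0_change_of_var = norm.pdf(h(x)) * det_jac    # p_1(h(x)) |det J_h(x)|
print(p0_direct, p0_change_of_var)             # identical up to floating point
```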
#
# Similarly, an ODE also defines a one-to-one mapping from $\mathbf{x}(0)$ to $\mathbf{x}(1)$. For ODEs of the form
# \begin{align*}
# d \mathbf{x} = \mathbf{f}(\mathbf{x}, t) dt,
# \end{align*}
# there exists an [instantaneous change-of-variable formula](https://arxiv.org/abs/1806.07366) that connects the probability of $p_0(\mathbf{x})$ and $p_1(\mathbf{x})$, given by
# \begin{align*}
# p_0 (\mathbf{x}(0)) = e^{\int_0^1 \operatorname{div} \mathbf{f}(\mathbf{x}(t), t) d t} p_1(\mathbf{x}(1)),
# \end{align*}
# where $\operatorname{div}$ denotes the divergence function (trace of Jacobian).
#
# In practice, this divergence can be hard to evaluate for a general vector-valued function $\mathbf{f}$, but we can use an unbiased estimator, known as the [Skilling-Hutchinson estimator](http://blog.shakirm.com/2015/09/machine-learning-trick-of-the-day-3-hutchinsons-trick/), to approximate the trace. Let $\boldsymbol \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. The Skilling-Hutchinson estimator is based on the fact that
# \begin{align*}
# \operatorname{div} \mathbf{f}(\mathbf{x}) = \mathbb{E}_{\boldsymbol\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})}[\boldsymbol\epsilon^\intercal J_\mathbf{f}(\mathbf{x}) \boldsymbol\epsilon].
# \end{align*}
# Therefore, we can simply sample a random vector $\boldsymbol \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, and then use $\boldsymbol \epsilon^\intercal J_\mathbf{f}(\mathbf{x}) \boldsymbol \epsilon$ to estimate the divergence of $\mathbf{f}(\mathbf{x})$. This estimator only requires computing the Jacobian-vector product $J_\mathbf{f}(\mathbf{x})\boldsymbol \epsilon$, which is typically efficient.
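# A minimal numerical sketch of this trick: for a linear map $\mathbf{f}(\mathbf{x}) = A\mathbf{x}$ the Jacobian is $A$, so the estimator's Monte Carlo average should converge to $\operatorname{tr}(A)$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))       # Jacobian of the linear map f(x) = A @ x

# Skilling-Hutchinson: E[eps^T J eps] = trace(J) for eps ~ N(0, I)
n_samples = 200_000
eps = rng.standard_normal((n_samples, d))
estimates = np.einsum('ni,ij,nj->n', eps, A, eps)

print(np.trace(A), estimates.mean())  # the Monte Carlo mean is close to the exact trace
```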
#
# As a result, for our probability flow ODE, we can compute the (log) data likelihood with the following
# \begin{align*}
# \log p_0(\mathbf{x}(0)) = \log p_1(\mathbf{x}(1)) -\frac{1}{2}\int_0^1 \frac{d[\sigma^2(t)]}{dt} \operatorname{div} s_\theta(\mathbf{x}(t), t) dt.
# \end{align*}
# With the Skilling-Hutchinson estimator, we can compute the divergence via
# \begin{align*}
# \operatorname{div} s_\theta(\mathbf{x}(t), t) = \mathbb{E}_{\boldsymbol\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})}[\boldsymbol\epsilon^\intercal J_{s_\theta}(\mathbf{x}(t), t) \boldsymbol\epsilon].
# \end{align*}
# Afterwards, we can compute the integral with numerical integrators. This gives us an unbiased estimate of the true data likelihood, which we can make increasingly accurate by running the estimator multiple times and averaging the results. The numerical integrator requires $\mathbf{x}(t)$ as a function of $t$, which can be obtained by solving the original probability flow ODE.
# + id="DfOkg5jBZcjF" cellView="form"
#@title Likelihood function (double click to expand or collapse)
def prior_likelihood(z, sigma):
    """The likelihood of a Gaussian distribution with mean zero and
    standard deviation sigma."""
    shape = z.shape
    N = np.prod(shape[1:])
    return -N / 2. * torch.log(2 * np.pi * sigma**2) - torch.sum(z**2, dim=(1, 2, 3)) / (2 * sigma**2)

def ode_likelihood(x, score_model, noise_scale, batch_size=64, device='cuda'):
    """Compute the likelihood (in bits/dim) of input data with the probability flow ODE."""
    # Draw the random Gaussian sample for Skilling-Hutchinson's estimator.
    epsilon = torch.randn_like(x)

    def divergence_eval(sample, time_steps, epsilon):
        """Compute the divergence of the score-based model with Skilling-Hutchinson."""
        with torch.enable_grad():
            sample.requires_grad_(True)
            score_e = torch.sum(score_model(sample, time_steps) * epsilon)
            grad_score_e = torch.autograd.grad(score_e, sample)[0]
        return torch.sum(grad_score_e * epsilon, dim=(1, 2, 3))

    shape = x.shape

    def score_eval_wrapper(sample, time_steps):
        """A wrapper for evaluating the score-based model for the black-box ODE solver."""
        sample = torch.tensor(sample, device=device, dtype=torch.float32).reshape(shape)
        time_steps = torch.tensor(time_steps, device=device, dtype=torch.float32).reshape((sample.shape[0],))
        with torch.no_grad():
            score = score_model(sample, time_steps)
        return score.cpu().numpy().reshape((-1,)).astype(np.float64)

    def divergence_eval_wrapper(sample, time_steps):
        """A wrapper for evaluating the divergence of the score for the black-box ODE solver."""
        with torch.no_grad():
            # Obtain x(t) by solving the probability flow ODE.
            sample = torch.tensor(sample, device=device, dtype=torch.float32).reshape(shape)
            time_steps = torch.tensor(time_steps, device=device, dtype=torch.float32).reshape((sample.shape[0],))
            # Compute the divergence estimate.
            div = divergence_eval(sample, time_steps, epsilon)
        return div.cpu().numpy().reshape((-1,)).astype(np.float64)

    def ode_func(t, x):
        """The ODE function for the black-box solver."""
        time_steps = np.ones((shape[0],)) * t
        sample = x[:-shape[0]]
        logp = x[-shape[0]:]
        sample_grad = -0.5 * noise_scale(torch.tensor(t))[1].cpu().numpy() * score_eval_wrapper(sample, time_steps)
        logp_grad = -0.5 * noise_scale(torch.tensor(t))[1].cpu().numpy() * divergence_eval_wrapper(sample, time_steps)
        return np.concatenate([sample_grad, logp_grad], axis=0)

    init = np.concatenate([x.cpu().numpy().reshape((-1,)), np.zeros((shape[0],))], axis=0)
    # Black-box ODE solver
    res = integrate.solve_ivp(ode_func, (1e-3, 1.), init, rtol=1e-5, atol=1e-5, method='RK45')
    zp = torch.tensor(res.y[:, -1], device=device)
    z = zp[:-shape[0]].reshape(shape)
    delta_logp = zp[-shape[0]:].reshape(shape[0])
    sigma_max = noise_scale(torch.ones((), device=device))[0]
    prior_logp = prior_likelihood(z, sigma_max)
    bpd = -(prior_logp + delta_logp) / np.log(2)
    N = np.prod(shape[1:])
    bpd = bpd / N + 8.
    return z, bpd
# + id="0H1Rq5DTmW8o" cellView="form" colab={"base_uri": "https://localhost:8080/", "height": 1000, "referenced_widgets": ["c86a5c4f4d9a4ad3adae643552c926ea", "d3fb8b9a47144b46b4ecbb0d2aa2925d", "1f3a2788d70e46019065eadea58731be", "14a60a85e2394f6d819aa951c714efbc", "585063e99cc0438aa5f94832589d2a2f", "fbc926ebdd6f4536b1511ed9217be5e3", "4f43a6b6a34e47c48b7c151eae6ac8c3", "3911dc75f8b54e16bd08e1b741541600"]} outputId="a507e53c-450c-4c7f-9425-98425c0828fa"
#@title Computing likelihood on the dataset (double click to expand or collapse)
device = 'cuda' #@param ['cuda', 'cpu'] {'type':'string'}
ckpt = torch.load('ckpt.pth', map_location=device)
score_model.load_state_dict(ckpt)
noise_scale_func = functools.partial(noise_scale, sigma_min=sigma_min, sigma_max=sigma_max, grad=True)
all_bpds = 0.
all_items = 0
try:
    for x, _ in tqdm.notebook.tqdm(data_loader):
        x = x.to(device)
        # uniform dequantization
        x = (x * 255. + torch.rand_like(x)) / 256.
        _, bpd = ode_likelihood(x, score_model, noise_scale_func, x.shape[0], device=device)
        all_bpds += bpd.sum()
        all_items += bpd.shape[0]
        print(f"bpd (running average): {all_bpds / all_items}")
    print(f"bpd (full average): {all_bpds / all_items}")
except KeyboardInterrupt:
    # Suppress the error message when interrupted from the keyboard or GUI.
    pass
# + [markdown] id="mHsx75Yft-6u"
# ## Further Resources
#
# If you're interested in learning more about score-based generative models, the following papers would be a good start:
#
# * <NAME>, and <NAME>. "[Generative modeling by estimating gradients of the data distribution.](https://arxiv.org/pdf/1907.05600.pdf)" Advances in Neural Information Processing Systems. 2019.
# * <NAME>, and <NAME>. "[Improved Techniques for Training Score-Based Generative Models.](https://arxiv.org/pdf/2006.09011.pdf)" Advances in Neural Information Processing Systems. 2020.
# * <NAME>, <NAME>, and <NAME>. "[Denoising diffusion probabilistic models.](https://arxiv.org/pdf/2006.11239.pdf)" Advances in Neural Information Processing Systems. 2020.
| Course 2/C2W2_(Optional_Notebook)_Score_Based_Generative_Modeling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
movies=pd.read_csv('tmdb_5000_movies.csv')
credits=pd.read_csv('tmdb_5000_credits.csv')
# +
movies.head(1)
# -
movies=movies.merge(credits, on='title')
movies.head(1)
# + active=""
# # This is a content-based recommender system, so we are creating tags; we keep only the columns that help create those tags.
# +
# genres
# id
# keywords
# title
# overview
# cast
# crew
movies=movies[['movie_id','title','overview','genres','keywords','cast','crew']]
# -
movies.head()
movies.isnull().sum()
movies.dropna(inplace=True)
movies.duplicated().sum()
movies.iloc[0].genres
import ast
# + active=""
# # ast.literal_eval is useful when you expect a Python literal (such as a list) stored as a string. For example, the string '[1,2]' is converted to the list [1, 2].
# -
def convert(obj):
    L = []
    for i in ast.literal_eval(obj):
        L.append(i['name'])
    return L
movies['genres']=movies['genres'].apply(convert)
movies.head()
movies['keywords']=movies['keywords'].apply(convert)
movies.head(1)
def convert3(obj):
    L = []
    counter = 0
    for i in ast.literal_eval(obj):
        if counter != 3:
            L.append(i['name'])
            counter += 1
        else:
            break
    return L
movies['cast']=movies['cast'].apply(convert3)
movies.head(1)
movies['crew'][0]
def fetch_director(obj):
    L = []
    for i in ast.literal_eval(obj):
        if i['job'] == 'Director':
            L.append(i['name'])
            break
    return L
movies['crew']=movies['crew'].apply(fetch_director)
movies.head(1)
movies['overview']=movies['overview'].apply(lambda x:x.split())
movies.head(1)
# +
#removing spaces inside multi-word names so each becomes a single token
movies['genres']=movies['genres'].apply(lambda x:[i.replace(" ","")for i in x])
movies['keywords']=movies['keywords'].apply(lambda x:[i.replace(" ","")for i in x])
movies['cast']=movies['cast'].apply(lambda x:[i.replace(" ","")for i in x])
movies['crew']=movies['crew'].apply(lambda x:[i.replace(" ","")for i in x])
# -
movies.head(5)
movies['tags']=movies['overview']+movies['genres']+movies['keywords']+movies['cast']+movies['crew']
movies.head(2)
new_df = movies[['movie_id','title','tags']].copy()  # .copy() avoids pandas SettingWithCopyWarning in the assignments below
new_df
#converting the list of tags into a single string
new_df['tags']=new_df['tags'].apply(lambda x:" ".join(x))
new_df['tags'][0]
new_df['tags']=new_df['tags'].apply(lambda x:x.lower())
new_df.head(2)
import nltk
from nltk.stem.porter import PorterStemmer
ps=PorterStemmer()
def stem(text):
    y = []
    for i in text.split():
        y.append(ps.stem(i))
    return " ".join(y)
new_df['tags']=new_df['tags'].apply(stem)
from sklearn.feature_extraction.text import CountVectorizer
cv= CountVectorizer(max_features=5000,stop_words='english')
vectors=cv.fit_transform(new_df['tags']).toarray()
vectors
vectors[0]
cv.get_feature_names_out()  # get_feature_names() was removed in newer scikit-learn releases
from sklearn.metrics.pairwise import cosine_similarity
similarity=cosine_similarity(vectors)
similarity[0]
#step-1
# We want the 5 most similar movies, but we can't sort similarity[0] directly because sorting would lose each score's index position.
# So we use the enumerate function, which creates a list of (index, score) tuples.
list(enumerate(similarity[0]))
#step-2
# We sort by the second element of each tuple (that's why we use key=lambda x: x[1]); after sorting we select the 5 most similar movies, skipping index 0 (the movie itself).
sorted(list(enumerate(similarity[0])),reverse=True , key=lambda x:x[1])[1:6]
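# To sanity-check the enumerate-and-sort pattern above, run it on a toy similarity row:

```python
# Toy similarity row: index 0 is the query movie itself (similarity 1.0)
sims = [1.0, 0.2, 0.9, 0.5]
ranked = sorted(enumerate(sims), key=lambda pair: pair[1], reverse=True)
top = ranked[1:3]  # skip index 0 (the query movie), keep the next most similar
print(top)         # [(2, 0.9), (3, 0.5)] - the indices survive the sort
```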
# +
#using step-1 and step-2 in this function
def recommend(movie):
    movie_index = new_df[new_df['title'] == movie].index[0]
    distances = similarity[movie_index]
    movies_list = sorted(list(enumerate(distances)), reverse=True, key=lambda x: x[1])[1:6]
    for i in movies_list:
        print(new_df.iloc[i[0]].title)
# -
recommend('Avatar')
new_df.iloc[1216].title
import pickle
pickle.dump(new_df,open('movie.pkl','wb'))
# +
#for some reason the pickled DataFrame cannot be loaded from Streamlit,
#so we pickle the DataFrame as a dictionary instead
pickle.dump(new_df.to_dict(),open('movie_dict.pkl','wb'))
# -
pickle.dump(similarity,open('similarity.pkl','wb'))
| Movie_Recommender_System.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="../../images/banners/python-advanced.png" width="600"/>
# # <img src="../../images/logos/python.png" width="23"/> Data Classes in Python 3.7+
#
# ## <img src="../../images/logos/toc.png" width="20"/> Table of Contents
# * [Alternatives to Data Classes](#alternatives_to_data_classes)
# * [Basic Data Classes](#basic_data_classes)
# * [Default Values](#default_values)
# * [Type Hints](#type_hints)
# * [Adding Methods](#adding_methods)
# * [More Flexible Data Classes](#more_flexible_data_classes)
# * [Advanced Default Values](#advanced_default_values)
# * [You Need Representation?](#you_need_representation?)
# * [Comparing Cards](#comparing_cards)
# * [Immutable Data Classes](#immutable_data_classes)
# * [Inheritance](#inheritance)
# * [Optimizing Data Classes](#optimizing_data_classes)
# * [Conclusion & Further Reading](#conclusion_&_further_reading)
#
# ---
# One [new and exciting feature coming in Python 3.7](https://realpython.com/python37-new-features/) is the data class. A data class is a class typically containing mainly data, although there aren’t really any restrictions. It is created using the new `@dataclass` decorator, as follows:
# +
from dataclasses import dataclass
@dataclass
class DataClassCard:
    rank: str
    suit: str
# -
# A data class comes with basic functionality already implemented. For instance, you can instantiate, print, and compare data class instances straight out of the box:
queen_of_hearts = DataClassCard('Q', 'Hearts')
queen_of_hearts.rank
queen_of_hearts
queen_of_hearts == DataClassCard('Q', 'Hearts')
# Compare that to a regular class. A minimal regular class would look something like this:
class RegularCard:
    def __init__(self, rank, suit):
        self.rank = rank
        self.suit = suit
# While this is not much more code to write, you can already see signs of the boilerplate pain: `rank` and `suit` are both repeated three times simply to initialize an object. Furthermore, if you try to use this plain class, you’ll notice that the representation of the objects is not very descriptive, and for some reason a queen of hearts is not the same as a queen of hearts:
queen_of_hearts = RegularCard('Q', 'Hearts')
queen_of_hearts.rank
queen_of_hearts
queen_of_hearts == RegularCard('Q', 'Hearts')
# Seems like data classes are helping us out behind the scenes. By default, data classes implement a [`.__repr__()` method](https://realpython.com/operator-function-overloading/) to provide a nice string representation and an `.__eq__()` method that can do basic object comparisons. For the `RegularCard` class to imitate the data class above, you need to add these methods as well:
class RegularCard:
    def __init__(self, rank, suit):
        self.rank = rank
        self.suit = suit

    def __repr__(self):
        return (f'{self.__class__.__name__}'
                f'(rank={self.rank!r}, suit={self.suit!r})')

    def __eq__(self, other):
        if other.__class__ is not self.__class__:
            return NotImplemented
        return (self.rank, self.suit) == (other.rank, other.suit)
# In this tutorial, you will learn exactly which conveniences data classes provide. In addition to nice representations and comparisons, you’ll see:
# - How to add default values to data class fields
# - How data classes allow for ordering of objects
# - How to represent immutable data
# - How data classes handle inheritance
#
# We will soon dive deeper into those features of data classes. However, you might be thinking that you have already seen something like this before.
# <a class="anchor" id="alternatives_to_data_classes"></a>
#
# ## Alternatives to Data Classes
# For simple data structures, you have probably already used [a `tuple` or a `dict`](https://dbader.org/blog/records-structs-and-data-transfer-objects-in-python). You could represent the queen of hearts card in either of the following ways:
queen_of_hearts_tuple = ('Q', 'Hearts')
queen_of_hearts_dict = {'rank': 'Q', 'suit': 'Hearts'}
# It works. However, it puts a lot of responsibility on you as a programmer:
# - You need to remember that the `queen_of_hearts_...` [variable](https://realpython.com/python-variables/) represents a card.
# - For the `tuple` version, you need to remember the order of the attributes. Writing `('Spades', 'A')` will mess up your program but probably not give you an easily understandable error message.
# - If you use the `dict` version, you must make sure the names of the attributes are consistent. For instance `{'value': 'A', 'suit': 'Spades'}` will not work as expected.
#
# Furthermore, using these structures is not ideal:
queen_of_hearts_tuple[0] # No named access
queen_of_hearts_dict['suit'] # Would be nicer with .suit
# A better alternative is the [`namedtuple`](https://dbader.org/blog/writing-clean-python-with-namedtuples). It has long been used to create readable small data structures. We can in fact recreate the data class example above using a `namedtuple` like this:
# +
from collections import namedtuple
NamedTupleCard = namedtuple('NamedTupleCard', ['rank', 'suit'])
# -
# This definition of `NamedTupleCard` will give the exact same output as our `DataClassCard` example did:
queen_of_hearts = NamedTupleCard('Q', 'Hearts')
queen_of_hearts.rank
queen_of_hearts
queen_of_hearts == NamedTupleCard('Q', 'Hearts')
# So why even bother with data classes? First of all, data classes come with many more features than you have seen so far. At the same time, the `namedtuple` has some other features that are not necessarily desirable. By design, a `namedtuple` is a regular tuple. This can be seen in comparisons, for instance:
queen_of_hearts == ('Q', 'Hearts')
# While this might seem like a good thing, this lack of awareness about its own type can lead to subtle and hard-to-find bugs, especially since it will also happily compare two different `namedtuple` classes:
Person = namedtuple('Person', ['first_initial', 'last_name'])
ace_of_spades = NamedTupleCard('A', 'Spades')
ace_of_spades == Person('A', 'Spades')
# The `namedtuple` also comes with some restrictions. For instance, it is hard to add default values to some of the fields in a `namedtuple`. A `namedtuple` is also by nature immutable. That is, the value of a `namedtuple` can never change. In some applications, this is an awesome feature, but in other settings, it would be nice to have more flexibility:
card = NamedTupleCard('7', 'Diamonds')
card.rank = '9'  # AttributeError: can't set attribute
# Data classes will not replace all uses of `namedtuple`. For instance, if you need your data structure to behave like a tuple, then a named tuple is a great alternative!
# Another alternative, and one of the [inspirations for data classes](https://mail.python.org/pipermail/python-dev/2017-December/151034.html), is the [`attrs` project](http://www.attrs.org). With `attrs` installed (`pip install attrs`), you can write a card class as follows:
# +
import attr
@attr.s
class AttrsCard:
    rank = attr.ib()
    suit = attr.ib()
# -
# This can be used in exactly the same way as the `DataClassCard` and `NamedTupleCard` examples earlier. The `attrs` project is great and does support some features that data classes do not, including converters and validators. Furthermore, `attrs` has been around for a while and is supported in Python 2.7 as well as Python 3.4 and up. However, as `attrs` is not a part of the standard library, it does add an external [dependency](https://realpython.com/courses/managing-python-dependencies/) to your projects. Through data classes, similar functionality will be available everywhere.
# In addition to `tuple`, `dict`, `namedtuple`, and `attrs`, there are [many other similar projects](https://www.python.org/dev/peps/pep-0557/#rationale), including [`typing.NamedTuple`](https://docs.python.org/library/typing.html#typing.NamedTuple), [`namedlist`](https://pypi.org/project/namedlist/), [`attrdict`](https://pypi.org/project/attrdict/), [`plumber`](https://pypi.org/project/plumber/), and [`fields`](https://pypi.org/project/fields/). While data classes are a great new alternative, there are still use cases where one of the older variants fits better. For instance, if you need compatibility with a specific API expecting tuples or need functionality not supported in data classes.
# <a class="anchor" id="basic_data_classes"></a>
#
# ## Basic Data Classes
# Let us get back to data classes. As an example, we will create a `Position` class that will represent geographic positions with a name as well as the latitude and longitude:
# +
from dataclasses import dataclass
@dataclass
class Position:
    name: str
    lon: float
    lat: float
# -
# What makes this a data class is the [`@dataclass` decorator](https://realpython.com/primer-on-python-decorators/) just above the class definition. Beneath the `class Position:` line, you simply list the fields you want in your data class. The `:` notation used for the fields is using a new feature in Python 3.6 called [variable annotations](https://www.python.org/dev/peps/pep-0526/) which was discussed in Python Type Checking section.
# Those few lines of code are all you need. The new class is ready for use:
pos = Position('Oslo', 10.8, 59.9)
print(pos)
pos.lat
print(f'{pos.name} is at {pos.lat}°N, {pos.lon}°E')
# You can also create data classes similarly to how named tuples are created. The following is (almost) equivalent to the definition of `Position` above:
# +
from dataclasses import make_dataclass
Position = make_dataclass('Position', ['name', 'lat', 'lon'])
# -
# A data class is a regular Python class. The only thing that sets it apart is that it has basic [data model methods](https://docs.python.org/reference/datamodel.html#basic-customization) like `.__init__()`, `.__repr__()`, and `.__eq__()` implemented for you.
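# Because a data class is a regular class, the helper functions in the `dataclasses` module work on it too; a small sketch (reusing the `Position` example):

```python
from dataclasses import dataclass, fields, asdict

@dataclass
class Position:
    name: str
    lon: float
    lat: float

pos = Position('Oslo', 10.8, 59.9)

# Introspect the generated fields and convert the instance to a plain dict:
print([f.name for f in fields(pos)])  # ['name', 'lon', 'lat']
print(asdict(pos))                    # {'name': 'Oslo', 'lon': 10.8, 'lat': 59.9}
```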
# <a class="anchor" id="default_values"></a>
#
# ### Default Values
# It is easy to add default values to the fields of your data class:
# +
from dataclasses import dataclass
@dataclass
class Position:
    name: str
    lon: float = 0.0
    lat: float = 0.0
# -
# This works exactly as if you had specified the default values in the definition of the `.__init__()` method of a regular class:
Position('Null Island')
Position('Greenwich', lat=51.8)
Position('Vancouver', -123.1, 49.3)
# [Later](#advanced-default-values) you will learn about `default_factory`, which gives a way to provide more complicated default values.
# <a class="anchor" id="type_hints"></a>
#
# ### Type Hints
# So far, we have not made a big fuss of the fact that data classes support [typing](https://realpython.com/python-type-checking/) out of the box. You have probably noticed that we defined the fields with a type hint: `name: str` says that `name` should be a [text string](https://realpython.com/python-strings/) (`str` type).
# In fact, adding some kind of type hint is mandatory when defining the fields in your data class. Without a type hint, the field will not be a part of the data class. However, if you do not want to add explicit types to your data class, use `typing.Any`:
# +
from dataclasses import dataclass
from typing import Any
@dataclass
class WithoutExplicitTypes:
    name: Any
    value: Any = 42
# -
# While you need to add type hints in some form when using data classes, these types are not enforced at runtime. The following code runs without any problems:
Position(3.14, 'pi day', 2018)
# This is how typing in Python usually works: [Python is and will always be a dynamically typed language](https://www.python.org/dev/peps/pep-0484/#non-goals). To actually catch type errors, type checkers like [Mypy](http://mypy-lang.org) can be run on your source code.
# <a class="anchor" id="adding_methods"></a>
#
# ### Adding Methods
# You already know that a data class is just a regular class. That means that you can freely add your own methods to a data class. As an example, let us calculate the distance between one position and another, along the Earth’s surface. One way to do this is by using [the haversine formula](https://en.wikipedia.org/wiki/Haversine_formula):
# <img src="images/data-classes-in-python-3.7+-(guide)/haversine_formula_150.fb2b87d122a4.png" width="600px">
# You can add a `.distance_to()` method to your data class just like you can with normal classes:
# +
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt
@dataclass
class Position:
    name: str
    lon: float = 0.0
    lat: float = 0.0

    def distance_to(self, other):
        r = 6371  # Earth radius in kilometers
        lam_1, lam_2 = radians(self.lon), radians(other.lon)
        phi_1, phi_2 = radians(self.lat), radians(other.lat)
        h = (sin((phi_2 - phi_1) / 2)**2
             + cos(phi_1) * cos(phi_2) * sin((lam_2 - lam_1) / 2)**2)
        return 2 * r * asin(sqrt(h))
# -
# It works as you would expect:
oslo = Position('Oslo', 10.8, 59.9)
vancouver = Position('Vancouver', -123.1, 49.3)
oslo.distance_to(vancouver)
# <a class="anchor" id="more_flexible_data_classes"></a>
#
# ## More Flexible Data Classes
# So far, you have seen some of the basic features of the data class: it gives you some convenience methods, and you can still add default values and other methods. Now you will learn about some more advanced features like parameters to the `@dataclass` decorator and the `field()` function. Together, they give you more control when creating a data class.
# Let us return to the playing card example you saw at the beginning of the tutorial and add a class containing a deck of cards while we are at it:
# +
from dataclasses import dataclass
from typing import List
@dataclass
class PlayingCard:
    rank: str
    suit: str

@dataclass
class Deck:
    cards: List[PlayingCard]
# -
# A simple deck containing only two cards can be created like this:
queen_of_hearts = PlayingCard('Q', 'Hearts')
ace_of_spades = PlayingCard('A', 'Spades')
two_cards = Deck([queen_of_hearts, ace_of_spades])
# <a class="anchor" id="advanced_default_values"></a>
#
# ### Advanced Default Values
# Say that you want to give a default value to the `Deck`. It would for example be convenient if `Deck()` created a [regular (French) deck](https://en.wikipedia.org/wiki/French_playing_cards) of 52 playing cards. First, specify the different ranks and suits. Then, add a function `make_french_deck()` that creates a [list](https://realpython.com/python-lists-tuples/) of instances of `PlayingCard`:
# +
RANKS = '2 3 4 5 6 7 8 9 10 J Q K A'.split()
SUITS = '♣ ♢ ♡ ♠'.split()
def make_french_deck():
return [PlayingCard(r, s) for s in SUITS for r in RANKS]
# -
# For fun, the four different suits are specified using their [Unicode symbols](https://en.wikipedia.org/wiki/Playing_cards_in_Unicode).
# To simplify comparisons of cards later, the ranks and suits are also listed in their usual order.
make_french_deck()
# In theory, you could now use this function to specify a default value for `Deck.cards`:
# +
from dataclasses import dataclass
from typing import List
@dataclass
class Deck: # Will NOT work
cards: List[PlayingCard] = make_french_deck()
# -
# Don’t do this! This introduces one of the most common anti-patterns in Python: [using mutable default arguments](http://docs.python-guide.org/en/latest/writing/gotchas/#mutable-default-arguments). The problem is that all instances of `Deck` will use the same list object as the default value of the `.cards` attribute. This means that if, say, one card is removed from one `Deck`, then it disappears from all other instances of `Deck` as well. Actually, data classes try to [prevent you from doing this](https://www.python.org/dev/peps/pep-0557/#mutable-default-values), and the code above will raise a `ValueError`.
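The same pitfall is easy to reproduce with a plain function, which is why the rule exists at all:

```python
def append_to(item, target=[]):  # anti-pattern: the default list is created only once
    target.append(item)
    return target

first = append_to(1)
second = append_to(2)
print(second)  # [1, 2] — both calls mutated the same shared default list
```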
# Instead, data classes use something called a `default_factory` to handle mutable default values. To use `default_factory` (and many other cool features of data classes), you need to use the `field()` specifier:
# +
from dataclasses import dataclass, field
from typing import List
@dataclass
class Deck:
cards: List[PlayingCard] = field(default_factory=make_french_deck)
# -
# The argument to `default_factory` can be any zero parameter callable. Now it is easy to create a full deck of playing cards:
Deck()
# The `field()` specifier is used to customize each field of a data class individually. You will see some other examples later. For reference, these are the parameters `field()` supports:
# - `default`: Default value of the field
# - `default_factory`: Function that returns the initial value of the field
# - `init`: Use field in `.__init__()` method? (Default is `True`.)
# - `repr`: Use field in `repr` of the object? (Default is `True`.)
# - `compare`: Include the field in comparisons? (Default is `True`.)
# - `hash`: Include the field when calculating `hash()`? (Default is to use the same as for `compare`.)
# - `metadata`: A mapping with information about the field
#
# In the `Position` example, you saw how to add simple default values by writing `lat: float = 0.0`. However, if you also want to customize the field, for instance to hide it in the `repr`, you need to use the `default` parameter: `lat: float = field(default=0.0, repr=False)`. You may not specify both `default` and `default_factory`.
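As a small illustration of that `default` parameter — here hiding `lat` from the `repr` while keeping its default value:

```python
from dataclasses import dataclass, field

@dataclass
class Position:
    name: str
    lon: float = 0.0
    lat: float = field(default=0.0, repr=False)  # keeps the default, hidden from repr

pos = Position('Oslo', 10.8, 59.9)
print(repr(pos))  # -> Position(name='Oslo', lon=10.8)
```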
# The `metadata` parameter is not used by the data classes themselves but is available for you (or third party packages) to attach information to fields. In the `Position` example, you could for instance specify that latitude and longitude should be given in degrees:
# +
from dataclasses import dataclass, field
@dataclass
class Position:
name: str
lon: float = field(default=0.0, metadata={'unit': 'degrees'})
lat: float = field(default=0.0, metadata={'unit': 'degrees'})
# -
# The metadata (and other information about a field) can be retrieved using the `fields()` function (note the plural *s*):
from dataclasses import fields
fields(Position)
lat_unit = fields(Position)[2].metadata['unit']
lat_unit
# <a class="anchor" id="you_need_representation?"></a>
#
# ### You Need Representation?
# Recall that we can create decks of cards out of thin air:
Deck()
# While this representation of a `Deck` is explicit and readable, it is also very verbose. I have deleted 48 of the 52 cards in the deck in the output above. On an 80-column display, simply printing the full `Deck` takes up 22 lines! Let us add a more concise representation. In general, a Python object has [two different string representations](https://dbader.org/blog/python-repr-vs-str):
# -
# `repr(obj)` is defined by `obj.__repr__()` and should return a developer-friendly representation of `obj`. If possible, this should be code that can recreate `obj`. Data classes do this.
#
#
#
# -
# `str(obj)` is defined by `obj.__str__()` and should return a user-friendly representation of `obj`. Data classes do not implement a `.__str__()` method, so Python will fall back to the `.__repr__()` method.
#
#
#
#
# Let us implement a user-friendly representation of a `PlayingCard`:
# +
from dataclasses import dataclass
@dataclass
class PlayingCard:
rank: str
suit: str
def __str__(self):
return f'{self.suit}{self.rank}'
# -
# The cards now look much nicer, but the deck is still as verbose as ever:
ace_of_spades = PlayingCard('A', '♠')
ace_of_spades
print(ace_of_spades)
print(Deck())
# To show that it is possible to add your own `.__repr__()` method as well, we will violate the principle that it should return code that can recreate an object. [Practicality beats purity](https://www.python.org/dev/peps/pep-0020/) after all. The following code adds a more concise representation of the `Deck`:
# +
from dataclasses import dataclass, field
from typing import List
@dataclass
class Deck:
cards: List[PlayingCard] = field(default_factory=make_french_deck)
def __repr__(self):
cards = ', '.join(f'{c!s}' for c in self.cards)
return f'{self.__class__.__name__}({cards})'
# -
# Note the `!s` specifier in the `{c!s}` format string. It means that we explicitly want to use the `str()` representation of each `PlayingCard`. With the new `.__repr__()`, the representation of `Deck` is easier on the eyes:
Deck()
# <a class="anchor" id="comparing_cards"></a>
#
# ### Comparing Cards
# In many card games, cards are compared to each other. For instance in a typical trick taking game, the highest card takes the trick. As it is currently implemented, the `PlayingCard` class does not support this kind of comparison:
queen_of_hearts = PlayingCard('Q', '♡')
ace_of_spades = PlayingCard('A', '♠')
ace_of_spades > queen_of_hearts
# This is, however, (seemingly) easy to rectify:
# +
from dataclasses import dataclass
@dataclass(order=True)
class PlayingCard:
rank: str
suit: str
def __str__(self):
return f'{self.suit}{self.rank}'
# -
# The `@dataclass` decorator has two forms. So far you have seen the simple form where `@dataclass` is specified without any parentheses and parameters. However, you can also give parameters to the `@dataclass()` decorator in parentheses. The following parameters are supported:
# - `init`: Add `.__init__()` method? (Default is `True`.)
# - `repr`: Add `.__repr__()` method? (Default is `True`.)
# - `eq`: Add `.__eq__()` method? (Default is `True`.)
# - `order`: Add ordering methods? (Default is `False`.)
# - `unsafe_hash`: Force the addition of a `.__hash__()` method? (Default is `False`.)
# - `frozen`: If `True`, assigning to fields raises an exception. (Default is `False`.)
#
# See [the original PEP](https://www.python.org/dev/peps/pep-0557/#id7) for more information about each parameter. After setting `order=True`, instances of `PlayingCard` can be compared:
queen_of_hearts = PlayingCard('Q', '♡')
ace_of_spades = PlayingCard('A', '♠')
ace_of_spades > queen_of_hearts
# How are the two cards compared though? You have not specified how the ordering should be done, and for some reason Python seems to believe that a Queen is higher than an Ace…
# It turns out that data classes compare objects as if they were tuples of their fields. In other words, a Queen is higher than an Ace because `'Q'` comes after `'A'` in the alphabet:
('A', '♠') > ('Q', '♡')
# That does not really work for us. Instead, we need to define some kind of sort index that uses the order of `RANKS` and `SUITS`. Something like this:
RANKS = '2 3 4 5 6 7 8 9 10 J Q K A'.split()
SUITS = '♣ ♢ ♡ ♠'.split()
card = PlayingCard('Q', '♡')
RANKS.index(card.rank) * len(SUITS) + SUITS.index(card.suit)
# For `PlayingCard` to use this sort index for comparisons, we need to add a field `.sort_index` to the class. However, this field should be calculated from the other fields `.rank` and `.suit` automatically. This is exactly what the special method `.__post_init__()` is for. It allows for special processing after the regular `.__init__()` method is called:
# +
from dataclasses import dataclass, field
RANKS = '2 3 4 5 6 7 8 9 10 J Q K A'.split()
SUITS = '♣ ♢ ♡ ♠'.split()
@dataclass(order=True)
class PlayingCard:
sort_index: int = field(init=False, repr=False)
rank: str
suit: str
def __post_init__(self):
self.sort_index = (
RANKS.index(self.rank) * len(SUITS) + SUITS.index(self.suit)
)
def __str__(self):
return f'{self.suit}{self.rank}'
# -
# Note that `.sort_index` is added as the first field of the class. That way, the comparison is first done using `.sort_index` and only if there are ties are the other fields used. Using `field()`, you must also specify that `.sort_index` should not be included as a parameter in the `.__init__()` method (because it is calculated from the `.rank` and `.suit` fields). To avoid confusing the user about this implementation detail, it is probably also a good idea to remove `.sort_index` from the `repr` of the class.
# Finally, aces are high:
queen_of_hearts = PlayingCard('Q', '♡')
ace_of_spades = PlayingCard('A', '♠')
ace_of_spades > queen_of_hearts
# You can now easily create a sorted deck:
Deck(sorted(make_french_deck()))
# Or, if you don’t care about [sorting](https://realpython.com/sorting-algorithms-python/), this is how you draw a random hand of 10 cards:
from random import sample
Deck(sample(make_french_deck(), k=10))
# Of course, you don’t need `order=True` for that…
# <a class="anchor" id="immutable_data_classes"></a>
#
# ## Immutable Data Classes
# One of the defining features of the `namedtuple` you saw earlier is that it is [immutable](https://medium.com/@meghamohan/mutable-and-immutable-side-of-python-c2145cf72747). That is, the value of its fields may never change. For many types of data classes, this is a great idea! To make a data class immutable, set `frozen=True` when you create it. For example, the following is an immutable version of the `Position` class [you saw earlier](#basic-data-classes):
# +
from dataclasses import dataclass
@dataclass(frozen=True)
class Position:
name: str
lon: float = 0.0
lat: float = 0.0
# -
# In a frozen data class, you can not assign values to the fields after creation:
pos = Position('Oslo', 10.8, 59.9)
pos.name
pos.name = 'Stockholm'
# Be aware though that if your data class contains mutable fields, those might still change. This is true for all nested data structures in Python (see [this video for further info](https://www.youtube.com/watch?v=p9ppfvHv2Us)):
# +
from dataclasses import dataclass
from typing import List
@dataclass(frozen=True)
class ImmutableCard:
rank: str
suit: str
@dataclass(frozen=True)
class ImmutableDeck:
cards: List[ImmutableCard]
# -
# Even though both `ImmutableCard` and `ImmutableDeck` are immutable, the list holding `cards` is not. You can therefore still change the cards in the deck:
queen_of_hearts = ImmutableCard('Q', '♡')
ace_of_spades = ImmutableCard('A', '♠')
deck = ImmutableDeck([queen_of_hearts, ace_of_spades])
deck
deck.cards[0] = ImmutableCard('7', '♢')
deck
# To avoid this, make sure all fields of an immutable data class use immutable types (but remember that types are not enforced at runtime). The `ImmutableDeck` should be implemented using a tuple instead of a list:
deck = ImmutableDeck((queen_of_hearts, ace_of_spades))
deck
deck.cards[0] = ImmutableCard('7', '♢')
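A sketch of that fully immutable variant, with `cards` annotated as a tuple (the empty-tuple default is an addition for convenience):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ImmutableCard:
    rank: str
    suit: str

@dataclass(frozen=True)
class ImmutableDeck:
    # a tuple cannot be modified in place, so the deck is immutable all the way down
    cards: Tuple[ImmutableCard, ...] = ()

deck = ImmutableDeck((ImmutableCard('Q', '♡'), ImmutableCard('A', '♠')))
```

Now `deck.cards[0] = ImmutableCard('7', '♢')` raises a `TypeError` rather than silently changing the deck.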
# <a class="anchor" id="inheritance"></a>
#
# ## Inheritance
# You can [subclass](https://realpython.com/python3-object-oriented-programming/) data classes quite freely. As an example, we will extend our `Position` example with a `country` field and use it to record capitals:
# +
from dataclasses import dataclass
@dataclass
class Position:
name: str
lon: float
lat: float
@dataclass
class Capital(Position):
country: str
# -
# In this simple example, everything works without a hitch:
Capital('Oslo', 10.8, 59.9, 'Norway')
# The `country` field of `Capital` is added after the three original fields in `Position`. Things get a little more complicated if any fields in the base class have default values:
# +
from dataclasses import dataclass
@dataclass
class Position:
name: str
lon: float = 0.0
lat: float = 0.0
@dataclass
class Capital(Position):
country: str # Does NOT work
# -
# This code will immediately crash with a `TypeError` complaining that “non-default argument ‘country’ follows default argument.” The problem is that our new `country` field has no default value, while the `lon` and `lat` fields have default values. The data class will try to write an `.__init__()` method with the following signature:
def __init__(self, name: str, lon: float = 0.0, lat: float = 0.0, country: str):
...
# However, this is not valid Python. [If a parameter has a default value, all following parameters must also have a default value](https://docs.python.org/reference/compound_stmts.html#function-definitions). In other words, if a field in a base class has a default value, then all new fields added in a subclass must have default values as well.
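One simple workaround is to give the new field a default value of its own:

```python
from dataclasses import dataclass

@dataclass
class Position:
    name: str
    lon: float = 0.0
    lat: float = 0.0

@dataclass
class Capital(Position):
    country: str = 'Unknown'  # a default restores a valid __init__ signature

print(Capital('Oslo', 10.8, 59.9, 'Norway'))
```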
# Another thing to be aware of is how fields are ordered in a subclass. Starting with the base class, fields are ordered in the order in which they are first defined. If a field is redefined in a subclass, its order does not change. For example, if you define `Position` and `Capital` as follows:
# +
from dataclasses import dataclass
@dataclass
class Position:
name: str
lon: float = 0.0
lat: float = 0.0
@dataclass
class Capital(Position):
country: str = 'Unknown'
lat: float = 40.0
# -
# Then the order of the fields in `Capital` will still be `name`, `lon`, `lat`, `country`. However, the default value of `lat` will be `40.0`.
Capital('Madrid', country='Spain')
# <a class="anchor" id="optimizing_data_classes"></a>
#
# ## Optimizing Data Classes
# I’m going to end this tutorial with a few words about [slots](https://docs.python.org/reference/datamodel.html#slots). Slots can be used to make classes faster and use less memory. Data classes have no explicit syntax for working with slots, but the normal way of creating slots works for data classes as well. (They really are just regular classes!)
# +
from dataclasses import dataclass
@dataclass
class SimplePosition:
name: str
lon: float
lat: float
@dataclass
class SlotPosition:
__slots__ = ['name', 'lon', 'lat']
name: str
lon: float
lat: float
# -
# Essentially, slots are defined using `.__slots__` to list the variables on a class. Variables or attributes not present in `.__slots__` may not be defined. Furthermore, a slots class may not have default values.
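The restriction is easy to see in action — assigning an attribute that is not listed in `.__slots__` fails with an `AttributeError`:

```python
from dataclasses import dataclass

@dataclass
class SlotPosition:
    __slots__ = ['name', 'lon', 'lat']
    name: str
    lon: float
    lat: float

p = SlotPosition('Oslo', 10.8, 59.9)
try:
    p.altitude = 100  # not declared in __slots__
except AttributeError as err:
    print(err)
```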
# The benefit of adding such restrictions is that certain optimizations may be done. For instance, slots classes take up less memory, as can be measured using [Pympler](https://pythonhosted.org/Pympler/):
# !pip install pympler
from pympler import asizeof
simple = SimplePosition('London', -0.1, 51.5)
slot = SlotPosition('Madrid', -3.7, 40.4)
asizeof.asizesof(simple, slot)
# Similarly, slots classes are typically faster to work with. The following example measures the speed of attribute access on a slots data class and a regular data class using [timeit](https://docs.python.org/library/timeit.html) from the standard library.
# ```python
# >>> from timeit import timeit
# >>> timeit('slot.name', setup="slot=SlotPosition('Oslo', 10.8, 59.9)", globals=globals())
# 0.05882283499886398
# >>> timeit('simple.name', setup="simple=SimplePosition('Oslo', 10.8, 59.9)", globals=globals())
# 0.09207444800267695
# ```
# In this particular example, the slot class is about 35% faster.
# <a class="anchor" id="conclusion_&_further_reading"></a>
#
# ## Conclusion & Further Reading
# Data classes are one of the new features of Python 3.7. With data classes, you do not have to write boilerplate code to get proper initialization, representation, and comparisons for your objects.
# You have seen how to define your own data classes, as well as:
# - How to add default values to the fields in your data class
# - How to customize the ordering of data class objects
# - How to work with immutable data classes
# - How inheritance works for data classes
#
# If you want to dive into all the details of data classes, have a look at [PEP 557](https://www.python.org/dev/peps/pep-0557/) as well as the discussions in the original [GitHub repo](https://github.com/ericvsmith/dataclasses/issues?utf8=%E2%9C%93&q=).
# In addition, Raymond Hettinger’s PyCon 2018 talk [Dataclasses: The code generator to end all code generators](https://www.youtube.com/watch?v=T-TwcmT6Rcw) is well worth watching.
# If you do not yet have Python 3.7, there is also a [data classes backport for Python 3.6](https://github.com/ericvsmith/dataclasses). And now, go forth and write less code!
| 01. Python/04. Advanced/10.2 Data Classes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from plr import slope, intercept, predict
import pandas as pd
test = pd.read_csv("https://raw.githubusercontent.com/ThomasJewson/datasets/master/AAPL-MSFT-ClosePrice2010-2020.csv")
# -
slope(test)
intercept(test)
predict(test, ["AAPL close", "MSFT close"], 160)
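`plr` is a local helper package, so its internals are not shown here; assuming it wraps an ordinary least-squares fit, the same numbers can be sketched with NumPy alone (the data below is made up for illustration):

```python
import numpy as np

# toy stand-ins for the two close-price columns
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.1, 6.0, 8.1])

slope_, intercept_ = np.polyfit(x, y, 1)  # degree-1 fit: y ≈ slope_ * x + intercept_
prediction = slope_ * 160 + intercept_
print(slope_, intercept_, prediction)
```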
| pandas-linear-regression/Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# # !wget https://f000.backblazeb2.com/file/malay-dataset/dumping/clean/filtered-dumping-academia.txt
# # !wget https://f000.backblazeb2.com/file/malay-dataset/dumping/clean/filtered-dumping-wiki.txt
# # !wget https://f000.backblazeb2.com/file/malay-dataset/dumping/clean/dumping-cleaned-news.txt
# # !wget https://f000.backblazeb2.com/file/malay-dataset/dumping/clean/dumping-iium.txt
# # !wget https://f000.backblazeb2.com/file/malay-dataset/dumping/clean/dumping-parliament.txt
# # !wget https://f000.backblazeb2.com/file/malay-dataset/dumping/clean/dumping-watpadd.txt
# # !wget https://f000.backblazeb2.com/file/malay-dataset/dumping/clean/filtered-dumping-cleaned-common-crawl.txt
files = ['filtered-dumping-academia.txt',
'filtered-dumping-wiki.txt',
'dumping-cleaned-news.txt',
'dumping-iium.txt',
'dumping-parliament.txt',
'dumping-watpadd.txt',
'filtered-dumping-cleaned-common-crawl.txt']
# +
import os
import string
vocabs = list(string.ascii_lowercase + string.digits) + [' ']
directory = '/home/husein/pure-text'
# +
import unicodedata
import re
import itertools
def preprocessing_text(text):
    # `text` instead of `string`, which shadowed the imported string module
    text = unicodedata.normalize('NFC', text.lower())
    text = ''.join([c if c in vocabs else ' ' for c in text])
    text = re.sub(r'[ ]+', ' ', text).strip()
    # keep at most two of any run of identical characters
    text = ''.join(''.join(s)[:2] for _, s in itertools.groupby(text))
    return text
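A standalone restatement of the same cleaning pipeline (with a made-up sample string) makes the three steps easy to verify: normalize and replace out-of-vocabulary characters, collapse runs of spaces, then cap runs of identical characters at two:

```python
import itertools
import re
import string
import unicodedata

VOCAB = set(string.ascii_lowercase + string.digits + ' ')

def clean(text):
    # lowercase + NFC-normalize, map out-of-vocabulary characters to spaces
    text = unicodedata.normalize('NFC', text.lower())
    text = ''.join(c if c in VOCAB else ' ' for c in text)
    # collapse repeated spaces
    text = re.sub(r'[ ]+', ' ', text).strip()
    # keep at most two of any consecutive identical character
    return ''.join(''.join(group)[:2] for _, group in itertools.groupby(text))

print(clean('Heyyy!!! 123'))  # -> 'heyy 123'
```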
# +
texts = []
for f in files:
print(f)
with open(os.path.join(directory, f)) as fopen:
text = list(filter(None, fopen.read().split('\n')))
texts.extend(text)
# +
import json
with open('bahasa-asr-train-combined.json') as fopen:
data = json.load(fopen)
texts.extend(data['Y'])
with open('bahasa-asr-test.json') as fopen:
data = json.load(fopen)
texts.extend(data['Y'])
# +
import mp
from tqdm import tqdm
def loop(texts):
texts, _ = texts
cleaned_texts = []
for i in tqdm(range(len(texts))):
t = preprocessing_text(texts[i])
if len(t):
cleaned_texts.append(t)
return cleaned_texts
cleaned_texts = mp.multiprocessing(texts, loop, cores = 16)
# -
len(texts), len(cleaned_texts)
with open('text.txt', 'w') as fopen:
fopen.write('\n'.join(cleaned_texts))
# !./kenlm/build/bin/lmplz --text text.txt --arpa out.arpa -o 3 --prune 0 1 1
# !./kenlm/build/bin/build_binary -q 8 -b 7 -a 256 trie out.arpa out.trie.klm
# !rm text.txt out.arpa
b2_application_key_id = os.environ['b2_application_key_id']
b2_application_key = os.environ['b2_application_key']
from b2sdk.v1 import *
info = InMemoryAccountInfo()
b2_api = B2Api(info)
application_key_id = b2_application_key_id
application_key = b2_application_key
b2_api.authorize_account("production", application_key_id, application_key)
file_info = {'how': 'good-file'}
b2_bucket = b2_api.get_bucket_by_name('malaya-speech-model')
output_name = 'language-model/dump-combined/model.trie.klm'
b2_bucket.upload_local_file(
    local_file='out.trie.klm',
    file_name=output_name,
    file_infos=file_info,
)
| pretrained-model/prepare-lm/build-lm-combined.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="NAaMUymYVEBf" outputId="b3751096-0bb2-4bf4-eb5f-249288661c69"
from google.colab import drive
drive.mount('/content/drive/')
# + colab={"base_uri": "https://localhost:8080/"} id="LemiRDflWA6G" outputId="d250f4ed-3e9c-4242-917d-7612d056de96"
import os
os.chdir('/content/drive/My Drive/code/')
print(os.getcwd())
# + id="dBnrgF_IVCXE"
import torch
from fruit_360_small_dataset import fruit_360_small
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader
from torchvision.models import vgg16
from torchvision import transforms
import numpy as np
from torch import nn
# + id="mCqqnmtYVCXH"
transform = transforms.Compose([
transforms.Normalize(mean=0.0, std=255.0),
transforms.Resize(300),
])
# + id="QXfoWztnVCXH"
train_dat =fruit_360_small("./Datasets/fruit-360-small/new",train = True, transform = transform)
test_dat = fruit_360_small("./Datasets/fruit-360-small/new",train = False, transform = transform)
# + id="StDzwrIWVCXI"
#train_dat[2][0].shape
# + colab={"base_uri": "https://localhost:8080/"} id="zMKkmHqSVCXJ" outputId="e436d6c5-9a64-4ae5-9ec4-0d355a58e9bc"
class_labels = train_dat.class_labels
print(class_labels)
# + id="Fu2QMc5sVCXJ"
batch_size = 64
train_load = DataLoader(train_dat, shuffle=True, batch_size = batch_size)
test_load = DataLoader(test_dat, shuffle=True, batch_size = batch_size)
# + colab={"base_uri": "https://localhost:8080/", "height": 373} id="0Kd7XzziVCXJ" outputId="4f61de73-3b96-4d59-f74d-477789658bca"
rows = 3
cols = 3
fig,axs = plt.subplots(rows,cols, figsize=(6,6))
for i in range(rows):
for j in range(cols):
ax = axs[i,j]
idx = np.random.randint(len(train_dat))
img,label = train_dat[idx]
img = img.permute(1,2,0)
ax.set_title(class_labels[label])
ax.axis("off")
ax.imshow(img)
# + colab={"base_uri": "https://localhost:8080/"} id="rak1W69EVCXK" outputId="d1c7775c-05b4-4eee-8ae2-4878511d20d9"
device =torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
# + colab={"base_uri": "https://localhost:8080/", "height": 83, "referenced_widgets": ["ec24ec7153ae4483ad3d7e593840943c", "f80e34b958954df9920a3ba01dedccc2", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "b134b9e23af44ebcaee0ec8080ea6d98", "dc6e8272ea14469ca6eb17a6a85bb9eb"]} id="VHz_D4P9VCXK" outputId="875d6fed-e45e-410b-a89b-62a346bb630c"
model = vgg16(pretrained=True)
# + id="C_Tjr0EUVCXK"
model = model.to(device)
# + colab={"base_uri": "https://localhost:8080/"} id="Xrc8AkVbVCXL" outputId="1bc2048b-8bf4-4bc0-85d4-1d1c685a99a9"
print(model)
# + id="WhChHY-hVCXL"
for params in model.parameters():
    params.requires_grad = False  # `requires_grad` (with an s); `require_grad` would silently do nothing
model.classifier._modules['6'] = nn.Linear(4096,2)
model = model.to(device)
# + id="lWY4fuE2VCXL"
loss_fn = torch.nn.CrossEntropyLoss()
optim = torch.optim.Adam(model.parameters())
# + id="FnlJB4uMVCXL"
def train(train_load, model, loss_fn, optim, size):
total_loss = 0.0
total_acc = 0.0
for batch,(x,y) in enumerate(train_load):
x = x.to(device)
y = y.to(device)
pred = model(x)
loss = loss_fn(pred,y)
total_loss+=loss.item()
total_acc+= (pred.argmax(1)==y).sum().item()
print(f"{batch+1}th batch: Training loss: {total_loss}")
optim.zero_grad()
loss.backward()
optim.step()
total_loss/=size
total_acc = total_acc/size*100
return total_loss, total_acc
# + id="xMYzaaxGVCXM"
def test(test_load, model, loss_fn, size):
total_loss = 0.0
total_acc = 0.0
model.eval()
with torch.no_grad():
for (x,y) in test_load:
x = x.to(device)
y = y.to(device)
pred = model(x)
loss = loss_fn(pred,y)
total_loss+=loss.item()
total_acc+= (pred.argmax(1)==y).sum().item()
total_loss/=size
total_acc = total_acc/size*100
return total_loss, total_acc
# + colab={"base_uri": "https://localhost:8080/"} id="n0z0CHjvVCXM" outputId="80d3896e-4333-4dca-85b8-1b0cd1515ec1"
epoch = 10
train_loss = []
train_acc=[]
test_loss = []
test_acc = []
train_size = len(train_dat)
test_size = len(test_dat)
for e in range(1,epoch+1):
print(f"Epoch {e} begins-------------------")
loss,acc = train(train_load, model, loss_fn, optim, train_size)
print(f"\n Training Loss {loss} \t Training Accuracy {acc}")
train_loss.append(loss)
train_acc.append(acc)
loss,acc = test(test_load, model, loss_fn, test_size)
print(f"\n Test Loss {loss} \t Test Accuracy {acc}")
test_loss.append(loss)
test_acc.append(acc)
print("Done-----------------")
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="ZjjMNsGyVCXN" outputId="c8e53fba-d547-4b24-a6f8-3568d9fb52f7"
plt.title("Loss vs iteration")
plt.plot(train_loss, 'r', label = 'train')
plt.plot(test_loss, 'b', label = 'test')
plt.legend()
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="GazHT1JJaViX" outputId="f615f55b-370f-4297-de39-1c394f5ad85f"
plt.title("Accuracy vs iteration")
plt.plot(train_acc, 'r', label = 'train')
plt.plot(test_acc, 'b', label = 'test')
plt.ylim(ymin=0)
plt.legend()
# + colab={"base_uri": "https://localhost:8080/", "height": 551} id="pKxrlNZ_cdyO" outputId="4d0a444f-f35c-4d0b-f92a-cc712a089307"
rows = 3
cols = 3
fig,axs = plt.subplots(rows,cols,figsize=(9,9))
for i in range(rows):
for j in range(cols):
ax = axs[i,j]
idx = np.random.randint(len(test_dat))
img,label = test_dat[idx]
model.eval()
with torch.no_grad():
x = img.to(device).unsqueeze(0)
pred = model(x).argmax()
truth_label = class_labels[label]
pred_label = class_labels[pred]
img = img.permute(1,2,0)
ax.axis('off')
ax.imshow(img)
ax.set_title(f"Truth:{truth_label}\n Predicted:{pred_label}")
# + id="UEcfLSc6eTam"
| Week4/VGG_small_fruit_dataset (2).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.1 64-bit
# language: python
# name: python3
# ---
# ## Instructions
#
# Build a Black Jack game
logo = """
.------. _ _ _ _ _
|A_ _ |. | | | | | | (_) | |
|( \/ ).-----. | |__ | | __ _ ___| | ___ __ _ ___| | __
| \ /|K /\ | | '_ \| |/ _` |/ __| |/ / |/ _` |/ __| |/ /
| \/ | / \ | | |_) | | (_| | (__| <| | (_| | (__| <
`-----| \ / | |_.__/|_|\__,_|\___|_|\_\ |\__,_|\___|_|\_\\
| \/ K| _/ |
`------' |__/
"""
import random
#create a list with all the possible cards
cards = [11, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10]
# define a function that deals cards
def deal_card():
card = random.choice(cards)
return card
# +
# define a function that calculates the score
def calculate_score(hand):
    # a two-card 21 is a blackjack, signalled by returning 0
    if sum(hand) == 21 and len(hand) == 2:
        return 0
    # demote one ace from 11 to 1 if the hand would otherwise bust
    if 11 in hand and sum(hand) > 21:
        hand.remove(11)
        hand.append(1)
    return sum(hand)
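A quick standalone sanity check of that scoring rule (restated here so it can run on its own):

```python
def calculate_score(hand):
    # a two-card 21 (ace plus ten-card) is a blackjack, signalled by 0
    if sum(hand) == 21 and len(hand) == 2:
        return 0
    # demote one ace from 11 to 1 if the hand would otherwise bust
    if 11 in hand and sum(hand) > 21:
        hand.remove(11)
        hand.append(1)
    return sum(hand)

print(calculate_score([11, 10]))    # -> 0 (blackjack)
print(calculate_score([11, 9, 5]))  # -> 15 (ace counted as 1)
print(calculate_score([10, 9, 5]))  # -> 24 (bust)
```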
# +
# define a function that compares the user's and computer's scores
def check(user_score, computer_score):
if user_score > 21 and computer_score > 21:
return "You went over. You lose"
if user_score == computer_score:
return "Draw "
elif computer_score == 0:
return "You lose. Opponent has a Blackjack"
elif user_score == 0:
return "Win with a Blackjack"
elif user_score > 21:
return "You went over. You lose"
elif computer_score > 21:
return "Opponent went over. You win"
elif user_score > computer_score:
return "You win"
else:
return "You lose"
# +
# define a function to play the game
def play_blackjack():
print(logo)
user_cards = []
computer_cards = []
end_of_game = False
for _ in range(2):
user_cards.append(deal_card())
computer_cards.append(deal_card())
while not end_of_game:
user_score = calculate_score(user_cards)
computer_score = calculate_score(computer_cards)
print(f" Your cards: {user_cards}, current score: {user_score}")
print(f" Computer's first card: {computer_cards[0]}")
if user_score == 0 or computer_score == 0 or user_score > 21:
end_of_game = True
else:
game_cont = input("Type 'y' to get another card, type 'n' to pass: ")
if game_cont == "y":
user_cards.append(deal_card())
else:
end_of_game = True
#The computer should keep drawing cards as long as it has a score less than 17.
while computer_score != 0 and computer_score < 17:
computer_cards.append(deal_card())
computer_score = calculate_score(computer_cards)
print(f" Your final hand: {user_cards}, final score: {user_score}")
print(f" Computer's final hand: {computer_cards}, final score: {computer_score}")
print(
check(user_score, computer_score)
)
# +
#call the game function
play_blackjack()
| Day-11/Blackjack Capstone Project.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from mountain_car_stds import *
import rl_lib
from rl_lib.agents.q_learning import DNNQLearningAgent
import tensorflow as tf
class MCAgent(DNNQLearningAgent):
def __init__(self):
super().__init__(state_dims=2,
actions_num=3,
hidden_layers=[64, 32],
activations=[tf.nn.relu, tf.nn.relu],
drop_out=.3,
lr=1e-1,
mapper=rl_lib.utils.UnitMapper(state_low, state_high),
epsilon_factor=1)
agent = MCAgent()
# -
df = run(agent, 1000, verbose=1)
show_episode(df, episode=-1)
show_Q(agent)
show_progress(df, agent)
# # DNN Q-Learning agent was unable to solve that problem
#
# The only difference with the RBF Q-Learning agent is the type of the function approximator. So lets check the ability of the NN to approximate the Q function.
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
q = pd.read_csv('Q.csv')
q.head()
states = np.array(q[['state1', 'state2']])
action1, action2, action3 = np.array(q['action1']), np.array(q['action2']), np.array(q['action3'])
scale = state_high-state_low
def scaler(state):
scaled = 2*(state-state_low)/scale-1
return scaled
norm_states = np.array([scaler(s) for s in states])
x, y = norm_states, np.reshape(action1, (-1, 1))
from rl_lib.utils.nets import *
net = FullyConnectedDNN(2,
1,
hidden_layers=[64, 32],
activations=[tf.nn.relu, tf.nn.relu],
drop_out=.1,
lr=1e-2)
plt.figure(figsize=(15, 8))
plt.subplot(1, 3, 1)
plot(x, y[:, 0])
plt.colorbar(orientation='horizontal')
pred = np.array([net.predict(xy) for xy in x])[:, 0]
print(pred.shape)
plt.subplot(1, 3, 2)
plot(x, pred)
plt.colorbar(orientation='horizontal')
print(x.shape, y.shape)
for i in np.random.randint(y.shape[0], size=10000):
net.partial_fit(np.array(x[i]), np.array(y[i]))
pred = np.array([net.predict(xy) for xy in x])[:, 0]
print(pred.shape)
plt.subplot(1, 3, 3)
plot(x, pred)
plt.colorbar(orientation='horizontal')
# -
# As we can see the deep neural network has the ability to approximate the Q function. That means that the problem lies on the training procedure.
#
# Maybe if we use some batching technique (such as minibatching) in order to train the NN more efficiently would solve the problem, but it is not part of this exercise.
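A minimal sketch of what such a minibatch loop could look like — the generator below is illustrative and not part of `rl_lib`:

```python
import numpy as np

def minibatches(x, y, batch_size=32, epochs=1, seed=0):
    """Yield shuffled (x_batch, y_batch) pairs, reshuffling every epoch."""
    rng = np.random.default_rng(seed)
    n = len(x)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            yield x[idx], y[idx]

# hypothetical usage, assuming a net with a batch-capable fit method:
# for xb, yb in minibatches(x, y, batch_size=32, epochs=10):
#     net.partial_fit(xb, yb)
```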
| examples/DNNQLearningAgent.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="mBQ9k0A4P8_R"
# !pip install scikit-learn==1.0
# !pip install xgboost==1.4.2
# !pip install catboost==0.26.1
# !pip install pandas==1.3.3
# !pip install radiant-mlhub==0.3.0
# !pip install rasterio==1.2.8
# !pip install numpy==1.21.2
# !pip install pathlib==1.0.1
# !pip install tqdm==4.62.3
# !pip install joblib==1.0.1
# !pip install matplotlib==3.4.3
# !pip install Pillow==8.3.2
# !pip install torch==1.9.1
# !pip install plotly==5.3.1
# + id="l5ypuk3gP76W"
import pandas as pd
import numpy as np
from sklearn.neighbors import NearestNeighbors
from joblib import Parallel,delayed
import gc
# + id="MNYvLYc9P76a"
train_df_mean = pd.read_csv('train_mean.csv')
train_df_coordinates = pd.read_csv('train_coordinates_lat_lon.csv')
test_df_coordinates = pd.read_csv('test_coordinates_lat_lon.csv')
train_df_coordinates = train_df_coordinates.merge(train_df_mean,on=['field_id'],how='left')[['field_id','lat','long','label']]
# + id="Kpply6YBP76a"
train_df_coordinates = train_df_coordinates[train_df_coordinates['label'].isin(range(1,10))]
# + id="G5dExQDQP76b"
# + id="lnP4ds_sP76b" outputId="3653cf7f-300d-452c-9eb2-6aa9a6a118df"
merge_df = train_df_coordinates.append(test_df_coordinates)
merge_df = merge_df.fillna(10)
merge_df.shape
# + id="En7jjPn0P76c" outputId="043b0782-e0fd-4afa-9955-7ef93832dde8"
field_label_dict = dict(zip(merge_df['field_id'].values,merge_df['label'].values))
field_range_dict = dict(zip(range(len(merge_df)),merge_df['field_id'].values))
len(field_label_dict)
# + [markdown] id="lRpjwjacP76d"
# ### Nearest points within radius of 0.25
# + id="vxYZvBIWP76e"
vals = merge_df[['lat','long']].values
radius = 0.25
# + id="k-Plp3jjP76f"
neigh = NearestNeighbors(radius=radius,metric='haversine',n_jobs=-1).fit(vals)
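# One caveat (an added note, not from the original notebook): scikit-learn's `haversine` metric expects `[lat, lon]` in radians and returns distances on the unit sphere, so with raw degree coordinates the `radius` above acts as a unitless tuning knob rather than a physical distance. The underlying formula is:

```python
import math

def haversine(lat1, lon1, lat2, lon2):
    # All inputs in radians; returns the great-circle distance on a unit sphere.
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * math.asin(math.sqrt(a))

# From the equator/prime meridian to the north pole is a quarter great circle:
assert abs(haversine(0.0, 0.0, math.pi / 2, 0.0) - math.pi / 2) < 1e-9
```

Multiplying the result by the Earth's radius (about 6371 km) would convert it to kilometres.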
# + id="I_TJ7OMfP76f"
def get_frequency(closest_indices):
    count_dict = {1:0,2:0,3:0,4:0,5:0,6:0,7:0,8:0,9:0,10:0}
    for i in closest_indices:
        label = field_label_dict[field_range_dict[i]]
        count_dict[label] += 1
    return [count_dict[i] for i in range(1,11)]

def get_nearest(point, radius):
    nbrs = neigh.radius_neighbors([point], radius=radius)
    indices = nbrs[1][0]
    return get_frequency(indices)
# + id="Yn8oHP3rP76g" outputId="b5d061f7-5985-49fe-ba7d-0da41e81600b"
outs_train = Parallel(n_jobs=-1,timeout=100000,backend="multiprocessing", verbose=1)(delayed(get_nearest)(point=vals[i],radius=radius) for i in range(vals.shape[0]))
# + id="tXJvB5AwP76h" outputId="e7cfc605-4161-4bd7-ee38-1c2310f737e2"
column_names = [f'Crop_{i}_{radius}' for i in range(1,11)]
nearest_data = pd.DataFrame(data = outs_train,columns = column_names)
nearest_data['field_id'] = merge_df['field_id'].values.tolist()
nearest_data
# + id="c4oyWCPBP76h"
nearest_data[f'count_{radius}'] = nearest_data[column_names].sum(axis=1)
for i in column_names:
    nearest_data[i] = 100*(nearest_data[i]/nearest_data[f'count_{radius}'])
# + id="2rJU26hSP76h"
nearest_data.to_csv(f'full_nearest_radius_{radius}.csv',index=False)
# + [markdown] id="WtIrr_vxP76i"
# ### Nearest data within radius of 0.4
# + id="C9poiFmeP76i"
vals = merge_df[['lat','long']].values
radius = 0.4
# + id="AbVUc89lP76i"
neigh = NearestNeighbors(radius=radius,metric='haversine',n_jobs=-1).fit(vals)
# + id="6uSQgxOxP76i"
def get_frequency(closest_indices):
    count_dict = {1:0,2:0,3:0,4:0,5:0,6:0,7:0,8:0,9:0,10:0}
    for i in closest_indices:
        label = field_label_dict[field_range_dict[i]]
        count_dict[label] += 1
    return [count_dict[i] for i in range(1,11)]

def get_nearest(point, radius):
    nbrs = neigh.radius_neighbors([point], radius=radius)
    indices = nbrs[1][0]
    return get_frequency(indices)
# + id="izDy5EugP76j" outputId="8c58c533-4bfb-4152-b0d4-34897cb57e55"
outs_train = Parallel(n_jobs=-1,timeout=100000,backend="multiprocessing", verbose=1)(delayed(get_nearest)(point=vals[i],radius=radius) for i in range(vals.shape[0]))
# + id="5LGe-9qXP76j" outputId="b2e52fe5-56c3-4168-eea7-8cb6fd8296e0"
column_names = [f'Crop_{i}_{radius}' for i in range(1,11)]
nearest_data = pd.DataFrame(data = outs_train,columns = column_names)
nearest_data['field_id'] = merge_df['field_id'].values.tolist()
nearest_data
# + id="kZAGh5bCP76j"
nearest_data[f'count_{radius}'] = nearest_data[column_names].sum(axis=1)
for i in column_names:
    nearest_data[i] = 100*(nearest_data[i]/nearest_data[f'count_{radius}'])
# + id="DfNjffyEP76k" outputId="135a8a39-a767-4d77-8715-2f3c6661d586"
nearest_data
# + id="7iqdwVVkP76k"
nearest_data.to_csv(f'full_nearest_radius_{radius}.csv',index=False)
# + id="AdMQeN8aP76k" outputId="b94f1789-524d-4d45-800a-d4708978dabc"
# train_nearest = nearest_data[:train_df_coordinates.shape[0]]
# test_nearest = nearest_data[train_df_coordinates.shape[0]:]
# # train_nearest.to_csv('nearest_data_train.csv',index=False)
# # test_nearest.to_csv('nearest_data_test.csv',index=False)
# train_nearest['label'] = train_df_coordinates['label'].values.tolist()
# train_nearest.shape,test_nearest.shape
# + id="p_giVZ-PP76l"
# + id="hkAKdsh1P76l"
| 6th Place - Click Click Boom/zindi_sentinel2_codes/step7_nearest_points.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # 2mo_benzene example with new cutting strategy (2 fragments) R label only
# +
import os
from rmgpy import settings
from rmgpy.species import Species
from rmgpy.data.rmg import RMGDatabase
import afm.react
import afm.fragment
# +
# load kinetics database
db_path = settings['database.directory']
database = RMGDatabase()
# forbidden structure loading
database.loadForbiddenStructures(os.path.join(db_path, 'forbiddenStructures.py'))
# kinetics family loading
database.loadKinetics(os.path.join(db_path, 'kinetics'),
kineticsFamilies='default',
reactionLibraries=[]
)
#def test_react_fragments1(self):
frag1 = afm.fragment.Fragment(label='frag1').from_SMILES_like_string('c1ccccc1CC(C)CR')
## 2 fragments reactions
frag2 = afm.fragment.Fragment(label='frag2').from_SMILES_like_string('CCCCCR')
# -
fragment_tuple = (frag1, frag2)
reactions = afm.react.react_fragments(database.kinetics,
fragment_tuple,
only_families=[],
prod_resonance=False)
2**15
# means 2^15 = 32768
rxn_index = range(0, len(reactions))
for index in rxn_index:
    display(reactions[index])
len(reactions)
# +
frag1 = afm.fragment.Fragment(label='frag1').from_SMILES_like_string('CCCCCR')
spec1 = Species(molecule=[frag1])
spec_tuple = (spec1,)
reactions = database.kinetics.generate_reactions_from_families(spec_tuple)
# -
len(reactions)
rxn_index = range(0, len(reactions))
for index in rxn_index:
    display(reactions[index])
reactions[0]
| examples/2mobenzene/2mo_2_frag_1_slded.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Chinese
# #### Partial documents
# ### 1. Word segmentation
# +
# demo: word segmentation
import jieba
with open('in_the_name_of_people/text1.txt', encoding='utf-8') as f:
    document = f.read()
# document_decode = document.decode('gbk')
document_cut = jieba.cut(document)
result = ' '.join(document_cut)
# result = result.encode('utf-8')
with open('in_the_name_of_people/nlp_test1.txt', 'w', encoding="utf-8") as f2:
    f2.write(result)
# -
# ### 2. Register known tokens such as person and place names
# +
# jieba handles some person and place names poorly, but we can add these words to its dictionary as follows:
jieba.suggest_freq('沙瑞金', True)
jieba.suggest_freq('田国富', True)
jieba.suggest_freq('高育良', True)
jieba.suggest_freq('侯亮平', True)
jieba.suggest_freq('钟小艾', True)
jieba.suggest_freq('陈岩石', True)
jieba.suggest_freq('欧阳菁', True)
jieba.suggest_freq('易学习', True)
jieba.suggest_freq('王大路', True)
jieba.suggest_freq('蔡成功', True)
jieba.suggest_freq('孙连城', True)
jieba.suggest_freq('季昌明', True)
jieba.suggest_freq('丁义珍', True)
jieba.suggest_freq('郑西坡', True)
jieba.suggest_freq('赵东来', True)
jieba.suggest_freq('高小琴', True)
jieba.suggest_freq('赵瑞龙', True)
jieba.suggest_freq('林华华', True)
jieba.suggest_freq('陆亦可', True)
jieba.suggest_freq('刘新建', True)
jieba.suggest_freq('刘庆祝', True)
with open('in_the_name_of_people/text2.txt', encoding='utf-8') as f:
    document = f.read()
document_cut = jieba.cut(document)
result = ' '.join(document_cut)
# result = result.encode('utf-8')
with open('in_the_name_of_people/nlp_test2.txt', 'w', encoding="utf-8") as f2:
    f2.write(result)
# -
# ### 3. Load a stop-word list (unneeded tokens such as a/an/the or 的/地/得); stop_words.txt can be downloaded online
#
# load the stop-word list from a file
stpwrdpath = "stop_words/stop_words.txt"
stpwrd_dic = open(stpwrdpath)
stpwrd_content = stpwrd_dic.read()
# convert the stop-word list to a Python list
stpwrdlst = stpwrd_content.splitlines()
stpwrd_dic.close()
print(stpwrdlst)
# ### 4. Compute the tf-idf of each document
# +
with open('in_the_name_of_people/nlp_test1.txt', encoding="UTF-8") as f3:
    res1 = f3.read()
print(res1)
with open('in_the_name_of_people/nlp_test2.txt', encoding="UTF-8") as f4:
    res2 = f4.read()
print(res2)
from sklearn.feature_extraction.text import TfidfVectorizer
corpus = [res1,res2]
# pass in the stop words
vector = TfidfVectorizer(stop_words=stpwrdlst)
tfidf = vector.fit_transform(corpus)
print(tfidf)
# -
# ### 5. Output the tf-idf weights
wordlist = vector.get_feature_names()  # all words in the bag-of-words model
# tf-idf matrix: element a[i][j] is the tf-idf weight of word j in document i
weightlist = tfidf.toarray()
# print the tf-idf word weights of each document: the outer loop iterates over
# the documents, the inner loop over the word weights within one document
for i in range(len(weightlist)):
    print("------- tf-idf word weights of document", i, "-------")
    for j in range(len(wordlist)):
        print(wordlist[j], weightlist[i][j])
# ### With the TF-IDF feature vectors of each document, we can use the data to build classification or clustering models, or run topic-model analysis.
#
# Reference: https://www.jianshu.com/p/0d7b5c226f39
#
#
# TF (Term Frequency) is the frequency with which a given word t appears in a given document d. The higher the TF, the more important t is to d; the lower the TF, the less important. Can TF alone serve as a text-similarity criterion? No: common Chinese words such as 我, 了 and 是 occur very frequently in any given Chinese document, yet they occur with equally high frequency in almost every document, so with TF as the sole criterion nearly every document would be matched.
#
# The main idea of IDF (Inverse Document Frequency) is that the fewer documents contain the word t, the larger its IDF, meaning t discriminates well between categories at the level of the whole corpus. What does IDF capture? Again, common words like 我, 了 and 是 occur with very high frequency in nearly every document, so for the corpus as a whole they are unimportant: IDF is the corpus-level measure of a word's importance.
#
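# The two quantities combine multiplicatively. As an illustration (an addition to this notebook, not how `TfidfVectorizer` is implemented internally — scikit-learn additionally smooths and normalizes the idf), a minimal pure-Python sketch:

```python
import math

def tf_idf(docs):
    # docs: list of token lists; returns one {word: weight} dict per document.
    n = len(docs)
    df = {}  # document frequency: in how many documents each word appears
    for doc in docs:
        for w in set(doc):
            df[w] = df.get(w, 0) + 1
    weights = []
    for doc in docs:
        tf = {w: doc.count(w) / len(doc) for w in set(doc)}
        weights.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return weights

w = tf_idf([["我", "喜欢", "电影"], ["我", "讨厌", "电影"]])
# words shared by every document get idf = log(2/2) = 0
assert w[0]["我"] == 0.0
# words unique to one document get a positive weight
assert w[0]["喜欢"] > 0
```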
# ## Full corpus: http://files.cnblogs.com/files/pinard/in_the_name_of_people.zip
# ### Word segmentation
# +
import jieba
import jieba.analyse
# change the file location to your own path
# segment the text
with open('in_the_name_of_people/in_the_name_of_people.txt', encoding='utf-8') as f:
    document = f.read()
document_cut = jieba.cut(document)
result = ' '.join(document_cut)
with open('in_the_name_of_people/in_the_name_of_people_segment.txt', 'w', encoding="utf-8") as f2:
    f2.write(result)
# -
# ### Train the word2vec model
# +
from gensim.test.utils import get_tmpfile
from gensim.models import word2vec
# load the corpus
sentences = word2vec.LineSentence('in_the_name_of_people/in_the_name_of_people_segment.txt')
# train on the corpus
path = get_tmpfile("word2vec.model")  # create a temporary file
model = word2vec.Word2Vec(sentences, hs=1, min_count=1, window=10, size=100)
# save / load the model
# model.save("word2vec.model")
# model = word2vec.Word2Vec.load("word2vec.model")
# -
# print the 100 words most similar to '贪污' (corruption)
for key in model.wv.similar_by_word('贪污', topn=100):
    print(key)
# ### Same as above, but with logging enabled
# +
# import modules & set up logging, then train the model
import logging
import os
from gensim.models import word2vec
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
sentences = word2vec.LineSentence('in_the_name_of_people/in_the_name_of_people_segment.txt')
model = word2vec.Word2Vec(sentences, hs=1,min_count=1,window=3,size=100)
# -
# ### Applications of word2vec
# +
# find the set of words closest to a given word vector; here we look at the 5
# three-character words (mostly person names) closest to Secretary Sha (沙瑞金):
req_count = 5
for key in model.wv.similar_by_word('沙瑞金', topn=100):
    if len(key[0]) == 3:
        req_count -= 1
        print(key[0], key[1])
        if req_count == 0:
            break
# -
# the second application is measuring how close two word vectors are; here are the similarities of two pairs of characters from the book:
print(model.wv.similarity('沙瑞金', '高育良'))
print(model.wv.similarity('李达康', '王大路'))
# +
# the third application is finding the word that does not belong with the others; here is a character-classification question:
print(model.wv.doesnt_match(u"沙瑞金 高育良 李达康 刘庆祝".split()))
| nlp/peoplename.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import re
import os
import pickle
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
def predict(example):
    example = list(example)
    X = vect.transform(example)
    prediction = label[clf.predict(X)[0]]
    percentage = np.max(clf.predict_proba(X)) * 100
    return prediction, percentage

def tokenizer(text):
    text = re.sub('<[^>]*>', '', text)
    emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
    text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
    tokenized = [w for w in text.split() if w not in stop]
    return tokenized
cur_dir = os.getcwd()
stop = pickle.load(open(os.path.join(cur_dir,'stopwords.pkl'),'rb'))
vect = HashingVectorizer(decode_error='ignore',n_features=2**21, preprocessor=None,tokenizer=tokenizer)
clf = pickle.load(open(os.path.join(cur_dir, 'classifier.pkl'), 'rb'))
label = {0:'negative', 1:'positive'}
example = ['Tired of sobby melodramas and stupid comedies? Why not watch a film with a difference? American Beauty by <NAME> is both a drama and a comedy, which definitely absorbed the best features of the genres, creating a powerful and mind-boggling cocktail of love, hatred, sinful passion, rebellion, loneliness, fear and total liberation.']
X = vect.transform(example)
print('Prediction: %s \nProbability: %.2f%%' %(label[clf.predict(X)[0]],
np.max(clf.predict_proba(X))*100))
| Vectorizer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''api_book'': venv)'
# language: python
# name: python3
# ---
# # JWT based authentication
#
# In the API world, authentication is the process of verifying a user's identity. In real-world applications, only authenticated users can access the API. Additionally, we may want to track how often a specific user queries an API.
#
# The current gold standard for solving the complex problem of authentication is the `JWT token`.
#
# `JWT` stands for JSON Web Token.
#
# The high level graph of the process:
#
# 
# 1) The user requests a token, sending over his credentials (username and password).
#
# 2) The server checks the credentials and if they are correct, it generates a JWT token. The token gets sent back to the user.
#
# 3) Every time the user makes a request to any of the APIs on a certain server, the request has to include the JWT token. Only the JWT token is used to authenticate the user.
#
# # JWT token
#
# A JWT token is just a string that has three parts separated by dots:
#
# ```
# <header>.<payload>.<signature>
# ```
#
# An example may look like this:
#
# `<KEY>`
#
# That's it: the above string is a JWT token with a lot of information encoded into it. There are many libraries that can be used both to create and to decode JWT tokens. In the subsequent chapters we will use Python implementations of JWT authentication and go through the details of the JWT token system.
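# To make the structure concrete, here is a sketch that builds an HS256 token with only the standard library (an illustration added here, not the implementation used later in this chapter):

```python
import base64, hashlib, hmac, json

def _b64url(data: bytes) -> str:
    # JWT uses base64url encoding without '=' padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    segments = [_b64url(json.dumps(header, separators=(",", ":")).encode()),
                _b64url(json.dumps(payload, separators=(",", ":")).encode())]
    # The signature is an HMAC-SHA256 over "<header>.<payload>"
    signing_input = ".".join(segments).encode()
    signature = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    segments.append(_b64url(signature))
    return ".".join(segments)

token = make_jwt({"username": "demo", "exp": 1700000000}, "my-secret")
assert token.count(".") == 2  # <header>.<payload>.<signature>
```

The `"demo"` username and `"my-secret"` key are placeholders; production libraries such as `pyjwt` handle the same steps plus claim validation.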
#
# # The authentication flow
#
# All the code is in the `jwt-token-example` directory. Be sure to run
#
# ```
# docker-compose up
# ```
#
# to spin up a PostgreSQL server.
#
# Additionally, start the API from the same directory:
#
# ```
# uvicorn app:app --port 8000
# ```
#
# ## Step 1: Requesting a token
#
# ### User registration
#
# In the JWT flow, we still cannot escape the good old username and password combination. We need to store this information somewhere on the server, and every time a user requests a new token we need to check whether the credentials are correct. For this, we need to create an endpoint for user registration and another for token generation. For the same reason, the whole authentication process should ideally be done via HTTPS rather than HTTP. For the purposes of this tutorial we will use HTTP, because the concepts are exactly the same; HTTPS only adds a layer of encryption to the traffic between the user and the server.
#
# The user database table is very straightforward. It contains the username, the password and the date it was created:
# !cat jwt-token-example/models.py
# The endpoint for user creation is `/users/register`. To register we need to send a POST request with the following data:
#
# ```
# {
# "username": <username>,
# "password": <password>
# }
# ```
# +
# Importing the request making lib
import requests
# Making the request to the API to register the user
response = requests.post(
"http://localhost:8000/users/register",
json={"username": "eligijus", "password": "<PASSWORD>"}
)
if response.status_code in [200, 201]:
    print(f"Response: {response.json()}")
# -
# Now that we have a registered user we can start implementing the logic of JWT token creation.
#
# ## Step 2: Creating the JWT token
#
# The library that creates the JWT token is called `pyjwt`. It is a Python library that can be used to create and decode JWT tokens. It is fully compliant with the [JSON Web Token standard](https://tools.ietf.org/html/rfc7519).
#
# The token creation and inspection script is:
# !cat jwt-token-example/jwt_tokens.py
# The logic of creating the token is in the `create_token()` function. Remember the JWT token structure:
#
# ```
# <header>.<payload>.<signature>
# ```
#
# The `header` part encodes the algorithm and type needed to decode the token.
#
# The `payload` part holds the dictionary of claims: the information that gets encoded into the token.
#
# The `signature` part is the signature of the token, which the Python library uses to verify it. The `_SECRET` constant is used to construct the signature; that is why it should be kept only as a runtime variable in an environment where no one can access it.
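# A sketch of the verification side (again plain stdlib, added for illustration rather than showing the `pyjwt` internals): recompute the signature over the first two segments with the secret and compare in constant time:

```python
import base64, hashlib, hmac, json

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def verify_signature(token: str, secret: str) -> bool:
    # Recompute HMAC-SHA256 over "<header>.<payload>" and compare signatures.
    signing_input, _, signature_seg = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    received = base64.urlsafe_b64decode(signature_seg + "=" * (-len(signature_seg) % 4))
    # hmac.compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, received)

# Build a demo token signed with the hypothetical key "my-secret":
segments = [_b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode()),
            _b64url(json.dumps({"username": "demo"}).encode())]
signing_input = ".".join(segments)
segments.append(_b64url(hmac.new(b"my-secret", signing_input.encode(), hashlib.sha256).digest()))
token = ".".join(segments)

assert verify_signature(token, "my-secret")         # correct secret passes
assert not verify_signature(token, "wrong-secret")  # wrong key fails
```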
#
# Let's query the endpoint `/token` using the credentials we used to register the user.
# +
# Making the request to the API to get the token
response = requests.post(
"http://localhost:8000/token",
json={"username": "eligijus", "password": "<PASSWORD>"}
)
# Extracting the token
token = response.json().get('token')
# Printing out the received token
print(f"Token: {token}")
# -
# The above token will be valid for 60 minutes and can be used to make requests to the API. If we make a request with a non-existent user, we will get a `401 Unauthorized` error:
# +
# Making the request to the API to get the token
response = requests.post(
"http://localhost:8000/token",
json={"username": "eligijus", "password": "<PASSWORD>"}
)
# Printing out the status code
print(f"Response code: {response.status_code}")
# -
# ## Step 3: Using the JWT token
#
# Every time a user makes a request to the API, we need to include the JWT token in the request. We will use the `Authorization` header to include the token and send a GET request to our well-known root-calculating API.
# +
# Defining the parameters to send
number = 88
n = 0.88
# Making the request with the token
response = requests.get(
f"http://localhost:8000/root?number={number}&n={n}",
headers={"Authorization": f"{token}"}
)
# Printing out the status code and the result
print(f"Response code: {response.status_code}")
print(f"Root {n} of {number} is: {response.json()}")
# -
# If we use a bad JWT token, the user does not exist in the database, or the token has expired, we will get a `401 Unauthorized` error:
# +
# Making the request with the token
response = requests.get(
f"http://localhost:8000/root?number={number}&n={n}",
headers={"Authorization": "Hello I am a really legit token"}
)
# Printing out the status code and the result
print(f"Response code: {response.status_code}")
print(f"Root {n} of {number} is: {response.json()}")
| api-book/_build/html/_sources/chapter-6-production-tools/JWT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/rajagopal17/spacy-notebooks/blob/master/Glove_Embeddings.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="r5UkzOZPDeUd" colab_type="text"
# ### https://medium.com/analytics-vidhya/basics-of-using-pre-trained-glove-vectors-in-python-d38905f356db
# + id="VnS3RpBl8VXn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="1b18cbc3-5aed-471b-a547-e6962398103b"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.rc('font', size=14)
sns.set(style='white')
sns.set(style='whitegrid', color_codes=True)
import csv
import re
from __future__ import unicode_literals
import spacy
from spacy.tokens import doc
nlp=spacy.load('en')
import en_core_web_sm
#nlp=en_core_web_md.load()
from spacy.lang.en.stop_words import STOP_WORDS
from spacy.lang.en import English
parser = English()
import string
punctuations=string.punctuation
from sklearn import model_selection
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
# + id="xwzZ-GQu84k-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="1ce36db0-6f7d-4130-92ac-2c951f09bfb5"
# !git clone https://github.com/MohammadWasil/Sentiment-Analysis-IMDb-Movie-Review.git
# + id="4IzkCmME8uLZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="4fc44747-eea6-4257-d291-784cf331cd5f"
data_train =pd.read_csv('/content/Sentiment-Analysis-IMDb-Movie-Review/labeledTrainData.tsv', delimiter='\t',encoding="utf-8")
#data_test =('/content/Sentiment-Analysis-IMDb-Movie-Review/testData.tsv')
data_file = data_train[['sentiment','review']].copy()
train_ds = data_file.head(15000).copy()
test_ds = data_file.tail(5000).copy()
train_ds.to_csv('/content/train_ds.csv',index=False)
test_ds.to_csv('/content/test_ds.csv',index=False)
train_ds.head()
# + [markdown] id="XmWHI76Dq5ch" colab_type="text"
# # Convert the column to string, apply nlp pipelines for lemmatization, entities, etc.
#
#
# 1. Convert the column to a list and feed it back into a new data frame
# 2. Get the frequencies and plot them on a graph
#
# + id="hl5nJ0sX-Npx" colab_type="code" colab={}
# Convert each row of the review column to string and strip unwanted characters
temp_file = data_train['review'].apply(lambda x: str(x))
temp_file = temp_file.apply(lambda x: re.sub('[^A-Z 0-9 a-z-]+', '', x))
temp_file
# + id="GH25cwkF-9me" colab_type="code" colab={}
lemma_list=list([token.lemma_ for token in doc if token.is_stop==False] for doc in nlp.pipe(temp_file, n_threads=2,batch_size=1000,disable=['tagger','parser','ner']))
# + id="P96LPwrBDqKS" colab_type="code" colab={}
lemma_names=list([token.text for token in doc if token.pos_=='PROPN'] for doc in nlp.pipe(temp_file,batch_size=1000))
# + id="Np7l1SV-Xtnm" colab_type="code" colab={}
lemma_person=list([token.text for token in doc if token.ent_type_=='PERSON'] for doc in nlp.pipe(temp_file,batch_size=1000))
# + id="Dxgwmrm9Gm50" colab_type="code" colab={}
get_names = []
for x in lemma_person:
    for y in x:
        get_names.append(y)
get_names
# + id="zJmTYGofHgsu" colab_type="code" colab={}
data_clean['lemma']=lemma_list
data_clean['Reviews']=lemma_names
data_clean['names']=lemma_person
# + id="VUzNTmpjH3s8" colab_type="code" colab={}
data_clean.head(20)
# + id="loZ92U2uNIv0" colab_type="code" colab={}
from collections import Counter
word_freq=Counter(get_names)
word_freq
# + id="yQPzLY4lOzoQ" colab_type="code" colab={}
word_freq_graph =pd.DataFrame(list(word_freq.items()),columns=['name','freq'])
word_freq_graph
final_df=word_freq_graph[word_freq_graph['freq']>800 ]
final_df
# + id="55Xj64CRQ4rt" colab_type="code" colab={}
import matplotlib.pyplot as plot
final_df.plot.barh(x='name', y='freq', title="Frequency of the mention of lead character's name");
plot.show(block=True);
# + [markdown] id="gMis6YlKrf7S" colab_type="text"
# # Torch Text
# + id="CNWfhL6srjmJ" colab_type="code" colab={}
from torchtext.data import Field,TabularDataset,BucketIterator
# + id="bk2CUAZs1b5f" colab_type="code" colab={}
#X_train,X_test,y_train,y_test = train_test_split('/content/Sentiment-Analysis-IMDb-Movie-Review/labeledTrainData.tsv')
# + id="l8IoZ2OE6Yc9" colab_type="code" colab={"base_uri": "https://localhost:8080/"} outputId="e4f0edc4-a506-41e7-a8de-0af7feaa9772"
# !git clone https://github.com/AladdinPerzon/Machine-Learning-Collection.git
# + id="zKUy_sPaY42G" colab_type="code" colab={}
def tokenize(text):
    return [token.text for token in nlp(text)]
# + id="Sff1UpVqsUEC" colab_type="code" colab={}
#tokenize= lambda x: x.split()
TEXT = Field(sequential= True,use_vocab=True,tokenize=tokenize,lower=True)
LABEL = Field(sequential=False,use_vocab=False)
fields={'sentiment':LABEL,'review':TEXT}
# + [markdown] id="x26LuCqjuf_8" colab_type="text"
# # Split dataset for train & test
# + id="ypDgBQ3tuhEy" colab_type="code" colab={}
#train_data, test_data= TabularDataset(path='/content/sample_data/test_d.csv',format='csv',fields={'sentiment':sentiment,'review':review})
train = TabularDataset(path='/content/sample_data/train_ds.csv',
format='csv',
fields=[("sentiment",LABEL),
("review",TEXT)],
skip_header=True)
test = TabularDataset(path='/content/sample_data/test_ds.csv',
format='csv',
fields= [("sentiment",LABEL),
("review",TEXT)],
skip_header=True)
# + id="ilvVd5MMUEeX" colab_type="code" colab={}
TEXT.build_vocab(train,max_size=10000,min_freq=2)
# + id="nbTAQIWQXnWu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="4a4f4f86-a15e-4c32-b188-fda95a80d53f"
TEXT.vocab.itos
#TEXT.vocab.stoi['movie']
# + id="-OGzS9g_jn4X" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="aea2ce72-64a0-4b7e-ca28-ac7071ca0bd6"
train_ds.head()
# + [markdown] id="cg3v7Dk5C8MM" colab_type="text"
# # Glove Embeddings
# + id="h2TPQnHh6X0j" colab_type="code" colab={}
X=train_ds[['review']].copy()
y=train_ds[['sentiment']].copy()
Xtrain,Xtest,ytrain,ytest=train_test_split(X,y,train_size=0.7,random_state=32,shuffle=True)
# + id="IOpARSKe7XDA" colab_type="code" colab={}
lines=Xtest['review'].apply(lambda x: str(x))
lines_list=list([token.lemma_.lower() for token in doc if token.is_alpha and token.is_stop==False] for doc in nlp.pipe(lines, batch_size=1000,disable=['tagger','parser','ner']))
lines_list
# + [markdown] id="poBl_2-VEEtA" colab_type="text"
# # How to use standard Glove vectors for finding similarities without Gensim
#
# To load the pre-trained vectors, we must first create a dictionary that will hold the mappings between words, and the embedding vectors of those words.
# + id="0ly5NzFHDi9o" colab_type="code" colab={}
import os
import numpy as np
GLOVE_DIR ='/content/drive/My Drive/Python'
print('Indexing word vectors.')
embeddings_index = {}
f = open(os.path.join(GLOVE_DIR, 'glove.6B.300d.txt'),encoding="utf8")
for line in f:
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
    embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
#print(embeddings_index['banana'])
# + id="TNhaoY-9uWIb" colab_type="code" colab={}
from scipy import spatial
def find_closest_embeddings(embedding):
    return sorted(embeddings_index.keys(),
                  key=lambda word: spatial.distance.euclidean(embeddings_index[word], embedding))
# + id="RAdh10DauWjN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="0fe52a00-7211-4237-f4f9-181ccc744ea2"
find_closest_embeddings(embeddings_index["japanese"])[:10]
# + [markdown] id="tOQYadkz4dWD" colab_type="text"
# # Converting Standard Glove vectors into Gensim- Word2Vec format for finding similarities
# + id="ZKOunvXkGgtm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 241} outputId="a0134088-3583-40eb-ca87-0ee1e7a2ff4a"
# !pip3 install glove_python
#https://medium.com/analytics-vidhya/word-vectorization-using-glove-76919685ee0b
#https://medium.com/analytics-vidhya/basics-of-using-pre-trained-glove-vectors-in-python-d38905f356db
# + id="GT9tced9mU7Y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="8bef0cb4-99b7-4a02-8a92-ed91c46e639d"
from gensim.scripts.glove2word2vec import glove2word2vec
glove2word2vec(glove_input_file='/content/drive/My Drive/Python/glove.6B.300d.txt', word2vec_output_file="/content/drive/My Drive/Python/gensim_glove_vectors.txt")
from gensim.models.keyedvectors import KeyedVectors
model = KeyedVectors.load_word2vec_format("/content/drive/My Drive/Python/gensim_glove_vectors.txt", binary=False)
# + id="-wj5nx9PnEF-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 241} outputId="cb4d762b-d1d7-43eb-a21c-d26ceccdd470"
model.most_similar('japanese')
# + [markdown] id="tOXJSuk4hY70" colab_type="text"
# # Train Glove model on own corpus, find similarities and save the model
#
# https://github.com/alexeygrigorev/avito-duplicates-kaggle/blob/master/prepare_glove_model.py
# + id="jCohbt5NYke7" colab_type="code" colab={}
from time import time
from glove import Corpus, Glove  # provided by the glove_python package installed above

def train_glove(sentences):
    print('training glove model...')
    t0 = time()
    num_features = 300   # Word vector dimensionality
    context = 5          # Context window size
    learning_rate = 0.05
    corpus = Corpus()
    corpus.fit(sentences, window=context)
    glove = Glove(no_components=num_features, learning_rate=learning_rate)
    glove.fit(corpus.matrix, epochs=30, no_threads=8, verbose=True)
    glove.add_dictionary(corpus.dictionary)
    print('took %0.5fs.' % (time() - t0))
    return glove
# + id="iklmg3QPYnXk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 578} outputId="7bfe0343-bd11-49c0-fcce-19116b5a1b10"
gl_model=train_glove(lines_list)
# + id="dfVlPCEXbfkk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="b365687e-2b22-403b-f027-fe32da398b36"
gl_model.most_similar('cast',number=6)
# + id="Li9x8hvvgeKn" colab_type="code" colab={}
gl_model.save('glove_final')
# + id="xGofyKp1g1nI" colab_type="code" colab={}
rmodel=gl_model.load('glove_final')
# + id="WtpjiiRXhBvo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="f8ad274c-99c4-451c-a116-79f7191eb2d1"
rmodel.most_similar('cast',number=6)
| Glove_Embeddings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <font color='gray'> How to get data from the ADPD4000 using gen3_sdk</font>
#
#
# <font color='gray'>Import the gen3_sdk library from its path and import all the dependencies used in the project, such as the serial library for the connection and matplotlib for plotting.</font>
import sys, os, serial, threading, time, struct, glob
if sys.platform == "linux" or sys.platform == "linux2":
    sys.path.append('./../../../bin/linux/python/')
elif sys.platform == "darwin":
    sys.path.append('./../../../bin/macOS/python/')
elif sys.platform == "win32":
    if ((struct.calcsize("P") * 8) == 64):
        sys.path.append('./../../../bin/windows/x64/python/')    # 64-bit Python
    else:
        sys.path.append('./../../../bin/windows/Win32/python/')  # 32-bit Python
elif sys.platform == "win64":
    sys.path.append('./../../../bin/windows/x64/python/')
import gen3_sdk
import matplotlib.pyplot as plt
# Status for the command streams
dict = {0: 'M2M2_APP_COMMON_STATUS_OK',
1: 'M2M2_APP_COMMON_STATUS_ERROR',
2: 'M2M2_APP_COMMON_STATUS_STREAM_STARTED',
3: 'M2M2_APP_COMMON_STATUS_STREAM_STOPPED',
4: 'M2M2_APP_COMMON_STATUS_STREAM_IN_PROGRESS',
5: 'M2M2_APP_COMMON_STATUS_STREAM_DEACTIVATED',
6: 'M2M2_APP_COMMON_STATUS_STREAM_COUNT_DECREMENT',
7: 'M2M2_APP_COMMON_STATUS_STREAM_NOT_STARTED',
8: 'M2M2_APP_COMMON_STATUS_STREAM_NOT_STOPPED',
9: 'M2M2_APP_COMMON_STATUS_SUBSCRIBER_ADDED',
10: 'ADI_SDK_PACKET_TIMED_OUT'}
# # <font color='brown'>TX Callback </font>
#
# The SDK requires the user to provide functions for transmitting data to and receiving data from the hardware.
#
# Transmission is done via this callback function, which the SDK calls when data is to be transmitted to the watch.
#
# The SDK forms the m2m2 packets and calls this API to transmit the data over the physical layer.
#
class serial_tx_cb(gen3_sdk.watch_phy_callback):
def __init__(self, serial_object):
gen3_sdk.watch_phy_callback.__init__(self)
self.serial = serial_object
def call(self, data):
self.serial.write(data)
def sys_alert_call(self,alert):
print("SDK ALERT : {}".format(dict[alert]))
# # <font color='brown'> RX Callback </font>
#
# For reception, a thread is generally used to receive data from the physical layer.
#
# When an m2m2 packet is fully received from the physical layer, it is dispatched to the SDK.
class serial_rx_cb():
def __init__(self, serial_obj):
self.serial = serial_obj
self.thread_run = True
self.thread_rx = True
        self.rx_thread = threading.Thread(target=self.rx_fn)
        self.rx_thread.daemon = True  # setDaemon() is deprecated
        self.rx_thread.start()
def close(self):
self.thread_run = False
self.thread_rx = False
    def rx_fn(self):
        while (self.thread_rx):
            try:
                # read the fixed 8-byte m2m2 header from this instance's port
                hdr = self.serial.read(8)
                if (hdr):
                    # bytes 4-5 of the header carry the total packet length
                    length = (hdr[4] << 8) + (hdr[5])
                    body = self.serial.read(length - 8)
                    pkt = hdr + body
                    active_watch.dispatch(pkt)
            except serial.SerialException as e:
                print(e)
    def thread_start(self):
        self.rx_thread.start()
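The framing that `rx_fn` assumes can be sketched on its own: an 8-byte header whose bytes 4-5 carry the total packet length, followed by `length - 8` body bytes. The field offsets are inferred from `rx_fn`, not from an official m2m2 specification, and the sample bytes below are made up:

```python
# Sketch of the m2m2 framing assumed by rx_fn: 8-byte header, with the
# total packet length stored big-endian in header bytes 4-5, followed by
# (length - 8) body bytes.
def split_m2m2_packet(stream: bytes):
    hdr = stream[:8]
    length = (hdr[4] << 8) + hdr[5]   # total packet length
    body = stream[8:length]
    return hdr, body

# A fabricated 12-byte packet: length field = 0x000C, 4 body bytes
pkt = bytes([0xC1, 0x10, 0xC2, 0x20, 0x00, 0x0C, 0x00, 0x00]) + bytes([1, 2, 3, 4])
hdr, body = split_m2m2_packet(pkt)
```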
# # <font color='brown'> Serial Connection</font>
#
# The user needs to connect the VSM watch to the system for communication.
#
# So here we list the serial ports that are available for connection.
#
# Enter the watch's serial port to establish communication.
#
# +
# Find a list of available serial ports
ser = serial.Serial()
result = []
if sys.platform.startswith('win'):
ports = ['COM%s' % (i + 1) for i in range(256)]
elif sys.platform.startswith('linux') or sys.platform.startswith('cygwin'):
# this excludes your current terminal "/dev/tty"
ports = glob.glob('/dev/tty[A-Za-z]*')
elif sys.platform.startswith('darwin'):
ports = glob.glob('/dev/tty.*')
else:
raise EnvironmentError('Unsupported platform')
for port in ports:
try:
s = serial.Serial(port)
s.close()
result.append(port)
except (OSError, serial.SerialException):
pass
print("Available serial ports are:")
for p in result:
print("==> {}".format(p))
def connect(arg):
try:
        if (ser.isOpen() == True):
            print("Port Already Connected. Please disconnect and try again")
            return
        print('Connecting to Motherboard...')
        ser.baudrate = 921600
ser.port = arg
ser.open()
print('Connected to Motherboard: ' + ser.port)
except serial.serialutil.SerialException as e:
print("Error opening the serial device!")
        print("The port might already be in use, or an incorrect serial device identifier may have been given.")
print("Error was:\n\t{}".format(e))
return
if sys.platform.startswith('win'):
port = input("Enter the port (ex COM30) and press Enter: ")
elif sys.platform.startswith('linux') or sys.platform.startswith('cygwin'):
# this excludes your current terminal "/dev/tty"
port = input("Enter the port (ex /dev/tty.usbserial-DM3HUW9M) and press Enter: ")
elif sys.platform.startswith('darwin'):
port = input("Enter the port (ex /dev/tty.usbserial-DM3HUW9M) and press Enter: ")
else:
raise EnvironmentError('Unsupported platform')
connect(port)
rx_cb = serial_rx_cb(ser)
# -
# # <font color='brown'> Creating SDK instance</font>
#
# Create an instance of the watch class, passing the transmission callback to its constructor.
#
# Then set the watch platform to python.
active_watch = gen3_sdk.watch(serial_tx_cb(ser).__disown__())
active_watch.set_platform(gen3_sdk.watch.python)
# # <font color='brown'> ADPD4000 Stream</font>
#
# The SDK delivers the streams through a callback function.
#
# So before starting the adpd4000 streams, we need to initialize the adpd4000_cb callback to receive the adpd4000 data streams.
# +
#array to store the dark and signal values for ADPD4000 streams
adpd_arr = []
class adpd4000_cb(gen3_sdk.adpd4000_stream_callback):
def __init__(self):
gen3_sdk.adpd4000_stream_callback.__init__(self)
self.seq_num = None
adpd_arr.clear()
    def call(self, data, sequence_num):
        for d in data:
            adpd_s_arr = []
            adpd_d_arr = []
            # both channels are stored the same way
            for y in range(0, d.adpddata_s.size()):
                adpd_s_arr.append(d.adpddata_s[y])
            for y in range(0, d.adpddata_d.size()):
                adpd_d_arr.append(d.adpddata_d[y])
            adpd_arr.append([d.adpd_stream, d.channel_num, adpd_s_arr, adpd_d_arr])
######################################################################################
#Split the adpd4000 stream array to signal, dark arrays separately
#s1 - signal array value for channel 1
#d1 - dark array value for channel 1
#s2 - signal array value for channel 2
#d2 - dark array value for channel 2
#return the values based on the input of source
def deserialize_adpd4000(source):
s1 = []
d1 = []
s2 = []
d2 = []
for x in adpd_arr:
if x[0] == source:
if (x[1] == 1):
s1 += x[2]
d1 += x[3]
else:
s2 += x[2]
d2 += x[3]
return s1,d1,s2,d2
# -
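A standalone illustration of the splitting logic in `deserialize_adpd4000`, run on synthetic entries with the same `[stream, channel, signal, dark]` layout used above (`deserialize_demo` and the sample values are hypothetical):

```python
# Synthetic entries in the same layout adpd4000_cb appends:
# [adpd_stream, channel_num, signal_list, dark_list]
adpd_arr_demo = [
    [1, 1, [10, 11], [1, 2]],   # stream 1, channel 1
    [1, 2, [20, 21], [3, 4]],   # stream 1, channel 2
    [2, 1, [99], [9]],          # stream 2: filtered out for source=1
]

def deserialize_demo(source, arr):
    s1, d1, s2, d2 = [], [], [], []
    for stream, channel, sig, dark in arr:
        if stream == source:
            if channel == 1:
                s1 += sig
                d1 += dark
            else:
                s2 += sig
                d2 += dark
    return s1, d1, s2, d2

s1, d1, s2, d2 = deserialize_demo(1, adpd_arr_demo)
```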
# # <font color='brown'> Load configuration</font>
#
# Loading the configuration for the adpd4000 from a dcfg file.
#
# Read each register address and value pair from the file.
#
# Then write the address and value through the SDK.
# +
def LoadCfg(filename):
    line_number = 0
    with open(filename) as file:
        for line in file:
            line_number = line_number + 1
            line = line.strip()
            # drop a trailing '#' comment if present; calling find() blindly
            # would return -1 when absent and chop the last character
            if '#' in line:
                line = line[:line.find('#')]
            if (len(line) != 0):
                reg_addr_pair = line.split(' ')
                if (len(reg_addr_pair) < 2):
                    print("Error reading config file: {0}".format(filename))
                    raise Exception('LoadCfg: line {}: {}'.format(line_number, line))
                reg_op = []
                reg_op.append([int(reg_addr_pair[0], 16), int(reg_addr_pair[1], 16)])
                ret = active_watch.adpd4000_app.register_write(reg_op)
                # Uncomment the lines below to print the address and value written by the SDK
                # for i in ret:
                #     print("address: {} value: {}".format(hex(i[0]), hex(i[1])))
# -
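The per-line parsing that `LoadCfg` performs can be sketched in isolation: strip whitespace, drop any trailing `#` comment, then read a hex `address value` pair. The file format here is inferred from the loader above, and `parse_dcfg_line` is an illustrative helper, not an SDK API:

```python
# Parse one dcfg line into a (register_address, value) pair of ints,
# or None for blank/comment-only lines.
def parse_dcfg_line(line):
    line = line.split('#', 1)[0].strip()  # discard comments and whitespace
    if not line:
        return None
    addr, value = line.split()[:2]
    return int(addr, 16), int(value, 16)

pair = parse_dcfg_line('0x000F 0x8000  # sample rate')
```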
# # <font color='brown'> Commands to start the adpd4000 stream.</font>
#
# There are two command structures for starting any stream:
#
# 1) L1 commands - Each L1 command is mapped to an equivalent M2M2 command
#
# Ex :<font color='olive'>active_watch.adpd4000_app.adpd4000_stream1.subscribe(adpd4000_cb().__disown__())
#
# active_watch.adpd4000_app.adpd4000_stream1.start() </font>
#
# 2) L2 commands - L2 commands encapsulate a group of L1 commands to make the API easier to use.
#
# Ex :<font color='olive'> active_watch.start_adpd4000(adpd4000_cb().__disown__()) </font>
#
if(ser.isOpen()== True):
LoadCfg('config/ADPD4000_defaultA.DCFG')
active_watch.start_adpd4000(adpd4000_cb().__disown__())
print("ADPD4000 stream started\nWait for the samples to be collected")
time.sleep(5)
active_watch.stop_adpd4000()
print("ADPD4000 stream stopped")
else:
print("Serial port is not connected")
# # <font color='brown'> Plotting ADPD4000 stream</font>
#
# Plot the dark and signal for the ADPD4000 stream 1.
# +
#############################################################
############## Plotting for stream1 ####################
#############################################################
stream1_data = deserialize_adpd4000(1)
# sample indices for the x-axis of each trace
samples_s1 = list(range(len(stream1_data[0])))
samples_s2 = list(range(len(stream1_data[2])))
samples_d1 = list(range(len(stream1_data[1])))
samples_d2 = list(range(len(stream1_data[3])))
#Plotting for adpd4000 dark
plt.plot(samples_d1,stream1_data[1],label = "adpd_d1", color = "blue")
plt.plot(samples_d2,stream1_data[3],label = "adpd_d2", color = "purple")
#Plotting for adpd4000 signal
plt.plot(samples_s1,stream1_data[0],label = "adpd_s1", color = "red")
plt.plot(samples_s2,stream1_data[2],label = "adpd_s2", color = "orange")
#############################################################
############## Plotting for stream2 ####################
#############################################################
# stream2_data = deserialize_adpd4000(2)
# samples_s1 = []
# for index in range(len(stream2_data[0])):
# samples_s1.append(index)
# samples_s2 = []
# for index in range(len(stream2_data[2])):
# samples_s2.append(index)
# samples_d1 = []
# for index in range(len(stream2_data[1])):
# samples_d1.append(index)
# samples_d2 = []
# for index in range(len(stream2_data[3])):
# samples_d2.append(index)
# # Plotting for adpd4000 dark
# plt.plot(samples_d1,stream2_data[1],label = "stream2_d1", color = "olive")
# plt.plot(samples_d2,stream2_data[3],label = "stream2_d2", color = "gray")
# # Plotting for adpd4000 signal
# plt.plot(samples_s1,stream2_data[0],label = "stream2_s1", color = "green")
# plt.plot(samples_s2,stream2_data[2],label = "stream2_s2", color = "brown")
######################################################################
#Set graph properties
plt.title('ADPD4000 Stream')
plt.ylabel('Amplitude')
plt.xlabel('Samples')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
# -
# # <font color='brown'> Disconnect</font>
#
# Disconnect the serial port and stop the receiver thread.
rx_cb.close()
ser.close()
| M4_Eval_SDK/Source/samples/python/jupiterNotebook/adpd_4000.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.047501, "end_time": "2021-11-09T02:39:17.429121", "exception": false, "start_time": "2021-11-09T02:39:17.381620", "status": "completed"} tags=[]
# **This notebook relates to a Gitcoin Bounty for PARSIQ. Here is the project repository: https://github.com/Pfed-prog/PARSIQ-AXS**
# + [markdown] papermill={"duration": 0.046381, "end_time": "2021-11-09T02:39:17.527587", "exception": false, "start_time": "2021-11-09T02:39:17.481206", "status": "completed"} tags=[]
# 
# + [markdown] papermill={"duration": 0.051718, "end_time": "2021-11-09T02:39:17.624876", "exception": false, "start_time": "2021-11-09T02:39:17.573158", "status": "completed"} tags=[]
# ## Import Modules
# + papermill={"duration": 0.060067, "end_time": "2021-11-09T02:39:17.732467", "exception": false, "start_time": "2021-11-09T02:39:17.672400", "status": "completed"} tags=[]
import pandas as pd
import matplotlib.pyplot as plt
# + [markdown] papermill={"duration": 0.048666, "end_time": "2021-11-09T02:39:17.829516", "exception": false, "start_time": "2021-11-09T02:39:17.780850", "status": "completed"} tags=[]
# ## Reading Data
# + papermill={"duration": 0.104957, "end_time": "2021-11-09T02:39:17.981794", "exception": false, "start_time": "2021-11-09T02:39:17.876837", "status": "completed"} tags=[]
df_eth = pd.read_csv("/kaggle/input/parsiq/ERC20AXS.csv").replace('0x1a2a1c938ce3ec39b6d47113c7955baa9dd454f2', 'Ronin').replace('0x28c6c06298d514db089934071355e5743bf21d60', 'Binance')
df_bsc = pd.read_csv("../input/parsiq/BinanceSmartChain.csv").replace('0x8894e0a0c962cb723c1976a4421c95949be2d4e3', 'Binance6').replace('0xe2fc31f816a9b94326492132018c3aecc4a93ae1', 'Binance7')
# + [markdown] papermill={"duration": 0.049921, "end_time": "2021-11-09T02:39:18.085185", "exception": false, "start_time": "2021-11-09T02:39:18.035264", "status": "completed"} tags=[]
# ## Inspect Datasets
# + papermill={"duration": 0.080308, "end_time": "2021-11-09T02:39:18.277508", "exception": false, "start_time": "2021-11-09T02:39:18.197200", "status": "completed"} tags=[]
df_eth.info()
# + papermill={"duration": 0.08124, "end_time": "2021-11-09T02:39:18.406834", "exception": false, "start_time": "2021-11-09T02:39:18.325594", "status": "completed"} tags=[]
df_eth.head()
# + papermill={"duration": 0.068699, "end_time": "2021-11-09T02:39:18.524894", "exception": false, "start_time": "2021-11-09T02:39:18.456195", "status": "completed"} tags=[]
df_bsc.info()
# + papermill={"duration": 0.076235, "end_time": "2021-11-09T02:39:18.647647", "exception": false, "start_time": "2021-11-09T02:39:18.571412", "status": "completed"} tags=[]
df_bsc.head(2)
# + papermill={"duration": 0.346072, "end_time": "2021-11-09T02:39:19.043374", "exception": false, "start_time": "2021-11-09T02:39:18.697302", "status": "completed"} tags=[]
df_eth[df_eth['to']== 'Binance'].set_index('block_timestamp')['value'].plot(figsize=(10,6))
df_eth[df_eth['to']== 'Ronin'].set_index('block_timestamp')['value'].plot(figsize=(10,6))
plt.xticks(fontsize=6)
plt.show()
# + papermill={"duration": 0.320293, "end_time": "2021-11-09T02:39:19.413815", "exception": false, "start_time": "2021-11-09T02:39:19.093522", "status": "completed"} tags=[]
df_eth[df_eth['from']== 'Binance'].set_index('block_timestamp')['value'].plot(figsize=(10,6))
df_eth[df_eth['from']== 'Ronin'].set_index('block_timestamp')['value'].plot(figsize=(10,6))
plt.xticks(fontsize=6)
plt.show()
# + [markdown] papermill={"duration": 0.049884, "end_time": "2021-11-09T02:39:19.516676", "exception": false, "start_time": "2021-11-09T02:39:19.466792", "status": "completed"} tags=[]
# ## Transforming the Data
# + papermill={"duration": 0.058795, "end_time": "2021-11-09T02:39:19.624525", "exception": false, "start_time": "2021-11-09T02:39:19.565730", "status": "completed"} tags=[]
# Divide by token type
df_eth_axs = df_eth[df_eth.symbol=='AXS']
df_eth_slp = df_eth[df_eth.symbol=='SLP']
# + [markdown] papermill={"duration": 0.051219, "end_time": "2021-11-09T02:39:19.728536", "exception": false, "start_time": "2021-11-09T02:39:19.677317", "status": "completed"} tags=[]
# # Analysis. AXS on Ethereum
# + papermill={"duration": 0.059864, "end_time": "2021-11-09T02:39:19.839370", "exception": false, "start_time": "2021-11-09T02:39:19.779506", "status": "completed"} tags=[]
len(df_eth_axs)
# + papermill={"duration": 0.060036, "end_time": "2021-11-09T02:39:19.949576", "exception": false, "start_time": "2021-11-09T02:39:19.889540", "status": "completed"} tags=[]
df_eth_axs.to.value_counts()
# + papermill={"duration": 0.060975, "end_time": "2021-11-09T02:39:20.059684", "exception": false, "start_time": "2021-11-09T02:39:19.998709", "status": "completed"} tags=[]
df_eth_axs['from'].value_counts()
# + papermill={"duration": 0.060884, "end_time": "2021-11-09T02:39:20.170604", "exception": false, "start_time": "2021-11-09T02:39:20.109720", "status": "completed"} tags=[]
df_eth_axs['to'].value_counts()
# + papermill={"duration": 0.068362, "end_time": "2021-11-09T02:39:20.288950", "exception": false, "start_time": "2021-11-09T02:39:20.220588", "status": "completed"} tags=[]
eth_axs_counts_from = df_eth_axs['from'].value_counts().reset_index(name="count").query("count > 10").set_index('index')
eth_axs_counts_to = df_eth_axs['to'].value_counts().reset_index(name="count").query("count > 10").set_index('index')
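The count-then-filter pattern above, shown on a tiny synthetic Series. Note that recent pandas releases changed the column naming around `value_counts()`/`reset_index()`, so the exact `reset_index(name=...)` call may need adjusting for your pandas version; filtering the counts Series directly sidesteps that:

```python
import pandas as pd

# Synthetic transaction endpoints standing in for the 'from'/'to' columns
s = pd.Series(['Ronin', 'Ronin', 'Ronin', 'Binance', 'Binance', 'other'])
counts = s.value_counts()          # per-address transaction counts
frequent = counts[counts > 1]      # keep addresses with more than 1 tx
```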
# + [markdown] papermill={"duration": 0.049341, "end_time": "2021-11-09T02:39:20.388963", "exception": false, "start_time": "2021-11-09T02:39:20.339622", "status": "completed"} tags=[]
# ### Countplot of transactions
# + papermill={"duration": 0.30161, "end_time": "2021-11-09T02:39:20.741185", "exception": false, "start_time": "2021-11-09T02:39:20.439575", "status": "completed"} tags=[]
eth_axs_counts_from.plot(kind='bar')
plt.xlabel("Type", labelpad=14)
plt.ylabel("Count of transactions", labelpad=14)
plt.title("Count of AXS transactions from an address on Ethereum", y=1.02)
plt.savefig('eth_axs_counts_from.png');
# + papermill={"duration": 0.26053, "end_time": "2021-11-09T02:39:21.053047", "exception": false, "start_time": "2021-11-09T02:39:20.792517", "status": "completed"} tags=[]
eth_axs_counts_to.plot(kind='bar')
plt.xlabel("Type", labelpad=14)
plt.ylabel("Count of transactions", labelpad=14)
plt.title("Count of AXS transactions to an address on Ethereum", y=1.02)
plt.savefig('eth_axs_counts_to.png');
# + [markdown] papermill={"duration": 0.053213, "end_time": "2021-11-09T02:39:21.160028", "exception": false, "start_time": "2021-11-09T02:39:21.106815", "status": "completed"} tags=[]
# A great sign for AXS: there are more transactions to the Ronin bridge than to Binance, and more transactions to Ronin than from Ronin.
# + [markdown] papermill={"duration": 0.052355, "end_time": "2021-11-09T02:39:21.264651", "exception": false, "start_time": "2021-11-09T02:39:21.212296", "status": "completed"} tags=[]
# ### Inspecting the value of AXS tokens
# + papermill={"duration": 0.095369, "end_time": "2021-11-09T02:39:21.412599", "exception": false, "start_time": "2021-11-09T02:39:21.317230", "status": "completed"} tags=[]
df_eth_axs.sort_values(by=['value']).tail(25)
# + papermill={"duration": 0.065231, "end_time": "2021-11-09T02:39:21.532485", "exception": false, "start_time": "2021-11-09T02:39:21.467254", "status": "completed"} tags=[]
df_eth_axs.groupby(['from'])['value'].sum().sort_values()
# + papermill={"duration": 0.065781, "end_time": "2021-11-09T02:39:21.652246", "exception": false, "start_time": "2021-11-09T02:39:21.586465", "status": "completed"} tags=[]
df_eth_axs.groupby(['to'])['value'].sum().sort_values()
# + [markdown] papermill={"duration": 0.054322, "end_time": "2021-11-09T02:39:21.760964", "exception": false, "start_time": "2021-11-09T02:39:21.706642", "status": "completed"} tags=[]
# # Calculating flows
# + papermill={"duration": 0.061983, "end_time": "2021-11-09T02:39:21.876786", "exception": false, "start_time": "2021-11-09T02:39:21.814803", "status": "completed"} tags=[]
# total sum of flow values
df_eth_axs['value'].sum()
# + papermill={"duration": 0.065917, "end_time": "2021-11-09T02:39:21.996629", "exception": false, "start_time": "2021-11-09T02:39:21.930712", "status": "completed"} tags=[]
df_eth_axs.groupby(['from'])['value'].sum().sort_values()
# + papermill={"duration": 0.064197, "end_time": "2021-11-09T02:39:22.118320", "exception": false, "start_time": "2021-11-09T02:39:22.054123", "status": "completed"} tags=[]
AXS_from_Ronin = df_eth_axs.groupby(['from'])['value'].sum().sort_values()['Ronin']
AXS_to_Ronin = df_eth_axs.groupby(['to'])['value'].sum().sort_values()['Ronin']
# + papermill={"duration": 0.078646, "end_time": "2021-11-09T02:39:22.251997", "exception": false, "start_time": "2021-11-09T02:39:22.173351", "status": "completed"} tags=[]
df_eth_axs.groupby(['to'])[['value', 'gas_used']].sum().sort_values(by=['value'])
# + papermill={"duration": 0.065946, "end_time": "2021-11-09T02:39:22.376360", "exception": false, "start_time": "2021-11-09T02:39:22.310414", "status": "completed"} tags=[]
AXS_from_Binance = df_eth_axs.groupby(['from'])['value'].sum().sort_values()['Binance']
AXS_to_Binance = df_eth_axs.groupby(['to'])['value'].sum().sort_values()['Binance']
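The flow bookkeeping above can be condensed into a net flow per address: sum transfer values grouped by sender and by receiver, then subtract. The data below is synthetic; only the column names (`from`, `to`, `value`) mirror the notebook:

```python
import pandas as pd

tx = pd.DataFrame({
    'from':  ['Ronin', 'Binance', 'Ronin'],
    'to':    ['Binance', 'Ronin', 'Binance'],
    'value': [100.0, 40.0, 10.0],
})
outflow = tx.groupby('from')['value'].sum()   # value sent per address
inflow = tx.groupby('to')['value'].sum()      # value received per address
net = inflow.sub(outflow, fill_value=0)       # positive => net receiver
```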
# + papermill={"duration": 0.223853, "end_time": "2021-11-09T02:39:22.656594", "exception": false, "start_time": "2021-11-09T02:39:22.432741", "status": "completed"} tags=[]
df_eth_axs[df_eth_axs['to']== 'Binance'].set_index('block_timestamp')['value'].plot(figsize=(10,6))
#df_eth_axs[df_eth_axs['to']== 'Ronin'].set_index('block_timestamp')['value'].plot(figsize=(10,6))
plt.xticks(fontsize=8)
plt.show()
# + papermill={"duration": 0.071247, "end_time": "2021-11-09T02:39:22.785211", "exception": false, "start_time": "2021-11-09T02:39:22.713964", "status": "completed"} tags=[]
df_eth_axs[df_eth_axs['to']== 'Ronin'].set_index('block_timestamp')['value']
# + papermill={"duration": 0.231013, "end_time": "2021-11-09T02:39:23.073515", "exception": false, "start_time": "2021-11-09T02:39:22.842502", "status": "completed"} tags=[]
df_eth_axs[df_eth_axs['from']== 'Binance'].set_index('block_timestamp')['value'].plot(figsize=(10,6))
df_eth_axs[df_eth_axs['from']== 'Ronin'].set_index('block_timestamp')['value'].plot(figsize=(10,6))
plt.xticks(fontsize=6)
plt.show()
# + papermill={"duration": 0.303873, "end_time": "2021-11-09T02:39:23.436319", "exception": false, "start_time": "2021-11-09T02:39:23.132446", "status": "completed"} tags=[]
df_eth[df_eth['from']== 'Binance'].set_index('block_timestamp')['value'].plot(figsize=(10,6))
df_eth[df_eth['from']== 'Ronin'].set_index('block_timestamp')['value'].plot(figsize=(10,6))
plt.xticks(fontsize=6)
plt.show()
# + papermill={"duration": 0.291225, "end_time": "2021-11-09T02:39:23.787305", "exception": false, "start_time": "2021-11-09T02:39:23.496080", "status": "completed"} tags=[]
df = pd.DataFrame({'Group 1': {'from Ronin': AXS_from_Ronin, 'to Ronin': AXS_to_Ronin, 'from Binance': AXS_from_Binance, 'to Binance': AXS_to_Binance}})
width = 0.8
bottom = 0
for i in df.columns:
plt.bar(df.index, df[i], width=width, bottom=bottom)
bottom += df[i]
plt.ylabel('Value')
plt.xticks(fontsize=8)
plt.tight_layout()
# + [markdown] papermill={"duration": 0.059926, "end_time": "2021-11-09T02:39:23.906951", "exception": false, "start_time": "2021-11-09T02:39:23.847025", "status": "completed"} tags=[]
# # AXS on Binance Smart Chain
# + papermill={"duration": 0.07835, "end_time": "2021-11-09T02:39:24.046686", "exception": false, "start_time": "2021-11-09T02:39:23.968336", "status": "completed"} tags=[]
bsc_axs_counts_from = df_bsc['from'].value_counts().reset_index(name="count").query("count > 2").set_index('index')
bsc_axs_counts_to = df_bsc['to'].value_counts().reset_index(name="count").query("count > 2").set_index('index')
# + papermill={"duration": 0.069354, "end_time": "2021-11-09T02:39:24.177695", "exception": false, "start_time": "2021-11-09T02:39:24.108341", "status": "completed"} tags=[]
len(df_bsc)
# + papermill={"duration": 0.072431, "end_time": "2021-11-09T02:39:24.310951", "exception": false, "start_time": "2021-11-09T02:39:24.238520", "status": "completed"} tags=[]
bsc_axs_counts_to
# + papermill={"duration": 0.072067, "end_time": "2021-11-09T02:39:24.443440", "exception": false, "start_time": "2021-11-09T02:39:24.371373", "status": "completed"} tags=[]
bsc_axs_counts_from
# + papermill={"duration": 0.264684, "end_time": "2021-11-09T02:39:24.769289", "exception": false, "start_time": "2021-11-09T02:39:24.504605", "status": "completed"} tags=[]
df = pd.DataFrame({'Group 1': {'outflows':66, 'inflows':26}, 'Go': {'outflows': 65}, })
width = 0.8
bottom = 0
for i in df.columns:
plt.bar(df.index, df[i], width=width, bottom=bottom)
bottom += df[i]
plt.ylabel('Count')
plt.xticks(fontsize=8)
plt.tight_layout()
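The stacked-bar loop above works by drawing each column at the running `bottom`, so after the loop `bottom` equals the per-row total. A numeric view of that accumulation, without the plotting calls (the values mirror the hard-coded counts above; missing cells are filled with 0 for the arithmetic):

```python
import pandas as pd

df_demo = pd.DataFrame({'Group 1': {'outflows': 66, 'inflows': 26},
                        'Go': {'outflows': 65}}).fillna(0)
bottom = 0
for col in df_demo.columns:
    bottom = bottom + df_demo[col]   # where the next segment would start
```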
# + papermill={"duration": 0.074451, "end_time": "2021-11-09T02:39:24.905469", "exception": false, "start_time": "2021-11-09T02:39:24.831018", "status": "completed"} tags=[]
AXS_from_6_bsc = df_bsc.groupby(['from'])['block.header.nonce'].sum().sort_values()['Binance6']
AXS_from_7_bsc = df_bsc.groupby(['from'])['block.header.nonce'].sum().sort_values()['Binance7']
AXS_to_6_bsc = df_bsc.groupby(['to'])['block.header.nonce'].sum().sort_values()['Binance6']
# + papermill={"duration": 0.076835, "end_time": "2021-11-09T02:39:25.045843", "exception": false, "start_time": "2021-11-09T02:39:24.969008", "status": "completed"} tags=[]
df_bsc.groupby(['from'])['block.header.nonce'].sum().sort_values()
# + papermill={"duration": 0.076948, "end_time": "2021-11-09T02:39:25.187278", "exception": false, "start_time": "2021-11-09T02:39:25.110330", "status": "completed"} tags=[]
df_bsc.groupby(['to'])['block.header.nonce'].sum().sort_values()
# + papermill={"duration": 0.369684, "end_time": "2021-11-09T02:39:25.619514", "exception": false, "start_time": "2021-11-09T02:39:25.249830", "status": "completed"} tags=[]
df = pd.DataFrame({'Group 1': {'outflows':AXS_from_6_bsc, 'inflows':AXS_to_6_bsc}, 'Go': {'outflows': AXS_from_7_bsc}, })
width = 0.8
bottom = 0
for i in df.columns:
plt.bar(df.index, df[i], width=width, bottom=bottom)
bottom += df[i]
plt.ylabel('Value')
plt.xticks(fontsize=8)
plt.tight_layout()
# + [markdown] papermill={"duration": 0.063472, "end_time": "2021-11-09T02:39:25.745706", "exception": false, "start_time": "2021-11-09T02:39:25.682234", "status": "completed"} tags=[]
# ### Comparing BSC to Ethereum
# + papermill={"duration": 0.312065, "end_time": "2021-11-09T02:39:26.120845", "exception": false, "start_time": "2021-11-09T02:39:25.808780", "status": "completed"} tags=[]
df = pd.DataFrame({'Group 1': {'BSC outflows':AXS_from_6_bsc, 'BSC inflows':AXS_to_6_bsc,'ETH outflows': AXS_from_Binance, 'ETH inflows': AXS_to_Binance }, 'Go': {'BSC outflows': AXS_from_7_bsc}, 'Group3':{}})
#df = pd.DataFrame({'Group 1': {'from Ronin': AXS_from_Ronin, 'to Ronin': AXS_to_Ronin, 'from Binance': AXS_from_Binance, 'to Binance': AXS_to_Binance}})
width = 0.8
bottom = 0
for i in df.columns:
plt.bar(df.index, df[i], width=width, bottom=bottom)
bottom += df[i]
plt.ylabel('Value')
plt.xticks(fontsize=8)
plt.tight_layout()
# + [markdown] papermill={"duration": 0.063661, "end_time": "2021-11-09T02:39:26.375979", "exception": false, "start_time": "2021-11-09T02:39:26.312318", "status": "completed"} tags=[]
# ## getScore
# + papermill={"duration": 0.082697, "end_time": "2021-11-09T02:39:26.522534", "exception": false, "start_time": "2021-11-09T02:39:26.439837", "status": "completed"} tags=[]
df_eth[df_eth['from']== 'Ronin'].mean(numeric_only=True)
# + papermill={"duration": 0.078116, "end_time": "2021-11-09T02:39:26.666177", "exception": false, "start_time": "2021-11-09T02:39:26.588061", "status": "completed"} tags=[]
df_eth[df_eth['from']== 'Binance'].mean(numeric_only=True)
# + papermill={"duration": 0.079985, "end_time": "2021-11-09T02:39:26.812044", "exception": false, "start_time": "2021-11-09T02:39:26.732059", "status": "completed"} tags=[]
df_eth[df_eth['to']== 'Ronin'].mean(numeric_only=True)
# + papermill={"duration": 0.080951, "end_time": "2021-11-09T02:39:26.957783", "exception": false, "start_time": "2021-11-09T02:39:26.876832", "status": "completed"} tags=[]
df_eth[df_eth['to']== 'Binance'].mean(numeric_only=True)
# + papermill={"duration": 0.066157, "end_time": "2021-11-09T02:39:27.090662", "exception": false, "start_time": "2021-11-09T02:39:27.024505", "status": "completed"} tags=[]
| parsiq-axs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/iamdsc/text_biasness_detection/blob/master/text_biasness.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="LFjMaqbFKV9Y" colab_type="code" colab={}
!pip install allennlp
# + id="hEPzzox7KRfi" colab_type="code" colab={}
!pip install flair
# + id="pG19CQ6bj5MR" colab_type="code" colab={}
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_hub as hub
from keras import backend as K
from keras import initializers, regularizers, constraints
import keras.layers as layers
from keras.models import Model, Sequential, load_model
from keras.engine.topology import Layer
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import plot_model
from keras.callbacks import ModelCheckpoint
from flair.data import Sentence
from flair.models import SequenceTagger
from flair.embeddings import ELMoEmbeddings
# + [markdown] id="Y2LpgCqsELDi" colab_type="text"
# ## Loading the data
# + id="MCBgXwMZkXOj" colab_type="code" colab={}
def read_data_from_txt(filename, label):
data = {}
data['sentence'] = []
data['label'] = []
with open(filename, 'r') as f:
        for line in f:
            data['sentence'].append(line.strip())  # drop the trailing newline
            data['label'].append(label)
data_df = pd.DataFrame(data)
return data_df
# + id="20n2smwNkzwz" colab_type="code" outputId="cb206f4c-8e90-425b-faa9-d53f91e49ad3" colab={"base_uri": "https://localhost:8080/", "height": 206}
biased_data = read_data_from_txt('biased.txt', 1)
biased_data.head()
# + id="sthHg9IonjIH" colab_type="code" outputId="175bbef5-f42e-4f2b-b307-39eef8f69b19" colab={"base_uri": "https://localhost:8080/", "height": 35}
biased_data.shape
# + id="CrupBdLink8O" colab_type="code" outputId="7d871537-b898-4a0b-987e-9ff274233234" colab={"base_uri": "https://localhost:8080/", "height": 206}
neutral_hard_data = read_data_from_txt('neutral_cw-hard.txt', 0)
neutral_hard_data.head()
# + id="4FEf4WKin3Qe" colab_type="code" outputId="ac49fa6c-acaf-4293-f031-d8e1b7078dbf" colab={"base_uri": "https://localhost:8080/", "height": 35}
neutral_hard_data.shape
# + id="f_MdNvl6oA_Y" colab_type="code" outputId="ec30aa30-c46c-4f69-b955-1272ad8e21f8" colab={"base_uri": "https://localhost:8080/", "height": 206}
neutral_featured_data = read_data_from_txt('neutral_featured.txt', 0)
neutral_featured_data.head()
# + id="jfF-3ixZoawm" colab_type="code" outputId="7633c2fd-435c-42ca-99fc-e6d59dbe8797" colab={"base_uri": "https://localhost:8080/", "height": 35}
neutral_featured_data.shape
# + id="yzFEix6nofj3" colab_type="code" outputId="c82b8431-5fbb-4f53-dd7d-f83fcc07b019" colab={"base_uri": "https://localhost:8080/", "height": 206}
neutral_type_balanced_data = read_data_from_txt('neutral_type_balanced.txt', 0)
neutral_type_balanced_data.head()
# + id="1ZqAYnaoptCR" colab_type="code" outputId="d497b0e6-19d8-4370-fd6b-d629b59d1c40" colab={"base_uri": "https://localhost:8080/", "height": 35}
neutral_type_balanced_data.shape
# + [markdown] id="fNAcK1WxqABg" colab_type="text"
# ### Number of Samples:
# 1. Biased Data: 1843
# 2. Neutral CW-Hard: 3109
# 3. Neutral Featured: 5000
# 4. Neutral Type-balanced: 1994
#
#
# + id="OXxQqUcRpywW" colab_type="code" outputId="0805d56a-41f4-41c4-b958-77cf0224604f" colab={"base_uri": "https://localhost:8080/", "height": 35}
# taking equal number of neutral sentences
neutral_type_balanced_data = neutral_type_balanced_data[:len(biased_data)]
neutral_type_balanced_data.shape
# + id="m9rrh4tfb67F" colab_type="code" colab={}
# concatenating and shuffling the data
text_data = pd.concat([biased_data, neutral_type_balanced_data]).sample(frac=1).reset_index(drop=True)
# + id="OX3MnrPtdPGp" colab_type="code" outputId="90a49f9e-db75-453e-9288-b8331a8c352c" colab={"base_uri": "https://localhost:8080/", "height": 35}
text_data.shape
# + id="NibhnbWQdRMV" colab_type="code" outputId="0e187d01-2ce3-4f0b-e0bb-78266377bd1f" colab={"base_uri": "https://localhost:8080/", "height": 206}
text_data.head()
# + [markdown] id="M-2v3oTEExeJ" colab_type="text"
# ## Defining helper functions
# + id="mcXDCv3I3oT1" colab_type="code" colab={}
# to compute fmeasure as custom metric
def f1(y_true, y_pred):
def recall(y_true, y_pred):
"""Recall metric.
Only computes a batch-wise average of recall.
Computes the recall, a metric for multi-label classification of
how many relevant items are selected.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision(y_true, y_pred):
"""Precision metric.
Only computes a batch-wise average of precision.
Computes the precision, a metric for multi-label classification of
how many selected items are relevant.
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
precision = precision(y_true, y_pred)
recall = recall(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
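As a sanity check, the same batch-wise F1 (clip, round, then harmonic mean; the epsilon is omitted here since the denominators are nonzero) can be evaluated with plain NumPy on a small batch:

```python
import numpy as np

def f1_np(y_true, y_pred):
    # mirror the Keras metric: clip predictions to [0, 1] and round to {0, 1}
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.round(np.clip(np.asarray(y_pred, dtype=float), 0, 1))
    tp = np.sum(y_true * y_pred)
    precision = tp / np.sum(y_pred)
    recall = tp / np.sum(y_true)
    return 2 * precision * recall / (precision + recall)

# 1 true positive, 1 false positive, 1 false negative
score = f1_np([1, 1, 0, 0], [0.9, 0.2, 0.1, 0.8])
```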
# + id="4wOwQcfI4H7j" colab_type="code" colab={}
# helper function to plot the results
def plot_result(history):
    acc = history.history['acc']
    val_acc = history.history['val_acc']
    loss = history.history['loss']
    val_loss = history.history['val_loss']
    f1 = history.history['f1']
    val_f1 = history.history['val_f1']
    epochs = range(1, len(acc)+1)

    plt.plot(epochs, acc, label='Training acc')
    plt.plot(epochs, val_acc, label='Validation acc')
    plt.title('Training and validation accuracy')
    plt.xlabel('epochs')
    plt.ylabel('acc')
    plt.legend()
    plt.figure()

    plt.plot(epochs, loss, label='Training loss')
    plt.plot(epochs, val_loss, label='Validation loss')
    plt.title('Training and validation loss')
    plt.xlabel('epochs')
    plt.ylabel('loss')
    plt.legend()
    plt.figure()

    plt.plot(epochs, f1, label='Training fmeasure')
    plt.plot(epochs, val_f1, label='Validation fmeasure')
    plt.title('Training and validation fmeasure')
    plt.xlabel('epochs')
    plt.ylabel('f1')
    plt.legend()
    plt.show()
# + id="ZmhXGtB0E4bX" colab_type="code" colab={}
maxlen = 50
# + id="0PcFulj23e7S" colab_type="code" colab={}
# Create dataset taking max 50 words in each sentence
data = text_data['sentence'].tolist()
data = [' '.join(t.split()[:maxlen]) for t in data]
data = np.array(data, dtype=object)[:, np.newaxis]
label = text_data['label'].tolist()
flatten_data = data.flatten()
# + [markdown] id="_bSCpOXOKat4" colab_type="text"
# ## Building Elmo Embeddings using flair
# + id="WIbs13BmDDfS" colab_type="code" outputId="fe868fbf-1201-4ab1-dfd3-7cacbeeea31f" colab={"base_uri": "https://localhost:8080/", "height": 35}
# init embedding
embedding = ELMoEmbeddings()
# Embedding array - [3686, 50, 3072]
elmo_embedding = []
for sent in flatten_data:
    sent_embedding = []
    sent = sent.split()[:50]
    sent.extend(['PAD']*(50-len(sent)))
    sent = ' '.join(sent)
    sentence = Sentence(sent)
    embedding.embed(sentence)
    for token in sentence:
        sent_embedding.append(token.embedding.cpu().numpy())
    elmo_embedding.append(np.array(sent_embedding))
elmo_embedding = np.array(elmo_embedding)
print(elmo_embedding.shape)
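The truncate-then-pad step in the loop above is worth isolating; a minimal sketch of the same logic on plain token lists (`pad_tokens` is an illustrative helper, not part of the notebook):

```python
def pad_tokens(sentence, maxlen=50, pad_token='PAD'):
    """Truncate a sentence to maxlen tokens and right-pad with pad_token."""
    tokens = sentence.split()[:maxlen]
    tokens.extend([pad_token] * (maxlen - len(tokens)))
    return tokens

padded = pad_tokens('the quick brown fox', maxlen=6)
print(padded)  # ['the', 'quick', 'brown', 'fox', 'PAD', 'PAD']
```

Every sentence therefore reaches the embedder with exactly `maxlen` tokens, which is what makes the resulting embedding array rectangular.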
# + id="BcZBGTHc_D77" colab_type="code" colab={}
# build the model 1
def build_elmo_dense():
    """ Using Dense layer over elmo-embedding layer """
    inp = layers.Input(shape=(maxlen, 3072,))
    dense = layers.Dense(64, activation='relu')(inp)
    x = layers.Dropout(0.5)(dense)
    x = layers.Flatten()(x)
    pred = layers.Dense(1, activation='sigmoid')(x)
    model = Model(inputs=inp, outputs=pred)
    model.compile(loss='binary_crossentropy',
                  optimizer='rmsprop',
                  metrics=['accuracy', f1])
    model.summary()
    return model
# + id="8iM30L2S_D4O" colab_type="code" outputId="f6c6615b-e0d6-49ee-c05a-5f59186df5a4" colab={"base_uri": "https://localhost:8080/", "height": 348}
elmo_dense = build_elmo_dense()
# + id="X5Ja3EpP_D0K" colab_type="code" outputId="5c41ed2a-9f78-48d3-ed85-33425adc27b5" colab={"base_uri": "https://localhost:8080/", "height": 202}
# fit the model
history = elmo_dense.fit(elmo_embedding, label, validation_split=0.1, epochs=4, batch_size=32)
# + id="3kE0iiGw_Du0" colab_type="code" outputId="3996a730-945e-4f44-f718-f36ed77ec52c" colab={"base_uri": "https://localhost:8080/", "height": 851}
plot_result(history)
# + id="PsGAs9lT_Dm1" colab_type="code" colab={}
#build the model 2
def build_elmo_lstm():
    """ Using an LSTM over the elmo-embedding layer """
    inp = layers.Input(shape=(maxlen, 3072,), name='input')
    lstm = layers.LSTM(128, dropout=0.3, name='lstm')(inp)
    x = layers.Dense(128, activation='relu', name='dense1')(lstm)
    x = layers.Dense(64, activation='relu', name='dense2')(x)
    pred = layers.Dense(1, activation='sigmoid', name='dense3')(x)
    model = Model(inputs=inp, outputs=pred, name='Elmo-LSTM Model')
    model.compile(loss='binary_crossentropy',
                  optimizer='rmsprop',
                  metrics=['accuracy', f1])
    model.summary()
    return model
# + id="QAjMqiJd_DdG" colab_type="code" outputId="9d4f5c75-041d-4b96-fcd8-c3130c129b45" colab={"base_uri": "https://localhost:8080/", "height": 348}
elmo_lstm = build_elmo_lstm()
# + id="hVMF7U74_DEd" colab_type="code" outputId="395db869-6c51-44f0-ff29-c7dbf818e5ec" colab={"base_uri": "https://localhost:8080/", "height": 329}
# checkpoint
filepath="elmo_lstm_model-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
# fit the model
history = elmo_lstm.fit(elmo_embedding, label, validation_split=0.1, epochs=4, batch_size=32, callbacks=callbacks_list)
# + id="IiNgeQSm_Cze" colab_type="code" outputId="54bec752-017d-4722-975c-334d3025e917" colab={"base_uri": "https://localhost:8080/", "height": 851}
plot_result(history)
# + id="bvQY3u5A6yT_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 466} outputId="c2947c54-7e3f-4018-dd0a-f6b44dde0b29"
plot_model(elmo_lstm, to_file='elmo_lstm.png')
# + [markdown] id="E-yIkv_sFV-q" colab_type="text"
# ## Creating Custom Attention Layer
# + id="oBhqh_wYFULS" colab_type="code" colab={}
def dot_product(x, kernel):
    """
    Wrapper for dot product operation, in order to be compatible with both
    Theano and Tensorflow
    Args:
        x (): input
        kernel (): weights
    Returns:
    """
    if K.backend() == 'tensorflow':
        return K.squeeze(K.dot(x, K.expand_dims(kernel)), axis=-1)
    else:
        return K.dot(x, kernel)
class AttentionWithContext(Layer):
    """
    Attention operation, with a context/query vector, for temporal data.
    Supports Masking.
    Follows the work of Yang et al. [https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf]
    "Hierarchical Attention Networks for Document Classification"
    by using a context vector to assist the attention
    # Input shape
        3D tensor with shape: `(samples, steps, features)`.
    # Output shape
        2D tensor with shape: `(samples, features)`.
    How to use:
    Just put it on top of an RNN Layer (GRU/LSTM/SimpleRNN) with return_sequences=True.
    The dimensions are inferred based on the output shape of the RNN.
    Note: The layer has been tested with Keras 2.0.6
    Example:
        model.add(LSTM(64, return_sequences=True))
        model.add(AttentionWithContext())
        # next add a Dense layer (for classification/regression) or whatever...
    """

    def __init__(self,
                 W_regularizer=None, u_regularizer=None, b_regularizer=None,
                 W_constraint=None, u_constraint=None, b_constraint=None,
                 bias=True, **kwargs):
        self.supports_masking = True
        self.init = initializers.get('glorot_uniform')

        self.W_regularizer = regularizers.get(W_regularizer)
        self.u_regularizer = regularizers.get(u_regularizer)
        self.b_regularizer = regularizers.get(b_regularizer)

        self.W_constraint = constraints.get(W_constraint)
        self.u_constraint = constraints.get(u_constraint)
        self.b_constraint = constraints.get(b_constraint)

        self.bias = bias
        super(AttentionWithContext, self).__init__(**kwargs)

    def build(self, input_shape):
        assert len(input_shape) == 3

        self.W = self.add_weight((input_shape[-1], input_shape[-1],),
                                 initializer=self.init,
                                 name='{}_W'.format(self.name),
                                 regularizer=self.W_regularizer,
                                 constraint=self.W_constraint)
        if self.bias:
            self.b = self.add_weight((input_shape[-1],),
                                     initializer='zero',
                                     name='{}_b'.format(self.name),
                                     regularizer=self.b_regularizer,
                                     constraint=self.b_constraint)

        self.u = self.add_weight((input_shape[-1],),
                                 initializer=self.init,
                                 name='{}_u'.format(self.name),
                                 regularizer=self.u_regularizer,
                                 constraint=self.u_constraint)

        super(AttentionWithContext, self).build(input_shape)

    def compute_mask(self, input, input_mask=None):
        # do not pass the mask to the next layers
        return None

    def call(self, x, mask=None):
        uit = dot_product(x, self.W)
        if self.bias:
            uit += self.b
        uit = K.tanh(uit)

        ait = dot_product(uit, self.u)
        a = K.exp(ait)

        # apply mask after the exp. will be re-normalized next
        if mask is not None:
            # Cast the mask to floatX to avoid float64 upcasting in theano
            a *= K.cast(mask, K.floatx())

        # in some cases especially in the early stages of training the sum may be almost zero
        # and this results in NaN's. A workaround is to add a very small positive number ε to the sum.
        # a /= K.cast(K.sum(a, axis=1, keepdims=True), K.floatx())
        a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())

        a = K.expand_dims(a)
        weighted_input = x * a
        return K.sum(weighted_input, axis=1)

    def compute_output_shape(self, input_shape):
        return input_shape[0], input_shape[-1]
# + id="m5Hi8z0dEvbB" colab_type="code" colab={}
def build_elmo_lstm_attention():
    inp = layers.Input(shape=(maxlen, 3072,))
    x = layers.LSTM(64, dropout=0.5, return_sequences=True)(inp)
    x = AttentionWithContext()(x)
    x = layers.Dense(128, activation='relu')(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(64, activation='relu')(x)
    x = layers.Dropout(0.5)(x)
    pred = layers.Dense(1, activation='sigmoid')(x)
    model = Model(inputs=inp, outputs=pred)
    model.compile(loss='binary_crossentropy',
                  optimizer='rmsprop',
                  metrics=['accuracy', f1])
    model.summary()
    return model
# + id="m1SaY0g8Mn-N" colab_type="code" outputId="1e2f490a-8e19-4d14-cd70-03a1f5f3bb1d" colab={"base_uri": "https://localhost:8080/", "height": 458}
elmo_lstm_attention = build_elmo_lstm_attention()
# + id="XcTHKaLzMxIJ" colab_type="code" outputId="5f86682f-602f-43e2-a473-9b1286639aa9" colab={"base_uri": "https://localhost:8080/", "height": 182}
# fit the model
history = elmo_lstm_attention.fit(elmo_embedding, label, validation_split=0.1, epochs=4, batch_size=32)
# + id="NG3zH5YjQpcj" colab_type="code" outputId="bbd2d063-ac09-44e9-97c9-65209c8979d2" colab={"base_uri": "https://localhost:8080/", "height": 851}
plot_result(history)
# + [markdown] id="mcZ8L6riFbxz" colab_type="text"
# ## Using POS tags of words in sentences
# + id="_HMr7TKKEIDF" colab_type="code" colab={}
pos_tagger = SequenceTagger.load('pos-fast')
pos_tags = []
for sent in flatten_data:
    sentence = Sentence(sent)
    pos_tagger.predict(sentence)
    tagged_sentence = sentence.to_tagged_string()
    ## Replace each word from the sentence, leaving only the tags
    for j in sent.split():
        tagged_sentence = str(tagged_sentence).replace(j, "", 1)
    ## Removing < > symbols ##
    for j in ['<', '>']:
        tagged_sentence = str(tagged_sentence).replace(j, "")
    ## Removing redundant spaces
    tagged_sentence = re.sub(' +', ' ', str(tagged_sentence))
    tagged_sentence = str(tagged_sentence).lstrip()
    pos_tags.append(tagged_sentence)
# print(pos_tags[:5])
# + id="JxQIR61bWpMS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 313} outputId="1c6d0f84-2db0-4450-eef3-d560146242ab"
print(flatten_data[0])
print(pos_tags[0])
print(elmo_embedding[0])
# + id="exyAhVLyDupt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="86d3ff5c-6ece-44f7-939d-d617fac45cb7"
maxlen = 50
max_words = 100
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(pos_tags)
sequences = tokenizer.texts_to_sequences(pos_tags)
word_index = tokenizer.word_index
print('Found %s unique tokens'%len(word_index))
# padding the sequences
pos_data = pad_sequences(sequences, maxlen=maxlen)
# + id="AAop2CH-G65Z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 311} outputId="4cb9a5a1-2813-444c-86d9-fc57ccb2d64c"
print(pos_data.shape)
print(pos_data[:5])
# + id="UB-6XW5wR2db" colab_type="code" colab={}
def build_elmo_lstm_attn_pos_attn():
    inp1 = layers.Input(shape=(maxlen, 3072,), name='word_embedding')  # To input elmo embeddings
    inp2 = layers.Input(shape=(maxlen,), name='word_POS_tag')          # To input POS tag sequences
    x = layers.LSTM(64, dropout=0.3, return_sequences=True, name='lstm1')(inp1)
    x = AttentionWithContext()(x)
    y = layers.Embedding(max_words, 1024, input_length=maxlen, name='embedding')(inp2)
    y = layers.LSTM(64, dropout=0.3, return_sequences=True, name='lstm2')(y)
    y = AttentionWithContext()(y)
    z = layers.Concatenate(axis=-1, name='concatenate')([x, y])
    z = layers.Dense(128, activation='relu', name='dense1')(z)
    z = layers.Dropout(0.5, name='dropout1')(z)
    z = layers.Dense(64, activation='relu', name='dense2')(z)
    z = layers.Dropout(0.3, name='dropout2')(z)
    pred = layers.Dense(1, activation='sigmoid', name='dense3')(z)
    model = Model(inputs=[inp1, inp2], outputs=pred, name='Elmo-LSTM-Attn+POS-LSTM-Attn')
    model.compile(loss='binary_crossentropy',
                  optimizer='rmsprop',
                  metrics=['accuracy', f1])
    model.summary()
    return model
# + id="ZJob9fa1I0x2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 660} outputId="f920ae7f-84e8-4cd8-c594-342465ae1ba7"
elmo_lstm_attn_pos_attn = build_elmo_lstm_attn_pos_attn()
# + id="J4AwMMmzC1sE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 953} outputId="3d9d1e78-e352-43bf-8ffd-4021ae1a5f0d"
plot_model(elmo_lstm_attn_pos_attn, to_file='elmo_lstm_attn_pos_attn.png')
# + id="0arrN75NI6lY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 349} outputId="53a8c938-2ba7-4bac-ce08-037e32bc06ed"
# checkpoint
filepath="elmo_lstm_attn_pos_attn_model-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
# fit the model
history = elmo_lstm_attn_pos_attn.fit([elmo_embedding, pos_data], label, validation_split=0.1, epochs=4, batch_size=32, callbacks=callbacks_list)
# + id="3gkQCQ4nDxt-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 851} outputId="3d56a776-9d29-4ec0-8920-22f05917db07"
plot_result(history)
# + id="vpunzc3VKNsx" colab_type="code" colab={}
def build_elmo_lstm_pos_attn():
    inp1 = layers.Input(shape=(maxlen, 3072,), name='word_embedding')  # To input elmo embeddings
    inp2 = layers.Input(shape=(maxlen,), name='word_POS_tag')          # To input POS tag sequences
    x = layers.LSTM(64, dropout=0.5, return_sequences=True, name='lstm-1')(inp1)
    y = layers.Embedding(max_words, 100, input_length=maxlen, name='embedding')(inp2)
    y = layers.LSTM(64, dropout=0.5, return_sequences=True, name='lstm-2')(y)
    z = layers.Concatenate(axis=-1, name='concatenate')([x, y])
    z = AttentionWithContext()(z)
    z = layers.Dense(128, activation='relu', name='dense1')(z)
    z = layers.Dropout(0.5, name='dropout1')(z)
    z = layers.Dense(64, activation='relu', name='dense2')(z)
    z = layers.Dropout(0.3, name='dropout2')(z)
    pred = layers.Dense(1, activation='sigmoid', name='dense3')(z)
    model = Model(inputs=[inp1, inp2], outputs=pred, name='Elmo-LSTM+POS-LSTM Attention')
    model.compile(loss='binary_crossentropy',
                  optimizer='rmsprop',
                  metrics=['accuracy', f1])
    model.summary()
    return model
# + id="SRut26GTTA76" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 623} outputId="1cfda8c3-6bd3-459b-eddd-52a1acc406f1"
elmo_lstm_pos_attn = build_elmo_lstm_pos_attn()
# + id="LlpjH49WAHPM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 953} outputId="0a488109-e836-4dd4-8788-7fd851db9925"
plot_model(elmo_lstm_pos_attn, to_file='elmo_lstm_pos_attn.png')
# + id="mQAw03LvTF1O" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 349} outputId="033c96e9-090c-444b-cd06-05e14d8589f0"
# checkpoint
filepath="elmo_lstm_pos_attn_model-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
# fit the model
history = elmo_lstm_pos_attn.fit([elmo_embedding, pos_data], label, validation_split=0.1, epochs=4, batch_size=32, callbacks=callbacks_list)
# + id="XyRP5BphTcQq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 851} outputId="5b26d48b-d335-4d8e-8fdc-b51356daacc4"
plot_result(history)
# + id="iKIDSi_uVhAk" colab_type="code" colab={}
def build_elmo_pos_lstm_attn():
    inp1 = layers.Input(shape=(maxlen, 3072,))  # To input elmo embeddings
    inp2 = layers.Input(shape=(maxlen,))        # To input POS tag sequences
    x = layers.Embedding(max_words, 1024, input_length=maxlen)(inp2)
    z = layers.Concatenate(axis=-1)([inp1, x])
    z = layers.LSTM(64, dropout=0.5, return_sequences=True)(z)
    z = AttentionWithContext()(z)
    z = layers.Dense(128, activation='relu')(z)
    z = layers.Dropout(0.5)(z)
    z = layers.Dense(64, activation='relu')(z)
    z = layers.Dropout(0.3)(z)
    pred = layers.Dense(1, activation='sigmoid')(z)
    model = Model(inputs=[inp1, inp2], outputs=pred)
    model.compile(loss='binary_crossentropy',
                  optimizer='rmsprop',
                  metrics=['accuracy', f1])
    model.summary()
    return model
# + id="xQ1Jk1u6XuDn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 586} outputId="e47d7d68-ae60-40e3-c800-3e560af34122"
elmo_pos_lstm_attn = build_elmo_pos_lstm_attn()
# + id="DAGXg7ziXy2N" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 182} outputId="0360abf7-215d-43ea-9c4b-e6aed49e2968"
# fit the model
history = elmo_pos_lstm_attn.fit([elmo_embedding, pos_data], label, validation_split=0.1, epochs=4, batch_size=32)
# + id="plb8Tr-73r33" colab_type="code" colab={}
| text_biasness.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Kindle Clippings Markdown Exporter
# This notebook generates a separate markdown file for each of your books, containing its clippings and highlights.
#
# NOTE: To use, first run [kindle_clippings_parser.ipynb](https://github.com/markwk/qs_ledger/blob/master/kindle/kindle_clippings_parser.ipynb) to parse and collect your device's clippings.
#
# For **data analysis and some data visualization** of your Amazon Kindle clippings, see: [kindle_clippings_data_analysis.ipynb](https://github.com/markwk/qs_ledger/blob/master/kindle/kindle_clippings_data_analysis.ipynb)
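The exporter's naming scheme (used later in `generate_book_file`) prefixes a timestamp to a sanitized book title; a minimal standalone sketch of that convention (`markdown_filename` is an illustrative helper, not part of this notebook):

```python
from datetime import datetime

def markdown_filename(title, created):
    """Sketch of the export naming scheme: timestamp prefix + sanitized title.
    `created` is a datetime; punctuation is dropped and spaces become underscores."""
    stripped = title.rstrip().replace(" ", "_")
    for ch in ':,/()?':
        stripped = stripped.replace(ch, "")
    return created.strftime('%Y%m%d%H%M') + "_" + stripped.lower() + ".md"

print(markdown_filename("Thinking, Fast and Slow", datetime(2019, 3, 1, 8, 30)))
# 201903010830_thinking_fast_and_slow.md
```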
# ----
# ## Configuration
export_directory = '/Users/my-user-name/path1/path2/path3'
# ----
# ## Dependencies
from datetime import date, datetime as dt, timedelta as td
import pandas as pd
# ----
# ## Data Import and Data Preparations
my_clippings = pd.read_csv("data/clippings.csv")
my_clippings.columns = ['author', 'book_title', 'timestamp', 'highlight', 'location',
'num_pages']
# date additions
my_clippings['timestamp'] = pd.to_datetime(my_clippings['timestamp'])
my_clippings['date'] = my_clippings['timestamp'].apply(lambda x: x.strftime('%Y-%m-%d')) # note: not very efficient
my_clippings['year'] = my_clippings['timestamp'].dt.year
my_clippings['month'] = my_clippings['timestamp'].dt.month
my_clippings['mnth_yr'] = my_clippings['timestamp'].apply(lambda x: x.strftime('%Y-%m')) # note: not very efficient
my_clippings['day'] = my_clippings['timestamp'].dt.day
my_clippings['dow'] = my_clippings['timestamp'].dt.weekday
my_clippings['hour'] = my_clippings['timestamp'].dt.hour
# ----
# # Kindle Book Clippings Markdown Exporter
book_titles = my_clippings['book_title'].unique()
print('{:,} total books'.format(len(book_titles)))
def generate_book_file(book_title, directory=export_directory):
    book_notes = my_clippings[my_clippings['book_title'] == book_title]
    title = (book_notes.iloc[0]['book_title']).rstrip()
    author = (book_notes.iloc[0]['author']).rstrip()
    title_stripped = (title.rstrip()
                      .replace(" ", "_")
                      .replace(":", "")
                      .replace(",", "")
                      .replace("/", "")
                      .replace("(", "")
                      .replace(")", "")
                      .replace("?", "")
                      .lower())
    filename = (book_notes.iloc[0]['timestamp'].strftime('%Y%m%d%H%M') + "_" + title_stripped + ".md")
    filepath = directory + filename
    if author == 'Blinkist':
        pass
    else:
        print("Printing... " + filename)
        file = open(filepath, "w")
        file.write("# " + title + " by " + author + " \n")
        file.write("### Clippings \n")
        file.write("tags: #BookClippings #BookRead \n")
        file.write(" \n")
        for index, row in book_notes.iterrows():
            file.write(str(row['highlight']) + " \n")
            file.write("p " + str(row['num_pages']) + " | " + row['location'] + " | " + str(row['timestamp']) + " \n")
            file.write(" \n")
        file.close()
# Get a Test Book Title
book_titles[-1]
# +
# Test Individual Book Export
# generate_book_file("Buddha's Brain ")
# -
# loop through all books
for i in book_titles:
    generate_book_file(i)
| kindle/kindle_highlights_markdown_exporter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.10 ('ml')
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.express as px
import geopy
df = pd.read_csv("D:/ADSP/Hertfordshire-Constabulary/data/three-years-final.csv")
df
len(df["crime_id"].unique())
def summary_col(df, col):
    # print(df[str(col)].unique())
    print(df[str(col)].value_counts())
    sns.set_style("whitegrid")
    plt.figure(figsize=(15, 8))
    ax = sns.countplot(y=col, data=df)
summary_col(df, "outcome_type")
summary_col(df, "crime_type")
df.info()
# ### Feature Engineering
# We consolidate rare category levels before encoding; otherwise one-hot encoding would produce many extra dimensions, which can result in the curse of dimensionality. The concept of the "curse of dimensionality" refers to the fact that in high-dimensional spaces many methods simply stop working properly.
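The consolidation applied below (merging four theft subtypes into a single `theft` level) can be sketched with a plain dict, independent of pandas:

```python
# Map each theft subtype (category names from the dataset) onto one level.
theft_map = {
    "Other theft": "theft",
    "Shoplifting": "theft",
    "Bicycle theft": "theft",
    "Theft from the person": "theft",
}

crimes = ["Shoplifting", "Drugs", "Bicycle theft", "Burglary"]
merged = [theft_map.get(c, c) for c in crimes]
print(merged)                                    # ['theft', 'Drugs', 'theft', 'Burglary']
print(len(set(crimes)), '->', len(set(merged)))  # 4 -> 3
```

Fewer distinct levels means fewer columns after encoding, which is the point of the consolidation.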
df["crime_type"] = df["crime_type"].replace({"Other theft":"theft", "Shoplifting":"theft", "Bicycle theft": "theft", "Theft from the person": "theft"}, regex=True)
df.head(2)
df["crime_type"].unique()
summary_col(df, "crime_type")
df["date"] = pd.to_datetime(df["date"])
df.head(2)
# +
# Using Dictionary comprehension
labels = df["crime_type"].astype("category").cat.categories.tolist()
replace_map_comp = {'crime_type' : {k: v for k,v in zip(labels,list(range(0,len(labels)+1)))}}
print(replace_map_comp)
# -
df = df.replace(replace_map_comp)
df
df.drop(["crime_id"], axis=1, inplace=True)
df
df["outcome_type"].unique()
df["outcome_type"] = df["outcome_type"].replace({"Suspect charged":"prosecuted", "Suspect charged as part of another case":"prosecuted",
"Defendant sent to Crown Court": "prosecuted", "Action to be taken by another organisation": "prosecuted",
"Offender given a drugs possession warning" : "prosecuted", "Offender given penalty notice":"prosecuted",
"prosecuted as part of another case":"prosecuted",
"Offender given a caution":"prosecuted", "prosecuted as part of another case":"prosecuted"})
df.head(2)
df["outcome_type"].unique()
df["outcome_type"] = df["outcome_type"].replace({"Unable to prosecute suspect":"not_prosecuted", "Local resolution":"not_prosecuted",
"Formal action is not in the public interest": "not_prosecuted", "Further investigation is not in the public interest": "not_prosecuted",
"Further action is not in the public interest" : "not_prosecuted"})
df.head(2)
df["outcome_type"].unique()
summary_col(df, "outcome_type")
df.drop(df.loc[df['outcome_type']=="Investigation complete; no suspect identified"].index, inplace=True)
df
summary_col(df, "outcome_type")
# +
# Using Dictionary comprehension
labels = df["outcome_type"].astype("category").cat.categories.tolist()
outcome_replace_map = {'outcome_type' : {k: v for k,v in zip(labels, list(range(0, len(labels)+1)))}}
outcome_replace_map
# -
df = df.replace(outcome_replace_map)
df
df.plot(x = "Longitude", y = "Latitude", kind = "scatter", c = "crime_type", colormap = "YlOrRd")
def encode_cat(df, col):
    labels = df[col].astype("category").cat.categories.tolist()
    replace_map = {col: {k: v for k, v in zip(labels, list(range(0, len(labels)+1)))}}
    return replace_map
lsoa_dict = encode_cat(df, "LSOA_code")
lsoa_dict
df = df.replace(lsoa_dict)
df
df.drop(["Longitude", "Latitude", "Location", "LSOA_name"], axis = 1, inplace=True)
df
df.info()
df.date.min(), df.date.max()
def year_month_extract(df, col):
    df[col + "_year"] = df[col].dt.year
    df[col + "_month"] = df[col].dt.month
    return df
year_month_extract(df, "date")
df
sns.set(rc={"figure.figsize":(8, 6)})
sns.countplot(data = df, x = "date_month")
sns.set(rc={"figure.figsize":(8, 6)})
sns.countplot(data = df, x = "date_year")
sns.countplot(x = "outcome_type", data=df, hue=df["date_year"])
plt.xlabel("Outcome, 0 = Not Prosecuted, 1 = Prosecuted")
plt.show()
sns.set(rc={"figure.figsize":(16, 12)})
sns.countplot(x = "date_year", data=df, hue=df["crime_type"])
plt.xlabel("Burglary: 0, Criminal damage and arson: 1, Drugs: 2, Other crime: 3, Possession of weapons: 4, Public order: 5, Robbery: 6, Vehicle crime: 7, Violence and sexual offences: 8, theft: 9")
plt.show()
df.drop(["date"], axis=1, inplace=True)
df
dummy_year = pd.get_dummies(df['date_year'])
df = pd.merge(
left=df,
right=dummy_year,
left_index=True,
right_index=True,
)
print(df)
df
df.drop(["date_year"], axis = 1, inplace=True)
df
fig = plt.figure(figsize=(10,10))
colors = ["green",'pink']
prosec = df[df['outcome_type']==1]
not_prosec = df[df['outcome_type']==0]
ck = [prosec['outcome_type'].count(), not_prosec['outcome_type'].count()]
piechart = plt.pie(ck,labels=["Prosecuted","Not Prosecuted"],
autopct ='%1.1f%%',
shadow = True,
colors = colors,
startangle = 45,
explode=(0, 0.1))
df.outcome_type.value_counts()
len(df["LSOA_code"].unique())
df.reset_index(inplace=True)
df
df.drop(["index"], axis = 1, inplace=True)
df.head(2)
df.to_csv(path_or_buf="D:/ADSP/Hertfordshire-Constabulary/data/df-model-with-onehot.csv", index=False)
def get_zipcode(df, geolocator, lat, lon):
    location = geolocator.reverse((df[lat], df[lon]))
    zipcode = location.raw['address']['postcode']
    return zipcode
# +
#replace_map = {"crime_type_code": {"Violence and sexual offences": 1, "theft": 2, "Criminal damage and arson": 3, }}
| codes/eda.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="2mItG_vSnKEB" colab_type="text"
# Universidad Simón Bolívar. \\
# Department of Scientific Computing. \\
# CO-6612, Introduction to Neural Networks. \\
# TDD 2020. \\
# Prof. <NAME>. \\
# <NAME> 16-10072.
# # **Assignment 3: The Adaline**
# + id="wXSDOxfnnHig" colab_type="code" colab={}
import numpy as np
import csv
import matplotlib.pyplot as plt
from random import uniform, shuffle
from math import e
# + id="XZ2YZWU2nXIq" colab_type="code" colab={}
# Mount Google Drive to fetch the files
# mnist_test.csv, mnist_train.csv and datosT3.csv
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="dfGB28pqnf-s" colab_type="text"
# ## **Problem 1**
# Program the Adaline using the LMS algorithm. You must submit your documented code.
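The core of the LMS rule implemented below is the update w ← w + η(d − y)x. A minimal sketch on a hypothetical noiseless linear target (the target weights and learning rate here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                        # weights to learn
true_w = np.array([2.0, -1.0, 0.5])    # hypothetical target weights
eta = 0.05                             # learning rate

for _ in range(2000):
    x = rng.uniform(-1, 1, size=3)
    d = true_w @ x                     # desired response
    y = w @ x                          # Adaline output (linear)
    w += eta * (d - y) * x             # LMS update
print(np.round(w, 2))                  # close to [ 2.  -1.   0.5]
```

With noiseless data the weight error contracts at each step, so the estimate converges to the target; the implementation below extends exactly this rule to a weight matrix with a bias term.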
# + id="CTCHNwZXnpXy" colab_type="code" colab={}
def E(X: [[float]], Y: [[float]], W: [[float]]):
    """ Computes the averaged mean squared error.
    Input:
        - X = [x_i]: training data set; each x_i must be an array.
        - Y = [y_i]: response set; each y_i must be an array with the
          correct answer for the sample x_i.
        - W: weight matrix.
    Output:
        ecm: mean squared error.
    """
    # Number of samples.
    N = len(X)
    # Mean squared error.
    ecm = 0
    for j in range(N):
        # Get the j-th sample.
        x_j = X[j].copy()
        # Append a coordinate for the bias.
        x_j = np.append(x_j, 1)
        # Compute the network output.
        y_j = np.dot(W, x_j)
        # Get the correct answer.
        d_j = Y[j]
        # Accumulate the error.
        ecm += np.dot(d_j-y_j, d_j-y_j)
    return ecm/(2*N)
def adeline(X, Y, n, epochs, ecm_min=0):
    """ Implementation of the multiple-output adeline.
    Input:
        - X = [x_i]: training data set; each x_i must be an array.
        - Y = [y_i]: response set; each y_i must be an array with the
          correct answer for the sample x_i.
        - n: learning rate.
        - epochs: number of epochs.
        - ecm_min: mean squared error required to stop the training before
          all epochs finish. Default value: 0.
    Output:
        W: weight matrix.
        ecm: mean squared error obtained at each epoch.
    """
    # Get the dimension of the input and output data.
    N_x = len(X[0])
    N_y = len(Y[0])
    # Get the number of samples.
    N = len(X)
    # Initialize the synaptic weights.
    W = np.array([[uniform(-0.05, 0.05) for i in range(N_x + 1)] for j in range(N_y)])
    # Mean squared error obtained at each epoch.
    ecm = []
    # Here we store the indices of the samples.
    indexes = [i for i in range(N)]
    for i in range(epochs):
        # Shuffle the indices randomly.
        shuffle(indexes)
        for j in indexes:
            # Get the j-th sample.
            x_j = X[j].copy()
            # Append a coordinate for the bias.
            x_j = np.append(x_j, 1)
            # Compute the network output.
            y_j = np.dot(W, x_j)
            # Get the correct answer.
            d_j = Y[j]
            # Update W.
            W += n*np.outer((d_j - y_j), x_j)
        new_ecm = E(X, Y, W)
        ecm.append(new_ecm)
        print("Epoch: ", i+1, ". Mean squared error: ", new_ecm)
        if new_ecm <= ecm_min:
            break
    return W, ecm
# + id="leZqnBXyli6Z" colab_type="code" colab={}
# The following functions are used to test the adeline implementation.
def d(X: [[float]]):
    """ Input:  X = [x, y, z]
        Output: [2x - 10y + 6z - 4]
    """
    return np.array([2*X[0] - 10*X[1] + 6*X[2] - 4])

def genData(N):
    """ Generates a random 3-dimensional data set.
    Input:
        N: number of samples.
    Output:
        Array of N arrays of 3 coordinates with values between -10 and 10.
    """
    X = []
    Y = []
    for i in range(N):
        data = np.array([uniform(-10, 10), uniform(-10, 10), uniform(-10, 10)])
        X.append(data)
        Y.append(d(data))
    X = np.array(X)
    Y = np.array(Y)
    return X, Y
# + id="D3iSpdcenpAB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 884} executionInfo={"status": "ok", "timestamp": 1592939454702, "user_tz": 240, "elapsed": 1767, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjPIz_LdAT0a3YmtheGAMEN8njcQGvelL-I_HcRtQ=s64", "userId": "17001211823553125705"}} outputId="b822613d-2e30-444f-8c28-69df5aa72139"
# Test the adeline implementation.
X, Y = genData(100)
W, ecm = adeline(X, Y, 0.01, 100, ecm_min=0)
print("Obtained weights: ")
print(W)
# + id="YzBMNbcZorKT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1592884040443, "user_tz": 240, "elapsed": 626, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjPIz_LdAT0a3YmtheGAMEN8njcQGvelL-I_HcRtQ=s64", "userId": "17001211823553125705"}} outputId="61f31241-41e8-4f68-c186-84dc3285f8de"
X, Y = genData(100)
error = E(X, Y, W)
print("The averaged mean squared error on the test data is: ", error)
# + [markdown] id="cB_R293a3Ibz" colab_type="text"
# ## **Problem 2**
# For the training set used in the perceptron assignment, repeat the experiment, now with the Adaline. Evaluate and compare this algorithm against the results obtained in the previous assignment. Comment on your choice of learning parameters.
# + id="tEQAogm96PjB" colab_type="code" colab={}
def digitToCanon(x: int):
    """ Given a digit, returns a vector of zeros except at the digit's
    position, where there is a 1. """
    r = [0]*10
    r[x] = 1
    return np.array(r)

def readCSV(file: str):
    """ Reads .csv data for the digits problem, where for each sample
    the first coordinate indicates the answer, and the remaining coordinates
    are divided by 255 so they lie in the range [0, 1].
    Input:
        file: name of the csv file.
    Output:
        X: set of digit images.
        Y: answer for each sample in X.
    """
    with open(file, newline='') as File:
        reader = csv.reader(File)
        X = []
        Y = []
        for r in reader:
            for i in range(1, len(r)):
                r[i] = float(r[i])/255
            X.append(r)
            Y.append(digitToCanon(int(r.pop(0))))
    return np.array(X), np.array(Y)
# Load the digits data.
X_train, Y_train = readCSV("/content/drive/My Drive/mnist_train.csv")
X_test, Y_test = readCSV("/content/drive/My Drive/mnist_test.csv")
# + [markdown] id="AxcLp1zpVj-X" colab_type="text"
# We use an implementation slightly different from the previous one, so that the accuracy can be computed at each epoch and compared with the accuracy obtained with the perceptron. Here, the reinforcement-learning strategy is used to obtain the accuracy: each element of the output $y(n)$ is set to 1 if it is the maximum, and 0 otherwise.
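The winner-take-all conversion described above can be sketched in NumPy (`winner_take_all` is an illustrative helper, not part of the assignment code):

```python
import numpy as np

def winner_take_all(y):
    """Map a raw output vector to a one-hot vector with a 1 at the argmax."""
    out = np.zeros_like(y, dtype=int)
    out[np.argmax(y)] = 1
    return out

y = np.array([0.1, 0.7, 0.2])   # raw Adaline output
d = np.array([0, 1, 0])         # desired one-hot answer
print(winner_take_all(y))                     # [0 1 0]
print(bool((winner_take_all(y) == d).all()))  # True -> counted as correct
```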
# + id="9NAo1Om_U6HL" colab_type="code" colab={}
def E_mnist(X, Y, W):
    """ Computes the averaged mean squared error and the accuracy of the ANN
    on the MNIST digits data.
    Input:
        - X = [x_i]: training data set; each x_i must be an array.
        - Y = [y_i]: response set; each y_i must be an array with the
          correct answer for the sample x_i.
        - W: weight matrix.
    Output:
        ecm: mean squared error.
        acc: accuracy.
    """
    # Number of samples.
    N = len(X)
    # Mean squared error.
    ecm = 0
    # Number of errors.
    errors = 0
    for j in range(N):
        # Get the j-th sample.
        x_j = X[j].copy()
        # Append a coordinate for the bias.
        x_j = np.append(x_j, 1)
        # Compute the network output.
        y_j = np.dot(W, x_j)
        # Get the correct answer.
        d_j = Y[j]
        # Accumulate the error.
        ecm += np.dot(d_j-y_j, d_j-y_j)
        # If any element of the output was wrong.
        if any(int(y_j[k]/max(y_j)) != d_j[k] for k in range(len(y_j))):
            errors += 1
    return ecm/(2*N), (N-errors)/N
def adeline_mnist(X, Y, n, epochs, ecm_min=0):
    """ Implementation of the multiple adaline for the MNIST digit data.
    Input:
        - X = [x_i] Training data set; each x_i must be an array.
        - Y = [y_i] Label set; each y_i must be an array with the correct
          answer for the data point x_i.
        - n: Learning rate.
        - epochs: Number of epochs.
        - ecm_min: Mean squared error at which training stops before all
          epochs are completed. Default value: 0.
    Output:
        W: Weight matrix.
        ecm: Averaged mean squared error obtained at each epoch.
        acc: Accuracy obtained at each epoch.
    """
    # Get the dimensions of the input and output data.
    N_x = len(X[0])
    N_y = len(Y[0])
    # Get the number of data points.
    N = len(X)
    # Initialize the synaptic weights.
    W = np.array([[uniform(-0.05,0.05) for i in range(N_x + 1)] for j in range(N_y)])
    # Mean squared error obtained at each epoch.
    ecm = []
    # Accuracy obtained at each epoch.
    acc = []
    # Here we store the indices of the data points.
    indexes = [i for i in range(N)]
    for i in range(epochs):
        # Shuffle the indices.
        shuffle(indexes)
        for j in indexes:
            # Get the j-th data point.
            x_j = X[j].copy()
            # Append a coordinate for the bias.
            x_j = np.append(x_j, 1)
            # Compute the network output.
            y_j = np.dot(W, x_j)
            # Get the correct answer.
            d_j = Y[j]
            # Update W.
            W += n*np.outer((d_j - y_j), x_j)
        new_ecm, new_acc = E_mnist(X, Y, W)
        ecm.append(new_ecm)
        acc.append(new_acc)
        print("Epoch:", i+1, ". Averaged mean squared error:", new_ecm, ". Accuracy:", new_acc)
        if new_ecm <= ecm_min:
            break
    return W, ecm, acc
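The update inside the inner loop is the LMS (delta) rule, $W \leftarrow W + \eta\,(d - y)\,x^t$. A minimal one-step sketch, assuming NumPy and toy values chosen only for illustration:

```python
import numpy as np

eta = 0.1
x = np.array([1.0, 2.0, 1.0])   # input pattern with the bias coordinate appended
d = np.array([1.0, 0.0])        # desired output
W = np.zeros((2, 3))            # 2 output neurons, 3 input coordinates

y = np.dot(W, x)                # network output before the update: [0., 0.]
W += eta * np.outer(d - y, x)   # delta rule: each row moves toward its target
print(W)
```

After this single step the first row becomes `[0.1, 0.2, 0.1]`, so the first output moves from 0 toward its target of 1 on the next pass.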
# + id="8FcIlYEWT3f6" colab_type="code" colab={}
def problema_2(eta: float, acc_perceptron: list):
    W, ecm, acc = adeline_mnist(X_train, Y_train, eta, 50)
    epochs = [i for i in range(1, 51)]
    plt.plot(epochs, acc, label="Adaline")
    plt.plot(epochs, acc_perceptron, label="Perceptron")
    plt.xlabel("Epoch")
    plt.ylabel("Accuracy")
    plt.title("Training evolution for eta = " + str(eta))
    plt.legend()
    plt.show()
    print("Maximum accuracy obtained:", max(acc))
    ecm, acc = E_mnist(X_test, Y_test, W)
    print("Accuracy obtained on the test data:", acc)
# + id="1b_EWV0yUQi-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1592885566222, "user_tz": 240, "elapsed": 261559, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjPIz_LdAT0a3YmtheGAMEN8njcQGvelL-I_HcRtQ=s64", "userId": "17001211823553125705"}} outputId="00f8c33c-c190-4b36-cf6a-4bbbe1bfa9b2"
acc_perceptron = [0.6729,0.749,0.7638,0.7177,0.7452,0.7538,0.7461,0.774,0.7678,0.735,
0.7096,0.64,0.7218,0.7273,0.7724,0.7484,0.7541,0.7317,0.8024,0.7324,
0.7626,0.745,0.7498,0.7562,0.7524,0.7585,0.7172,0.752,0.7456,0.7031,
0.76,0.7617,0.7769,0.7718,0.7757,0.7798,0.7819,0.756,0.7631,0.755,
0.7637,0.734,0.7901,0.7375,0.7366,0.7843,0.7322,0.7584,0.7891,0.7736]
problema_2(0.001, acc_perceptron)
# + id="B9o8-Q5eVz5K" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1592886022125, "user_tz": 240, "elapsed": 254122, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjPIz_LdAT0a3YmtheGAMEN8njcQGvelL-I_HcRtQ=s64", "userId": "17001211823553125705"}} outputId="14bb136e-d71c-43c9-b88e-357313b462ac"
acc_perceptron = [0.7458,0.7652,0.7327,0.7681,0.7399,0.655,0.7589,0.7419,0.7653,
0.7686,0.7384,0.7698,0.7582,0.7647,0.6895,0.7711,0.7577,0.783,
0.7128,0.7414,0.7531,0.7122,0.7245,0.7812,0.7769,0.7229,0.7401,
0.7158,0.6955,0.7806,0.6886,0.7498,0.7677,0.7196,0.7711,0.7429,
0.7785,0.7361,0.6883,0.7195,0.7283,0.7847,0.7797,0.7258,0.721,
0.7634,0.6943,0.7655,0.7864,0.7391]
problema_2(0.01, acc_perceptron)
# + id="ZBDvNUDGV0Jx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 374} executionInfo={"status": "error", "timestamp": 1592886025679, "user_tz": 240, "elapsed": 3477, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjPIz_LdAT0a3YmtheGAMEN8njcQGvelL-I_HcRtQ=s64", "userId": "17001211823553125705"}} outputId="8c13f604-b97d-4b66-e6fe-afa883dc2369"
acc_perceptron = [0.7163,0.7592,0.7501,0.7246,0.7351,0.7655,0.7745,0.7064,0.7438,
0.7475,0.7722,0.7015,0.7644,0.7622,0.7589,0.7682,0.7353,0.6886,
0.7022,0.7654,0.7612,0.7701,0.7635,0.7242,0.7819,0.7093,0.7786,
0.753,0.7488,0.7326,0.7473,0.7445,0.7724,0.777,0.6981,0.7189,
0.7785,0.7728,0.76,0.7774,0.7647,0.7529,0.7514,0.7755,0.758,0.7672,
0.7372,0.7315,0.7088,0.7845]
problema_2(0.1, acc_perceptron)
# + [markdown] id="j0zymbRraFJU" colab_type="text"
# We can clearly see the differences in the results obtained for each $\eta$. With $\eta = 0.001$ we obtained an acceptable accuracy, around 0.85, well above the perceptron. However, for $\eta=0.01$ the network becomes very unstable, and its performance is surpassed by the perceptron. Finally, for $\eta = 0.1$ we get an error, which occurs because the value of one of the output neurons becomes **NaN**; this means the output values did not converge, exceeding the limit of the numbers Python can represent. This happens because the linear function, being the adaline's activation function, is not bounded, and therefore neither is the error, so the weight updates can be abrupt, making the outputs oscillate with ever larger magnitudes and finally causing an error. Moreover, since the label for each pattern is 0 or 1 in each adaline, the linear function is not the best choice for each neuron's output, which contributes to the network's non-convergence. It is also worth noting that using more epochs does not seem to be effective, since no increasing learning curve is observed for any $\eta$.
# + [markdown] id="2t1cjiFrefBV" colab_type="text"
# Because of all this, we decided to experiment a bit and use the logistic function to limit each neuron's output to the range $(0, 1)$. We will look at the change this produces only for $\eta = 0.001$, which gave us the best performance with the adalines.
# + id="eR6_0svQ9UgF" colab_type="code" colab={}
def logist(x: float, alpha: float = 1):
    """ Logistic function F(x) = 1/(1 + exp(-alpha*x)) """
    return 1/(1+e**(-alpha*x))
def E_mnist(X, Y, W, phi):
    """ Computes the averaged mean squared error and the accuracy of the ANN
    on the MNIST digit data.
    Input:
        - X = [x_i] Training data set; each x_i must be an array.
        - Y = [y_i] Label set; each y_i must be an array with the correct
          answer for the data point x_i.
        - W: Weight matrix.
        - phi: Activation function.
    Output:
        ecm: Mean squared error.
        acc: Accuracy.
    """
    # Number of data points.
    N = len(X)
    # Mean squared error.
    ecm = 0
    # Number of errors.
    errors = 0
    for j in range(N):
        # Get the j-th data point.
        x_j = X[j].copy()
        # Append a coordinate for the bias.
        x_j = np.append(x_j, 1)
        # Compute the network output.
        A = np.dot(W, x_j)
        y_j = [phi(A[k]) for k in range(len(A))]
        # Get the correct answer.
        d_j = Y[j]
        # Accumulate the error.
        ecm += np.dot(d_j-y_j, d_j-y_j)
        # Count an error if any output component is wrong.
        if any(int(y_j[k]/max(y_j)) != d_j[k] for k in range(len(y_j))):
            errors += 1
    return ecm/(2*N), (N-errors)/N
def adeline_mnist(X, Y, n, epochs, phi, ecm_min=0):
    """ Implementation of the multiple adaline for the MNIST digit data.
    Input:
        - X = [x_i] Training data set; each x_i must be an array.
        - Y = [y_i] Label set; each y_i must be an array with the correct
          answer for the data point x_i.
        - n: Learning rate.
        - epochs: Number of epochs.
        - phi: Activation function.
        - ecm_min: Mean squared error at which training stops before all
          epochs are completed. Default value: 0.
    Output:
        W: Weight matrix.
        ecm: Averaged mean squared error obtained at each epoch.
        acc: Accuracy obtained at each epoch.
    """
    # Get the dimensions of the input and output data.
    N_x = len(X[0])
    N_y = len(Y[0])
    # Get the number of data points.
    N = len(X)
    # Initialize the synaptic weights.
    W = np.array([[uniform(-0.05,0.05) for i in range(N_x + 1)] for j in range(N_y)])
    # Mean squared error obtained at each epoch.
    ecm = []
    # Accuracy obtained at each epoch.
    acc = []
    # Here we store the indices of the data points.
    indexes = [i for i in range(N)]
    for i in range(epochs):
        # Shuffle the indices.
        shuffle(indexes)
        for j in indexes:
            # Get the j-th data point.
            x_j = X[j].copy()
            # Append a coordinate for the bias.
            x_j = np.append(x_j, 1)
            # Compute the network output.
            A = np.dot(W, x_j)
            y_j = [phi(A[k]) for k in range(len(A))]
            # Get the correct answer.
            d_j = Y[j]
            # Update W.
            W += n*np.outer((d_j - y_j), x_j)
        new_ecm, new_acc = E_mnist(X, Y, W, phi)
        ecm.append(new_ecm)
        acc.append(new_acc)
        print("Epoch:", i+1, ". Averaged mean squared error:", new_ecm, ". Accuracy:", new_acc)
        if new_ecm <= ecm_min:
            break
    return W, ecm, acc
# + id="bVzPLnOE5A9V" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1592887659653, "user_tz": 240, "elapsed": 344602, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjPIz_LdAT0a3YmtheGAMEN8njcQGvelL-I_HcRtQ=s64", "userId": "17001211823553125705"}} outputId="0de87cbe-6e57-4397-d11f-6ce84a4a0e3b"
eta = 0.001
W, ecm, acc = adeline_mnist(X_train, Y_train, eta, 50, logist)
epochs = [i for i in range(1, 51)]
acc_perceptron = [0.6729,0.749,0.7638,0.7177,0.7452,0.7538,0.7461,0.774,0.7678,
0.735,0.7096,0.64,0.7218,0.7273,0.7724,0.7484,0.7541,0.7317,
0.8024,0.7324,0.7626,0.745,0.7498,0.7562,0.7524,0.7585,0.7172,
0.752,0.7456,0.7031,0.76,0.7617,0.7769,0.7718,0.7757,0.7798,
0.7819,0.756,0.7631,0.755,0.7637,0.734,0.7901,0.7375,0.7366,
0.7843,0.7322,0.7584,0.7891,0.7736]
plt.plot(epochs, acc, label="Adeline")
plt.plot(epochs, acc_perceptron, label="Perceptron")
plt.xlabel("Epoca")
plt.ylabel("Precison")
plt.title("Evolucion del entrenamiento para eta = " + str(eta))
plt.legend()
plt.show()
print("Precision maxima obtenida: ", max(acc))
M = len(X_test)
ecm, acc = E_mnist(X_test, Y_test, W, logist)
print("Con los datos de prueba se obtuvo una precision de ", acc)
# + [markdown] id="zKiwPnfuKVzO" colab_type="text"
# The change when using the logistic function as the activation function is enormous, producing a much smoother and more stable learning curve. Moreover, the performance is much better, exceeding an accuracy of 0.91.
# + [markdown] id="dQwyT2AbNuWZ" colab_type="text"
# ## **Problem 3**
# For the data in datosT3.csv, find an interpolator using an Adaline. Comment on the algorithm's choices, such as the number of epochs, the learning rate, etc.
#
#
# + [markdown] id="Deo4Bq58jETR" colab_type="text"
# Since, when performing an interpolation, the degree of the polynomial that best fits the data is not known a priori, I first plotted the data to get a preliminary idea.
# + id="jNQjhkCoCtcJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 265} executionInfo={"status": "ok", "timestamp": 1592948517426, "user_tz": 240, "elapsed": 2441, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjPIz_LdAT0a3YmtheGAMEN8njcQGvelL-I_HcRtQ=s64", "userId": "17001211823553125705"}} outputId="0ebfa359-ad4f-405a-e24a-4126f3ef21e2"
def readCSV(file):
    """ Read two-dimensional data from a .csv file.
    Input:
        file: Name of the csv file.
    Output:
        X: First coordinates of the data.
        Y: Second coordinates of the data.
    """
    with open(file, newline='') as File:
        reader = csv.reader(File)
        X = []
        Y = []
        for r in reader:
            X.append(float(r[0]))
            Y.append(float(r[1]))
    return X, Y
X, Y = readCSV("/content/drive/My Drive/datosT3.csv")
plt.plot(X, Y)
plt.show()
# + [markdown] id="PHnBq1PkPMdJ" colab_type="text"
# The behavior of a third-degree polynomial can clearly be seen. Therefore, it was decided to use 4 adalines to perform the interpolation.
# + id="cy7hJS3xPDQh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 357} executionInfo={"status": "ok", "timestamp": 1592948809430, "user_tz": 240, "elapsed": 754, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjPIz_LdAT0a3YmtheGAMEN8njcQGvelL-I_HcRtQ=s64", "userId": "17001211823553125705"}} outputId="68fcc6dd-cc58-42ee-a7ac-e35db98a5ae3"
# Compute the network's input data.
X_train = np.array([np.array([x, x**2, x**3]) for x in X])
Y_train = np.array([np.array([y]) for y in Y])
W, ecm = adeline(X_train, Y_train, 0.005, 25, ecm_min=0.0555)
print("Pesos obtenidos: ")
print(W)
# + id="aznlrdESQACE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 299} executionInfo={"status": "ok", "timestamp": 1592948551928, "user_tz": 240, "elapsed": 1386, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjPIz_LdAT0a3YmtheGAMEN8njcQGvelL-I_HcRtQ=s64", "userId": "17001211823553125705"}} outputId="9b0e6616-c730-4558-bf1e-3c6679377fae"
# Now we plot the polynomial function obtained from the network.
# Define the polynomial function with the weights W.
def f(x):
    return W[0][3] + W[0][0]*x + W[0][1]*x**2 + W[0][2]*x**3
x = [i/100 for i in range(-200, 200)]
plt.plot(x, [f(i) for i in x], label="Interpolation")
plt.plot(X, Y, label="Data")
plt.legend()
plt.show()
print("Approximate polynomial: F(x) = ", round(W[0][3],4), " + ",
      round(W[0][0],4), "x + ",
      round(W[0][1],4), "x^2 + ",
      round(W[0][2],4), "x^3")
print("Mean squared error: ", E(X_train, Y_train, W))
# + [markdown] id="gHtmT042UoZb" colab_type="text"
# The number of epochs and the learning rate chosen were $25$ and $0.005$ respectively, since modifying them did not lower the mean squared error below 0.55.
#
# + [markdown] id="85ou7ck4HUK-" colab_type="text"
# ## **Problem 4**
# + [markdown] id="I1Yvd8EOCXjf" colab_type="text"
# ### **Problem 4.a**
# Let
# $$w = [\begin{array}{crl} w_1 & w_2 \end{array}]^t$$
# $$ \mathcal{E}(w) = \frac{1}{2} \sigma^2 - r^t w + \frac{1}{2} w^t R w $$
#
# Find the optimal value of $w$ for which $\mathcal{E}(w)$ is minimal.
# + [markdown] id="GjCkqHmxqSlM" colab_type="text"
# $$ \mathcal{E}(w) = \frac{1}{2} \sigma^2 - r^t w + \frac{1}{2} w^t R w $$
#
# $$ = \frac{1}{2} \sigma^2 - \left[ \begin{array}{crl} 0.8182 & 0.354 \\ \end{array} \right] \left[ \begin{array}{crl} w_1 \\ w_2 \\ \end{array} \right] + \frac{1}{2} [\begin{array}{crl} w_1 & w_2 \end{array}] \left[ \begin{array}{crl} 1 & 0.8182 \\ 0.8182 & 1 \\ \end{array} \right] \left[ \begin{array}{crl} w_1 \\ w_2 \\ \end{array} \right]$$
#
# $$ = \frac{1}{2} \sigma^2 - 0.8182 w_1 - 0.354 w_2 + \frac{1}{2} [\begin{array}{crl} w_1 & w_2 \end{array}] \left[ \begin{array}{crl} w_1 + 0.8182w_2 \\ 0.8182w_1 + w_2 \\ \end{array} \right] $$
#
# $$ = \frac{1}{2} \sigma^2 - 0.8182 w_1 - 0.354 w_2 + \frac{w_1^2}{2} + 0.8182w_1w_2 + \frac{w_2^2}{2}$$
#
# $$ \mathcal{E}(w) = \frac{w_1^2}{2} - 0.8182 w_1 + 0.8182w_1w_2 - 0.354 w_2 + \frac{w_2^2}{2} + \frac{1}{2} \sigma^2$$
#
# We compute the partial derivatives of $\mathcal{E}(w)$ with respect to $w_1, w_2$ and set them equal to 0.
# $$ \frac{\partial \mathcal{E}(w)}{\partial w_1} = w_1 + 0.8182w_2 - 0.8182 = 0$$
#
# $$w_1 = 0.8182(1 - w_2)$$
#
# $$ \frac{\partial \mathcal{E}(w)}{\partial w_2} = w_2 + 0.8182w_1 - 0.354 = 0$$
#
# $$w_1 = \frac{0.354 - w_2}{0.8182}$$
#
# Equating the two expressions for $w_1$, we obtain
# $$ 0.8182(1 - w_2) = \frac{0.354 - w_2}{0.8182}$$
# $$ 0.6694 - 0.6694w_2 + w_2 = 0.354$$
# $$ 0.3306w_2 = -0.3154 $$
# $$ w_2 = -0.954$$
# $$ w_1 = 0.8182(1 - w_2) = 0.8182(1 + 0.954) = 1.5987$$
#
# We know that the point $p=(1.5987, -0.954)$ is a critical point. We compute the second partial derivatives of $\mathcal{E}(w)$.
# $$ \frac{\partial^2 \mathcal{E}}{\partial w_1^2}(p) = 1$$
# $$ \frac{\partial^2 \mathcal{E}}{\partial w_2^2}(p) = 1$$
# $$ \frac{\partial^2 \mathcal{E}}{\partial w_1 \partial w_2}(p) = 0.8182$$
# $$ \frac{\partial^2 \mathcal{E}}{\partial w_2 \partial w_1}(p) = 0.8182$$
#
# Finally, since
# $$ \frac{\partial \mathcal{E}}{\partial w_1}(p) = 0 $$
# $$ \frac{\partial \mathcal{E}}{\partial w_2}(p) = 0 $$
# $$ \frac{\partial^2 \mathcal{E}}{\partial w_1^2}(p) > 0 $$
# $$ \left| \begin{array}{cc} \frac{\partial^2 \mathcal{E}}{\partial w_1^2}(p) & \frac{\partial^2 \mathcal{E}}{\partial w_1 \partial w_2}(p) \\ \frac{\partial^2 \mathcal{E}}{\partial w_2 \partial w_1}(p) & \frac{\partial^2 \mathcal{E}}{\partial w_2^2}(p) \end{array} \right| = \left| \begin{array}{cc} 1 & 0.8182 \\ 0.8182 & 1 \end{array} \right| = 0.3305 > 0$$
#
# Then, by the second derivative test, the point $p=(1.5987, -0.954)$ is a minimum of $\mathcal{E}(w)$. Moreover, since this is the only point at which the partial derivatives of $\mathcal{E}$ vanish, $p$ is the unique critical point of $\mathcal{E}$, and therefore a global minimum.
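Since $\mathcal{E}$ is quadratic, the same minimizer can be obtained by solving the linear system $Rw = r$ directly; a quick numerical cross-check of the calculation above, assuming NumPy:

```python
import numpy as np

R = np.array([[1.0, 0.8182], [0.8182, 1.0]])
r = np.array([0.8182, 0.354])
w_opt = np.linalg.solve(R, r)  # solves R w = r
print(w_opt)  # approximately [1.5987, -0.954]
```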
# + [markdown] id="bD2Q1nfZqTGb" colab_type="text"
# ### **Problem 4.b**
# Use the gradient descent method to compute the optimal value, using the given learning rates, and in each case plot the trajectory of the evolution of the weights $w(n)$ in the plane.
# + id="Az22n6mnSWp5" colab_type="code" colab={}
def E(W, s=1):
    """ Cost function.
    Input:
        W = [x, y] Synaptic weights.
        s: Sigma. Default value: 1.
    Output:
        x**2/2 - 0.8182*x + 0.8182xy - 0.354y + y**2/2 + s**2/2
    """
    return W[0]**2/2 - 0.8182*W[0] + 0.8182*W[0]*W[1] - 0.354*W[1] + W[1]**2/2 + s**2/2
def dE(W):
    """ Gradient of the cost function.
    Input:
        W = [x, y] Synaptic weights.
    Output:
        [x + 0.8182y - 0.8182, y + 0.8182x - 0.354]
    """
    return np.array([W[0] + 0.8182*W[1] - 0.8182, W[1] + 0.8182*W[0] - 0.354])
def descent_gradient(k, n, E, dE, epochs, err_min=0, dif_err=0):
    """ Implementation of gradient descent.
    Input:
        - k: Dimension of the synaptic weights.
        - n: Learning rate.
        - E: Cost function.
        - dE: Gradient of the cost function.
        - epochs: Number of epochs.
        - err_min: Error at which training stops early.
          Default value: 0.
        - dif_err: Difference between the errors of two consecutive epochs at
          which training stops. Default value: 0.
    Output:
        err: Value of the cost function at each epoch.
        W: Synaptic weights obtained at each epoch.
    """
    # Initialize the synaptic weights.
    w_j = np.array([uniform(-0.05,0.05) for i in range(k)])
    # Error at each epoch.
    err = []
    # Weights obtained at each epoch.
    W = []
    for j in range(epochs):
        # Update W.
        w_j -= n*dE(w_j)
        # Store the coordinates of W for this epoch.
        W.append(w_j.copy())
        # Compute the new error and check whether it has changed.
        new_err = E(w_j)
        err.append(new_err)
        print("Epoch:", j+1, ". Error obtained:", new_err)
        if j > 0:
            if err[-2] - new_err <= dif_err or new_err <= err_min:
                break
    return err, W
# + id="BLe4ABwbKtRg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 941} executionInfo={"status": "ok", "timestamp": 1593043929085, "user_tz": 240, "elapsed": 2588, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjPIz_LdAT0a3YmtheGAMEN8njcQGvelL-I_HcRtQ=s64", "userId": "17001211823553125705"}} outputId="6b6a8742-f876-4b04-fb58-30604538f446"
err, W = descent_gradient(2, 1, E, dE, 300, dif_err = 1e-7)
# Plot the level curves.
X = [i/140 + 0.3 for i in range(200)]
Y = [i/100 - 1.2 for i in range(200)]
Z = [[E( [X[i], Y[j]] ) for i in range(200)] for j in range(200)]
plt.contour(X, Y, Z, 45, cmap='RdGy')
# Plot the results.
X = [w[0] for w in W]
Y = [w[1] for w in W]
plt.plot(X, Y)
plt.xlabel("w_x")
plt.ylabel("w_y")
plt.title("Evolution of the weights.")
plt.show()
print("Weights obtained: ", W[-1])
# + id="y6oVCGnaY2Xc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1593043986610, "user_tz": 240, "elapsed": 4269, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjPIz_LdAT0a3YmtheGAMEN8njcQGvelL-I_HcRtQ=s64", "userId": "17001211823553125705"}} outputId="e78aa9ca-e0f7-4c1f-b460-a358ff1301bb"
err, W = descent_gradient(2, 0.3, E, dE, 300, dif_err = 1e-7)
# Plot the level curves.
X = [i/130 + 0.2 for i in range(200)]
Y = [i/100 - 1.2 for i in range(200)]
Z = [[E( [X[i], Y[j]] ) for i in range(200)] for j in range(200)]
plt.contour(X, Y, Z, 45, cmap='RdGy')
# Plot the results.
X = [w[0] for w in W]
Y = [w[1] for w in W]
plt.plot(X, Y)
plt.xlabel("w_x")
plt.ylabel("w_y")
plt.title("Evolution of the weights.")
plt.show()
print("Weights obtained: ", W[-1])
# + [markdown] id="4iEYDX1Hml21" colab_type="text"
# The difference in the evolution of the synaptic weights for the different $\eta$ is clearly visible: it is smoother with $\eta=0.3$, although with $\eta=1$ it converges faster. In both cases, the weights obtained coincide almost exactly with those predicted theoretically, $(1.5987, -0.954)$.
# + id="ooysLTXmj_xM" colab_type="code" colab={}
| Neural_Networks_Introduction/B_Adaline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/java/small_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="c9eStCoLX0pZ"
# **<h3>Predict the documentation for java code using the CodeTrans multitask training model</h3>**
# <h4>You can make free prediction online through this
# <a href="https://huggingface.co/SEBIS/code_trans_t5_small_code_documentation_generation_java_multitask">Link</a></h4> (When using the prediction online, you need to parse and tokenize the code first.)
# + [markdown] id="6YPrvwDIHdBe"
# **1. Load necessary libraries, including huggingface transformers**
# + colab={"base_uri": "https://localhost:8080/"} id="6FAVWAN1UOJ4" outputId="c838f2b7-8eb3-49d8-b236-6d1d9d31411e"
# !pip install -q transformers sentencepiece
# + id="53TAO7mmUOyI"
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
# + [markdown] id="xq9v-guFWXHy"
# **2. Load the summarization pipeline and load it into the GPU if available**
# + colab={"base_uri": "https://localhost:8080/", "height": 321, "referenced_widgets": ["e98b843550674e68ab627619c465f9c5", "f4ac46d345fc4d169b31d4eb8137f3d8", "522ef820d83d4943b0030fd1634ca744", "e081ef5636c34b46bc4f6cc649458242", "<KEY>", "<KEY>", "12727aefbdb74869b2670a27de95e2d2", "<KEY>", "fe84b7d2d0264344a700f5920d6903e5", "cee158d31131458e9f7341bea1baa3b6", "<KEY>", "b0cae7e2755a4f959594aac8fe2feace", "<KEY>", "cfe0aa0e839e4dea98fe86871daaba77", "a5388a0bacfa48c3a078100a1ef51281", "fac53ebdb425464890bf771810a38b96", "83eadb9e029f46919eda2865e474fa27", "<KEY>", "<KEY>", "00f0047e21414674b92aca2ff18586c0", "<KEY>", "76ad6895394b4057b343c5ea7ae1ea48", "<KEY>", "<KEY>", "6e1878e13ee543beb8fede751ed28850", "9b54d672b5a1420294f185a0ad7b7514", "<KEY>", "<KEY>", "736c8748d0324d20ab19e17a01b6a0f1", "<KEY>", "cef42faccff247ceacc1d838bac66ec0", "8a0c62f8ebe84e209eaa4312863a5b31", "d409b581e093486097b877b0b0fec70a", "08a2ea1d3a124b48995960788a179dd2", "<KEY>", "<KEY>", "9186ee5276764dad9a2a2b5ba0eaff16", "4d290a80ee3c4ffcafeec37a3a256487", "<KEY>", "4d837d50ecf74ac7a460f361d934e29a"]} id="5ybX8hZ3UcK2" outputId="eb2e614e-71db-49bc-8ed3-f07625964d15"
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java_multitask", skip_special_tokens=True),
device=0
)
# + [markdown] id="hkynwKIcEvHh"
# **3. Give the code for summarization, then parse and tokenize it**
# + id="nld-UUmII-2e"
code = """public static <T, U> Function<T, U> castFunction(Class<U> target) {\n return new CastToClass<T, U>(target);\n }""" #@param {type:"raw"}
# + id="cJLeTZ0JtsB5" colab={"base_uri": "https://localhost:8080/"} outputId="e8376160-31bb-4c71-c9db-d1cb34ff3616"
# !pip install tree_sitter
# !git clone https://github.com/tree-sitter/tree-sitter-java
# + id="hqACvTcjtwYK"
from tree_sitter import Language, Parser
Language.build_library(
'build/my-languages.so',
['tree-sitter-java']
)
JAVA_LANGUAGE = Language('build/my-languages.so', 'java')
parser = Parser()
parser.set_language(JAVA_LANGUAGE)
# + id="LLCv2Yb8t_PP"
def my_traverse(node, code_list):
lines = code.split('\n')
if node.child_count == 0:
line_start = node.start_point[0]
line_end = node.end_point[0]
char_start = node.start_point[1]
char_end = node.end_point[1]
if line_start != line_end:
code_list.append(' '.join([lines[line_start][char_start:]] + lines[line_start+1:line_end] + [lines[line_end][:char_end]]))
else:
code_list.append(lines[line_start][char_start:char_end])
else:
for n in node.children:
my_traverse(n, code_list)
return ' '.join(code_list)
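The traversal above simply concatenates the source text of the leaf nodes in order. The same idea can be shown without tree_sitter on a toy tree (the `Node` class here is a hypothetical stand-in for a parse-tree node):

```python
class Node:
    # Leaves carry text; inner nodes carry children.
    def __init__(self, text="", children=None):
        self.text = text
        self.children = children or []

def leaves(node, out):
    # Depth-first walk collecting leaf texts, mirroring my_traverse above.
    if not node.children:
        out.append(node.text)
    else:
        for child in node.children:
            leaves(child, out)
    return ' '.join(out)

toy = Node(children=[Node("public"), Node(children=[Node("static"), Node("void")]), Node("main")])
print(leaves(toy, []))  # -> public static void main
```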
# + id="BhF9MWu1uCIS" colab={"base_uri": "https://localhost:8080/"} outputId="ad566102-26da-4bde-fe69-a097ad8fc9cd"
tree = parser.parse(bytes(code, "utf8"))
code_list=[]
tokenized_code = my_traverse(tree.root_node, code_list)
print("Output after tokenization: " + tokenized_code)
# + [markdown] id="sVBz9jHNW1PI"
# **4. Make Prediction**
# + colab={"base_uri": "https://localhost:8080/"} id="KAItQ9U9UwqW" outputId="6b00da98-66b9-4eb7-ba29-fd336dfe5daa"
pipeline([tokenized_code])
| prediction/multitask/pre-training/function documentation generation/java/small_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + tags=["hide-cell"]
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import pandas as pd
import panel as pn
import ipywidgets as widgets
# -
# ## Simulating Mass budget ##
# + code_folding=[]
def mass_bal(n_simulation, MA, MB, MC, R_A, R_B):
    A = np.zeros(n_simulation)  # create an array of zeros
    B = np.zeros(n_simulation)
    C = np.zeros(n_simulation)
    time = np.arange(n_simulation)
    A[0] = MA  # starting input values
    B[0] = MB
    C[0] = MC
    for i in range(0, n_simulation-1):
        A[i+1] = A[i]-R_A*A[i]
        B[i+1] = B[i]+R_A*A[i]-R_B*B[i]
        C[i+1] = C[i]+R_B*B[i]
    summ = A+B+C  # total mass at each time step
    d = {"Mass_A": A, "Mass_B": B, "Mass_C": C, "Total Mass": summ}
    df = pd.DataFrame(d)  # Generating result table
    label = ["Mass A (g)", "Mass B (g)", "Mass C (g)"]
    fig = plt.figure(figsize=(6,4))
    plt.plot(time, A, time, B, time, C, linewidth=3)  # plotting the results
    plt.xlabel("Time [Time Unit]"); plt.ylabel("Mass [g]")  # placing axis labels
    plt.legend(label, loc=0); plt.grid(); plt.xlim([0,n_simulation]); plt.ylim(bottom=0)  # legends, grids, x/y limits
    plt.show()  # display plot
    print(df.round(2))
N = widgets.BoundedIntText(value=20,min=0,max=100,step=1,description= 'Δ t (day)',disabled=False)
A = widgets.BoundedFloatText(value=100,min=0,max=1000.0,step=1,description='M<sub>A</sub> (kg)',disabled=False)
B = widgets.BoundedFloatText(value=5,min=0,max=1000.0,step=1,description='M<sub>B</sub> (kg)',disabled=False)
C = widgets.BoundedFloatText(value=10,min=0,max=1000,step=0.1,description='M<sub>C</sub> (kg)',disabled=False)
RA = widgets.BoundedFloatText(value=0.2,min=0,max=100,step=0.1,description='R<sub>A</sub> (day<sup>-1 </sup>)',disabled=False)
RB = widgets.BoundedFloatText(value=0.2,min=0,max=100,step=0.1,description='R<sub>B</sub> (day<sup>-1 </sup>)',disabled=False)
interactive_plot = widgets.interactive(mass_bal, n_simulation = N, MA=A, MB=B, MC=C, R_A=RA, R_B=RB,)
output = interactive_plot.children[-1]
#output.layout.height = '350px'
interactive_plot
# -
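The recursion `A[i+1] = A[i] - R_A*A[i]` above has the closed form $A_i = M_A (1 - R_A)^i$, which gives a quick sanity check on the simulation; a minimal sketch with illustrative values:

```python
def decay(MA, R_A, n):
    # Discrete first-order decay: A[i+1] = A[i] - R_A*A[i]
    A = [MA]
    for _ in range(n - 1):
        A.append(A[-1] - R_A * A[-1])
    return A

A = decay(100.0, 0.2, 5)
print(A[4])  # closed form predicts 100 * (1 - 0.2)**4 = 40.96
```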
| Lecture IPYNB/decay.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/wamaithaNyamu/ALGORITHMS-AND-DATASTRUCTURES/blob/master/Monitoring_and_reporting_changes_in_surface_water_using_satellite_Image_Data.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="rjDiD_iBKq7n" colab_type="text"
# #Introduction
# + [markdown] id="lCADGgAUKHJm" colab_type="text"
# According to UNESCO, fresh water is the most important resource for humankind's survival. Fresh water can be an enabler of, or a source of, technological and social cooperation or conflict in the world. Comparing past water resources with those of the present can help us understand how past societal and economic decisions can contribute to more informed management decisions in the future. We have publicly available datasets from organizations like NASA and the European Space Agency. You have been hired by UNESCO to use satellite imagery and deep learning image segmentation algorithms to assess the changes to fresh water resources over time.
# + [markdown] id="JYaOxSDoL-zw" colab_type="text"
# #Getting started
# + id="WczgzZ9HMXdq" colab_type="code" colab={}
# + [markdown] id="9YLmgbAHMCsj" colab_type="text"
# #Data Acquisition and Preprocessing
# + id="xaQe86WwMHRS" colab_type="code" colab={}
# + [markdown] id="SN9JWs2EMK1d" colab_type="text"
# #Enhancing and segmenting Images
# + id="TWJKdzjVMYH8" colab_type="code" colab={}
# + [markdown] id="nvYGb3CpMLrH" colab_type="text"
# #Model Training and Evaluation
# + id="09--ucrXMYuS" colab_type="code" colab={}
# + [markdown] id="H8QMjqYPMOji" colab_type="text"
# #Model Optimisation
# + id="b95VH6-CMZLV" colab_type="code" colab={}
# + [markdown] id="3GZssy7iMQWI" colab_type="text"
# #Reporting to UNESCO
# + id="OC-mJ1yUMT2Y" colab_type="code" colab={}
| Monitoring_and_reporting_changes_in_surface_water_using_satellite_Image_Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercise: Web Scraping to Extract Information from an Article
# This exercise consists of extracting the most important information from a web page. For this exercise we chose an article on transhumanism that can be found [at this link](https://nuso.org/articulo/hacia-un-futuro-transhumano/). The article is in Spanish. We include its abstract here.
#
# > Transhumanism is an intellectual movement that proposes overcoming humanity's natural limits through technological enhancement and, eventually, the separation of the mind from the human body. Although historically marginal and sectarian, its proposals of enhancement medicine, its radical materialism, and even its controversial ideas of eugenics, immortality, and singularity are attracting growing interest at a time when technology threatens to advance into spheres of human life that until now seemed untouchable.
# In this exercise, we will extract the paragraphs of the text, a summary of it, the vocabulary used, the title, the publication date, the name of the journal, and the name of the author, among other things.
# **First we import the necessary libraries**
import requests
from bs4 import BeautifulSoup
import re
import nltk
nltk.download('stopwords')  # corpus needed by nltk.corpus.stopwords
nltk.download('punkt')      # tokenizer models needed by nltk.word_tokenize
import string
# Now we fetch the page as an object
# Fetch the article's HTML
respuesta = requests.get("https://nuso.org/articulo/hacia-un-futuro-transhumano/")
pagina = BeautifulSoup(respuesta.text, 'html.parser')
# First, let's get the magazine name. The `pagina` object exposes the HTML document's title through its `title` attribute.
# +
def nombre_revista(pagina):
return pagina.title.text.split('|')[1].strip()
nombre_revista = nombre_revista(pagina)
# -
# We can see that splitting `title` on `'|'` returns, in our specific case, an array with 2 elements: the article title and the magazine title. Now we want the article name.
# +
def nombre_articulo(pagina):
return pagina.title.text.split('|')[0].strip()
nombre_articulo = nombre_articulo(pagina)
# -
# We can also obtain the issue number of the magazine in which the article appeared. That information is found in the top section of the web page and indicates how many issues the magazine had published up to that point.
# +
def numero_revista(pagina):
regex = r'Nº (\d+)'
texto_pagina = pagina.find(name='div', attrs={'class': 'section-title has-magazine'}).span.text
numero_revista = re.findall(regex,texto_pagina)
return numero_revista[0]
numero_revista = numero_revista(pagina)
# -
# Next we get the article's publication date.
# +
def fecha(pagina):
texto_pagina = pagina.find(name='div', attrs={'class': 'section-title has-magazine'}).span.text
return texto_pagina.split('/')[1].strip()
fecha = fecha(pagina)
# -
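# The parsing logic above can be checked offline on a hypothetical header string (the exact text of the `section-title has-magazine` span is an assumption here):

```python
import re

# Hypothetical header text in the format the page's span is assumed to use
texto_pagina = "Nº 282 / Julio - Agosto 2019"

# Same regex as in numero_revista: capture the issue number after 'Nº'
numero = re.findall(r'Nº (\d+)', texto_pagina)[0]

# Same logic as in fecha: the date is the part after the '/'
fecha_publicacion = texto_pagina.split('/')[1].strip()

print(numero)             # -> 282
print(fecha_publicacion)  # -> Julio - Agosto 2019
```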
# We also want the article's summary, or abstract: the short explanation of what the article is about.
# +
def resumen_articulo(pagina):
return pagina.find(name='div', attrs={'class':'summary'}).text.strip()
resumen_articulo = resumen_articulo(pagina)
# -
# Finally, we want the main text of the article.
# +
def obtener_lista_de_parrafos(pagina):
    # Helper needed by redaccion_principal; defined here so this cell runs
    # on its own (it is repeated in the full listing at the end)
    contenedor_texto = pagina.find(name='div', attrs={'class': 'uk-width-expand'})
    return contenedor_texto.findAll(name='p')

def redaccion_principal(pagina):
    parrafos = obtener_lista_de_parrafos(pagina)
    texto_articulo = ''
    for parrafo in parrafos:
        texto_articulo = texto_articulo + parrafo.text + '\n\n'
    return texto_articulo.strip()
redaccion_principal = redaccion_principal(pagina)
# -
# After obtaining all the article's main attributes, we can do some "text cleaning". If, for example, we wanted to use this text to train a machine learning model, we would normally remove the elements that are not relevant to it: for instance, stop words such as articles, which carry little information.
#
# We also usually want the text's vocabulary, that is, how many distinct words the text contains.
# +
def obtener_palabras_de_parada():
return set( nltk.corpus.stopwords.words('spanish') + list(string.punctuation))
def obtener_vocabulario(texto_en_palabras, palabras_parada):
return [palabra for palabra in texto_en_palabras if palabra not in palabras_parada]
palabras_parada = obtener_palabras_de_parada()
texto_en_palabras = nltk.word_tokenize(redaccion_principal)
vocabulario = obtener_vocabulario(texto_en_palabras, palabras_parada)
# -
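# As a minimal, dependency-free sketch of the same idea, a `collections.Counter` can stand in for NLTK on a toy token list (the tokens and stop-word set below are made up for illustration, not NLTK's full Spanish list):

```python
from collections import Counter

# Toy tokens standing in for texto_en_palabras; the stop-word set is a
# tiny hand-picked sample, not NLTK's full Spanish list.
tokens = ["el", "transhumanismo", "es", "un", "movimiento",
          "intelectual", ",", "el", "movimiento", "propone"]
palabras_parada = {"el", "es", "un", ","}

vocabulario = [t for t in tokens if t not in palabras_parada]
frecuencias = Counter(vocabulario)

print(len(set(vocabulario)))       # distinct words -> 4
print(frecuencias.most_common(1))  # -> [('movimiento', 2)]
```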
# Now we simply show all the code needed to complete this exercise.
# +
# Functions for processing the HTML document
def nombre_revista(pagina):
return pagina.title.text.split('|')[1].strip()
def nombre_articulo(pagina):
return pagina.title.text.split('|')[0].strip()
def numero_revista(pagina):
regex = r'Nº (\d+)'
texto_pagina = pagina.find(name='div', attrs={'class': 'section-title has-magazine'}).span.text
numero_revista = re.findall(regex,texto_pagina)
return numero_revista[0]
def fecha(pagina):
texto_pagina = pagina.find(name='div', attrs={'class': 'section-title has-magazine'}).span.text
return texto_pagina.split('/')[1].strip()
def resumen_articulo(pagina):
return pagina.find(name='div', attrs={'class':'summary'}).text.strip()
def obtener_lista_de_parrafos(pagina):
contenedor_texto = pagina.find(name='div', attrs={'class': 'uk-width-expand'})
return contenedor_texto.findAll(name='p')
def redaccion_principal(pagina):
parrafos = obtener_lista_de_parrafos(pagina)
texto_articulo = ''
for parrafo in parrafos:
texto_articulo = texto_articulo + parrafo.text + '\n\n'
return texto_articulo.strip()
# Retrieve the article's textual information
nombre_revista = nombre_revista(pagina)
nombre_articulo = nombre_articulo(pagina)
numero_revista = numero_revista(pagina)
fecha = fecha(pagina)
resumen_articulo = resumen_articulo(pagina)
redaccion_principal = redaccion_principal(pagina)
# Functions for text cleaning
def obtener_palabras_de_parada():
return set( nltk.corpus.stopwords.words('spanish') + list(string.punctuation))
def obtener_vocabulario(texto_en_palabras, palabras_parada):
return [palabra for palabra in texto_en_palabras if palabra not in palabras_parada]
# Extract the relevant vocabulary from the text
palabras_parada = obtener_palabras_de_parada()
texto_en_palabras = nltk.word_tokenize(redaccion_principal)
vocabulario = obtener_vocabulario(texto_en_palabras, palabras_parada)
| Web Application/Notebooks/2. Web Scraping/Ejercicio_NUSO_web_scrappng.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Analysis of population genetic signatures of selection using the SFS and quantitative metrics of selection in individual lineages (related to Figure S2 and Figure S3)
# +
from __future__ import division
import sys
import os
import time
import copy
import pickle
import numpy as np
import pandas as pd
import scipy
# %matplotlib inline
from matplotlib import pyplot as plt
import matplotlib as mpl
from matplotlib import gridspec
import seaborn as sns
import bct
output_dir = "outs"
output_suffix = ""
output_formats = [".pdf", ".png"]
def save_figure(fig, name, output_dir, output_suffix, output_formats, savefig_args):
    # Relies on the global `savefig` flag defined below this function
    if savefig:
for output_format in output_formats:
fig.savefig(output_dir + "/" + name + output_suffix + output_format, **savefig_args)
return None
savefig = True
savefig_args = {"dpi": 300, "bbox_inches": "tight", "pad_inches": 0.2}
mpl.rc('savefig', dpi=300)
sns.set_style("ticks")
sns.set_context("talk")
myColors = ["#E69F00", "#56B4E9", "#D55E00", "#009E73"]
# -
# # Load data
# +
# Lineage dynamics data
df_expanded = pd.read_csv("data/df_expanded.filtered.csv", index_col=0)
df_persistent = pd.read_csv("data/df_persistent.filtered.csv", index_col=0)
print "Lineages"
print "Expanded", df_expanded.shape[0]
print "Persistent", df_persistent.shape[0]
# +
# Load frequencies of mutations for all lineages
freqs = pickle.load(open("data/SFS_Bulk_freqs.pickle"))
# Load number of leaves in each lineage
lineage_sizes = pickle.load(open("data/SFS_Bulk_lineage_sizes.pickle"))
# -
# Metrics of selection
df_metrics = pd.read_csv("data/df_metrics.csv", index_col=0)
df_metrics.head()
# +
# Sort lineages by metric of selection
df_metrics.sort_values(by="H_pvalue_kingman", ascending=True, inplace=True)
df_metrics_expanded = df_metrics.loc[df_metrics["label_dynamics"] == "Vaccine-responsive"]
df_metrics_persistent = df_metrics.loc[df_metrics["label_dynamics"] == "Persistent"]
# -
df_metrics.head()
# # Calculate SFS for individual lineages
import bct
# Choose bins for SFS
bins = np.array([1e-5, 1e-4, 1e-3, 1e-2, 0.1, 0.5, 0.9, 0.99, 0.999, 0.9999, 0.99999])
bin_centers_manual = np.array([5e-5, 5e-4, 5e-3, 5e-2, 0.25, 0.75, 1-5e-2, 1-5e-3, 1-5e-4, 1-5e-5])
bin_centers = np.sqrt(bins[1:] * bins[:-1])
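# As a sanity check, the geometric-mean bin centers computed above with NumPy can be reproduced with the standard library alone:

```python
import math

# Same bin edges as above
edges = [1e-5, 1e-4, 1e-3, 1e-2, 0.1, 0.5, 0.9, 0.99, 0.999, 0.9999, 0.99999]

# Geometric mean of adjacent edges, equivalent to np.sqrt(edges[1:] * edges[:-1])
centers = [math.sqrt(lo * hi) for lo, hi in zip(edges[:-1], edges[1:])]

print(len(centers))  # one center per bin -> 10
print(centers[0])    # sqrt(1e-5 * 1e-4) ~ 3.16e-05
```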
# +
# Compute SFS for every lineage in an ensemble
def calc_sfs_ensemble(freqs, lineage_sizes, lineage_uids, bins):
""" Calculate mean SFS over an ensemble by taking mean value at each bin """
S = np.empty((len(lineage_uids), len(bins)-1))
for i, lineage_uid in enumerate(lineage_uids):
myFreqs = freqs[lineage_uid].values()
myLeaves = lineage_sizes[lineage_uid]
binned_sfs, binned_sfs_normed = bct.bin_sfs_cut(myFreqs, bins=bins, leaves=myLeaves)
S[i,:] = binned_sfs_normed
return S
# Calculate SFS
S_expanded = calc_sfs_ensemble(freqs, lineage_sizes, list(df_metrics_expanded.index), bins)
S_persistent = calc_sfs_ensemble(freqs, lineage_sizes, list(df_metrics_persistent.index), bins)
# -
# # Plot SFSs of individual lineages as heatmaps with metrics aligned
# Define function to plot SFS as heatmap
def plot_sfs_ensemble(ax, S, bin_centers, cmap_name, **kwargs):
from matplotlib.colors import LogNorm
S_pseudocount = S + 1e-2
S_masked = np.ma.array(S_pseudocount, mask=np.isnan(S_pseudocount))
cmap = mpl.cm.get_cmap(cmap_name)
cmap.set_bad('gray', 1.0)
ax.imshow(S_masked, norm=LogNorm(vmin=1e-2, vmax=1e6),
interpolation='none', cmap=cmap, **kwargs)
# ax.set_adjustable('box-forced')
ax.xaxis.set_ticks_position('none')
ax.yaxis.set_ticks_position('none')
ax.set_xticklabels([])
ax.set_yticklabels([])
return ax
# # Vaccine-responsive lineages
fig, ax = plt.subplots(1, 1, figsize=(4,8))
plot_sfs_ensemble(ax, S_expanded[::-1], bin_centers, cmap_name="YlGnBu_r")
ax.set_ylim(-0.5, df_metrics_expanded.shape[0]-0.5)
plt.tight_layout()
save_figure(fig, "SFS_Heatmaps_expanded", output_dir, output_suffix, output_formats, savefig_args)
# Define function for plotting colorbar (cbar is separate and we place it into the final figure)
def plot_sfs_ensemble_cbar(ax, S, bin_centers, cmap_name, **kwargs):
from matplotlib.colors import LogNorm
S_pseudocount = S + 1e-2
S_masked = np.ma.array(S_pseudocount, mask=np.isnan(S_pseudocount))
cmap = mpl.cm.get_cmap(cmap_name)
cmap.set_bad('gray', 1.0)
im = ax.imshow(S_masked, norm=LogNorm(vmin=1e-2, vmax=1e6),
interpolation='none', cmap=cmap, **kwargs)
cbar = fig.colorbar(im, ticks=[1e6, 1e4, 1e2, 1e0, 1e-2], orientation='horizontal')
cbar.set_ticklabels([r'$10^{6}$', r'$10^{4}$', r'$10^{2}$', r'$1$', 0])
# ax.set_adjustable('box-forced')
ax.xaxis.set_ticks_position('none')
ax.yaxis.set_ticks_position('none')
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.axis('off')
return ax
fig, ax = plt.subplots(1, 1, figsize=(2,2))
plot_sfs_ensemble_cbar(ax, S_expanded[::-1], bin_centers, cmap_name="YlGnBu_r")
save_figure(fig, "SFS_Heatmaps_expanded_cbar", output_dir, output_suffix, output_formats, savefig_args)
# +
# Plot metrics aligned with heatmap
from matplotlib import gridspec
from matplotlib.colors import LogNorm
mpl.rcParams.update({'font.size': 22})
cmap_scatter = "RdYlBu"
fig = plt.figure(figsize=(8,8))
gs = gridspec.GridSpec(1, 6)
ax0 = fig.add_subplot(gs[0,0])
ax1 = fig.add_subplot(gs[0,1], sharey=ax0)
ax2 = fig.add_subplot(gs[0,2], sharey=ax0)
ax3 = fig.add_subplot(gs[0,3], sharey=ax0)
ax4 = fig.add_subplot(gs[0,4], sharey=ax0)
# ax5 = fig.add_subplot(gs[0,5], sharey=ax0)
axes = [ax0, ax1, ax2, ax3, ax4]
# axes_cbars = [ax0_cbar, ax1_cbar, ax2_cbar, ax3_cbar]
y = range(0,df_metrics_expanded.shape[0])[::-1]
s = 8
ax = ax0
x = df_metrics_expanded["H"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.plot((0,0),(min(y),max(y)+10), "k--", lw=1)
ticks=[40, 0, -40, -80]
ax.set_xticks(ticks)
ax = ax1
# x = np.log10(df_metrics_expanded["pvalue_kingman"])
x = df_metrics_expanded["H_pvalue_kingman"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.set_xlim(1e-7, 2)
ax.set_xscale('log')
ticks=[1e-8, 1e-4, 1]
ax.set_xticks(ticks)
ticklabels=[r'$10^{-8}$', r'$10^{-4}$', 1]
ax.set_xticklabels(ticklabels)
ax = ax2
x = df_metrics_expanded["D"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.plot((0,0),(min(y),max(y)+10), "k--", lw=1)
ticks=[-3, 0, 3]
ax.set_xticks(ticks)
ax = ax3
# x = np.log10(df_metrics_expanded["D_pvalue_kingman"])
x = df_metrics_expanded["D_pvalue_kingman"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.set_xlim(1e-3, 1.4)
ax.set_xscale("log")
# ticks=[-2, -1, 0]
# ax.set_xticks(ticks)
ax = ax4
x = df_metrics_expanded['num_seq']
sc = ax.barh(np.array(y)-0.5, x, 0.5, color="k")
ax.set_xscale("log")
ticks=[1e2, 1e3, 1e4, 1e5]
ax.set_xticks(ticks)
for ax in axes:
ax.set_ylim(-1, df_metrics_expanded.shape[0])
ax.tick_params(labelsize=8, pad=1)
ax.yaxis.set_ticks_position('none')
ax.set_yticklabels([])
ax.xaxis.tick_top()
# sns.despine()
plt.subplots_adjust(wspace=0.3)
save_figure(fig, "SFS_Heatmaps_metrics_expanded", output_dir, output_suffix, output_formats, savefig_args)
# -
fig, ax = plt.subplots(1, 1, figsize=(1,7))
myColors_subjects = ['#e41a1c','#377eb8','#a6cee3','#984ea3','#f781bf']
patient_uid_to_color = dict(zip([2,3,6,7,8],myColors_subjects))
c = np.array([patient_uid_to_color[int(str(x)[0])] for x in list(df_metrics_expanded.index)])
x = [0] * len(c)
ax.scatter(x, y, c=c, marker="s", s=2)
ax.set_xlim(-1,1)
ax.axis('off')
save_figure(fig, "SFS_Heatmaps_subjects_expanded", output_dir, output_suffix, output_formats, savefig_args)
# +
# Plot everything together
from matplotlib import gridspec
from matplotlib.colors import LogNorm
mpl.rcParams.update({'font.size': 22})
cmap_scatter = "RdYlBu"
fig = plt.figure(figsize=(8,8))
outer = gridspec.GridSpec(1, 2, width_ratios=[2, 6], wspace=-0.05)
gs = gridspec.GridSpecFromSubplotSpec(1, 6, subplot_spec = outer[1], wspace = 0.25)
ax0 = fig.add_subplot(outer[0,0])
ax1 = fig.add_subplot(gs[0,0], sharey=ax0)
ax2 = fig.add_subplot(gs[0,1], sharey=ax0)
ax3 = fig.add_subplot(gs[0,2], sharey=ax0)
ax4 = fig.add_subplot(gs[0,3], sharey=ax0)
ax5 = fig.add_subplot(gs[0,4], sharey=ax0)
ax6 = fig.add_subplot(gs[0,5], sharey=ax0)
axes = [ax0, ax1, ax2, ax3, ax4, ax5, ax6]
y = range(0,df_metrics_expanded.shape[0])[::-1]
s = 7
ax = ax0
plot_sfs_ensemble(ax, S_expanded[::-1], bin_centers, cmap_name="YlGnBu_r")
ax.set_ylim(-0.5, df_metrics_expanded.shape[0]-0.5)
ax.set_aspect(0.75)
ax.set_adjustable('box-forced')
ax = ax1
x = df_metrics_expanded["H"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.plot((0,0),(min(y),max(y)+10), "k--", lw=1)
ticks=[40, 0, -40]
ax.set_xticks(ticks)
ax = ax2
# x = np.log10(df_metrics_expanded["pvalue_kingman"])
x = df_metrics_expanded["H_pvalue_kingman"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.set_xlim(1e-6, 2)
ax.set_xscale('log')
ticks=[1e-6, 1e-3, 1]
ax.set_xticks(ticks)
ticklabels=[r'$10^{-6}$', r'$10^{-3}$', 1]
ax.set_xticklabels(ticklabels)
ax = ax3
x = df_metrics_expanded["D"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.plot((0,0),(min(y),max(y)+10), "k--", lw=1)
ticks=[-3, 0, 3]
ax.set_xticks(ticks)
ax = ax4
# x = np.log10(df_metrics_expanded["D_pvalue_kingman"])
x = df_metrics_expanded["D_pvalue_kingman"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.set_xlim(1e-3, 1.4)
ax.set_xscale("log")
# ticks=[-2, -1, 0]
# ax.set_xticks(ticks)
ax = ax5
x = df_metrics_expanded['num_seq']
sc = ax.barh(np.array(y)-0.25, x, 0.5, color="k")
ax.set_xscale("log")
ticks=[1e2, 1e3, 1e4, 1e5]
ax.set_xticks(ticks)
ax = ax6
myColors_subjects = ['#e41a1c','#377eb8','#a6cee3','#984ea3','#f781bf']
patient_uid_to_color = dict(zip([2,3,6,7,8],myColors_subjects))
c = np.array([patient_uid_to_color[int(str(x)[0])] for x in list(df_metrics_expanded.index)])
x = [0] * len(c)
ax.scatter(x, y, c=c, marker="s", s=24)
ax.set_xlim(-1,1)
ax.axis('off')
for ax in axes[1:]:
ax.set_ylim(-0.5, df_metrics_expanded.shape[0]-0.5)
ax.tick_params(labelsize=6, pad=1)
ax.yaxis.set_ticks_position('none')
ax.set_yticklabels([])
ax.xaxis.tick_top()
save_figure(fig, "SFS_Heatmaps_SFSWithMetricsSubjects_expanded", output_dir, output_suffix, output_formats, savefig_args)
# +
# Print summaries of lineages
pvalue_cutoff = 0.05
print "Fraction of lineages with significant deviation from neutrality by Fay and Wu's H"
print sum(df_metrics_expanded["H_pvalue_kingman"] < pvalue_cutoff) / float(df_metrics_expanded["H_pvalue_kingman"].shape[0])
print
print "Fraction of lineages with significant deviation from neutrality by non-monotonicity D"
print sum(df_metrics_expanded["D_pvalue_kingman"] < pvalue_cutoff) / float(df_metrics_expanded["D_pvalue_kingman"].shape[0])
print
print "Fraction of lineages with significant deviation from neutrality by Fay and Wu's H AND D"
print sum((df_metrics_expanded["H_pvalue_kingman"] < pvalue_cutoff) & (df_metrics_expanded["D_pvalue_kingman"] < 0.05)) / float(df_metrics_expanded.shape[0])
print
print "Fraction of lineages with significant deviation from neutrality by Fay and Wu's H OR D"
print sum((df_metrics_expanded["H_pvalue_kingman"] < pvalue_cutoff) | (df_metrics_expanded["D_pvalue_kingman"] < 0.05)) / float(df_metrics_expanded.shape[0])
# -
# # Persistent lineages
# Heatmaps
fig, ax = plt.subplots(1, 1, figsize=(4,8))
plot_sfs_ensemble(ax, S_persistent[::-1], bin_centers, cmap_name="YlGnBu_r")
ax.set_ylim(-0.5, df_metrics_persistent.shape[0]-0.25)
plt.tight_layout()
save_figure(fig, "SFS_Heatmaps_persistent", output_dir, output_suffix, output_formats, savefig_args)
# +
# Plot metrics aligned with heatmap
from matplotlib import gridspec
from matplotlib.colors import LogNorm
mpl.rcParams.update({'font.size': 22})
cmap_scatter = "RdYlBu"
fig = plt.figure(figsize=(8,8))
gs = gridspec.GridSpec(1, 6)
ax0 = fig.add_subplot(gs[0,0])
ax1 = fig.add_subplot(gs[0,1], sharey=ax0)
ax2 = fig.add_subplot(gs[0,2], sharey=ax0)
ax3 = fig.add_subplot(gs[0,3], sharey=ax0)
ax4 = fig.add_subplot(gs[0,4], sharey=ax0)
# ax5 = fig.add_subplot(gs[0,5], sharey=ax0)
axes = [ax0, ax1, ax2, ax3, ax4]
# axes_cbars = [ax0_cbar, ax1_cbar, ax2_cbar, ax3_cbar]
y = range(0,df_metrics_persistent.shape[0])[::-1]
s = 7
ax = ax0
x = df_metrics_persistent["H"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.plot((0,0),(min(y),max(y)+10), "k--", lw=1)
ax.set_xlim(left=-160)
ticks=[40, 0, -80, -160]
ax.set_xticks(ticks)
ax = ax1
# x = np.log10(df_metrics_persistent["pvalue_kingman"])
x = df_metrics_persistent["H_pvalue_kingman"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.set_xlim(1e-8, 2)
ax.set_xscale('log')
ticks=[1e-8, 1e-4, 1]
ax.set_xticks(ticks)
ticklabels=[r'$10^{-8}$', r'$10^{-4}$', 1]
ax.set_xticklabels(ticklabels)
ax = ax2
x = df_metrics_persistent["D"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.plot((0,0),(min(y),max(y)+10), "k--", lw=1)
ticks=[-3, 0, 3]
ax.set_xticks(ticks)
ax = ax3
# x = np.log10(df_metrics_persistent["D_pvalue_kingman"])
x = df_metrics_persistent["D_pvalue_kingman"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.set_xlim(1e-3, 1.4)
ax.set_xscale("log")
# ticks=[-2, -1, 0]
# ax.set_xticks(ticks)
ax = ax4
x = df_metrics_persistent['num_seq']
sc = ax.barh(np.array(y)-0.5, x, 0.5, color="k")
ax.set_xscale("log")
ticks=[1e2, 1e3, 1e4, 1e5]
ax.set_xticks(ticks)
for ax in axes:
ax.set_ylim(-1, df_metrics_persistent.shape[0])
ax.tick_params(labelsize=8, pad=1)
ax.yaxis.set_ticks_position('none')
ax.set_yticklabels([])
ax.xaxis.tick_top()
# sns.despine()
plt.subplots_adjust(wspace=0.3)
save_figure(fig, "SFS_Heatmaps_metrics_persistent", output_dir, output_suffix, output_formats, savefig_args)
# -
# Subjects
fig, ax = plt.subplots(1, 1, figsize=(1,7))
myColors_subjects = ['#e41a1c','#377eb8','#a6cee3','#984ea3','#f781bf']
patient_uid_to_color = dict(zip([2,3,6,7,8],myColors_subjects))
c = np.array([patient_uid_to_color[int(str(x)[0])] for x in list(df_metrics_persistent.index)])
x = [0] * len(c)
ax.scatter(x, y, c=c, marker="s", s=2)
ax.set_xlim(-1,1)
ax.axis('off')
save_figure(fig, "SFS_Heatmaps_subjects_persistent", output_dir, output_suffix, output_formats, savefig_args)
# +
# Plot everything together
from matplotlib import gridspec
from matplotlib.colors import LogNorm
mpl.rcParams.update({'font.size': 22})
cmap_scatter = "RdYlBu"
fig = plt.figure(figsize=(8,8))
outer = gridspec.GridSpec(1, 2, width_ratios=[2, 6], wspace=-0.05)
gs = gridspec.GridSpecFromSubplotSpec(1, 6, subplot_spec = outer[1], wspace = 0.25)
ax0 = fig.add_subplot(outer[0,0])
ax1 = fig.add_subplot(gs[0,0], sharey=ax0)
ax2 = fig.add_subplot(gs[0,1], sharey=ax0)
ax3 = fig.add_subplot(gs[0,2], sharey=ax0)
ax4 = fig.add_subplot(gs[0,3], sharey=ax0)
ax5 = fig.add_subplot(gs[0,4], sharey=ax0)
ax6 = fig.add_subplot(gs[0,5], sharey=ax0)
axes = [ax0, ax1, ax2, ax3, ax4, ax5, ax6]
y = range(0,df_metrics_persistent.shape[0])[::-1]
s = 4
ax = ax0
plot_sfs_ensemble(ax, S_persistent[::-1], bin_centers, cmap_name="YlGnBu_r")
ax.set_ylim(-0.5, df_metrics_persistent.shape[0]-0.5)
ax.set_aspect(0.55)
ax.set_adjustable('box-forced')
ax = ax1
x = df_metrics_persistent["H"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.plot((0,0),(min(y),max(y)+10), "k--", lw=1)
ax.set_xlim(left=-160)
ticks=[40, 0, -80, -160]
ax.set_xticks(ticks)
ax = ax2
# x = np.log10(df_metrics_persistent["pvalue_kingman"])
x = df_metrics_persistent["H_pvalue_kingman"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.set_xlim(1e-7, 2)
ax.set_xscale('log')
ticks=[1e-8, 1e-4, 1]
ax.set_xticks(ticks)
ticklabels=[r'$10^{-8}$', r'$10^{-4}$', 1]
ax.set_xticklabels(ticklabels)
ax = ax3
x = df_metrics_persistent["D"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.plot((0,0),(min(y),max(y)+10), "k--", lw=1)
ticks=[-3, 0, 3]
ax.set_xticks(ticks)
ax = ax4
# x = np.log10(df_metrics_persistent["D_pvalue_kingman"])
x = df_metrics_persistent["D_pvalue_kingman"]
sc = ax.scatter(x, y, s=s, c="k", cmap=cmap_scatter, lw=0)
ax.set_xlim(1e-2, 1.4)
ax.set_xscale("log")
# ticks=[-2, -1, 0]
# ax.set_xticks(ticks)
ax = ax5
x = df_metrics_persistent['num_seq']
sc = ax.barh(np.array(y)-0.25, x, 0.5, color="k")
ax.set_xscale("log")
ticks=[1e2, 1e3, 1e4, 1e5]
ax.set_xticks(ticks)
ax = ax6
myColors_subjects = ['#e41a1c','#377eb8','#a6cee3','#984ea3','#f781bf']
patient_uid_to_color = dict(zip([2,3,6,7,8],myColors_subjects))
c = np.array([patient_uid_to_color[int(str(x)[0])] for x in list(df_metrics_persistent.index)])
x = [0] * len(c)
ax.scatter(x, y, c=c, marker="s", s=10)
ax.set_xlim(-1,1)
ax.axis('off')
for ax in axes[1:]:
ax.set_ylim(-0.8, df_metrics_persistent.shape[0]-0.1)
ax.tick_params(labelsize=6, pad=1)
ax.yaxis.set_ticks_position('none')
ax.set_yticklabels([])
ax.xaxis.tick_top()
save_figure(fig, "SFS_Heatmaps_SFSWithMetricsSubjects_persistent", output_dir, output_suffix, output_formats, savefig_args)
# +
# Print summaries of lineages
pvalue_cutoff = 0.05
print "Fraction of lineages with significant deviation from neutrality by Fay and Wu's H"
print sum(df_metrics_persistent["H_pvalue_kingman"] < pvalue_cutoff) / float(df_metrics_persistent["H_pvalue_kingman"].shape[0])
print
print "Fraction of lineages with significant deviation from neutrality by non-monotonicity D"
print sum(df_metrics_persistent["D_pvalue_kingman"] < pvalue_cutoff) / float(df_metrics_persistent["D_pvalue_kingman"].shape[0])
print
print "Fraction of lineages with significant deviation from neutrality by Fay and Wu's H AND D"
print sum((df_metrics_persistent["H_pvalue_kingman"] < pvalue_cutoff) & (df_metrics_persistent["D_pvalue_kingman"] < 0.05)) / float(df_metrics_persistent.shape[0])
print
print "Fraction of lineages with significant deviation from neutrality by Fay and Wu's H OR D"
print sum((df_metrics_persistent["H_pvalue_kingman"] < pvalue_cutoff) | (df_metrics_persistent["D_pvalue_kingman"] < 0.05)) / float(df_metrics_persistent.shape[0])
# -
| figures/SFS_Heatmaps_Fig2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ae
# language: python
# name: ae
# ---
# ### Scikit-learn
# requires:
# - python >=2.6 or >=3.3
# - NumPy >= 1.6.1
# - Scipy >=0.9
#
# #### install
# pip install scikit-learn
#
# ## Iris Classifier
import numpy as np
from sklearn import datasets
# from sklearn.cross_validation import train_test_split
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
iris = datasets.load_iris()
iris_X = iris.data
iris_y = iris.target
# print(iris_X[:2,:]) # first two input sample
# print(iris_y) # all label
X_train, X_test, y_train, y_test = train_test_split(iris_X, iris_y, test_size=0.3)
# print(y_train)
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
print(knn.predict(X_test))
print(y_test)
# ## Datasets of Scikit-Learn
from sklearn import datasets
from sklearn.linear_model import LinearRegression
# Note: load_boston was removed in scikit-learn 1.2; on newer versions use
# e.g. datasets.fetch_california_housing() instead
loaded_data = datasets.load_boston()
data_X = loaded_data.data
data_y = loaded_data.target
model = LinearRegression()
model.fit(data_X, data_y)
print(model.predict(data_X[:4,:]))
print(data_y[:4])
# ### make some data
# +
X, y = datasets.make_regression(n_samples=100, n_features=1, n_targets=1, noise=10)
import matplotlib.pyplot as plt
plt.scatter(X,y)
plt.show()
# -
# ### Model properties
print(model.coef_) # learned coefficients (slopes)
print(model.intercept_) # bias
print(model.get_params())
print(model.score(data_X, data_y)) # score in regression -> R^2 (coefficient of determination)
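# The `score` call above reports R^2; here is a minimal stdlib sketch of its definition, 1 - SS_res / SS_tot, on toy numbers (not taken from the Boston model above):

```python
# Toy true/predicted values for illustration
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

mean_y = sum(y_true) / len(y_true)
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares
r2 = 1 - ss_res / ss_tot

print(round(r2, 4))  # -> 0.9486
```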
# ## Normalization
# Feature Scaling
# Idea: Make sure features are on a similar scale
# +
from sklearn import preprocessing
import numpy as np
a = np.array([[10, 2.7, 3.6], [-100, 5, -2], [120, 20, 40]], dtype=np.float64)
print(a)
print(preprocessing.scale(a))
# -
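# What `preprocessing.scale` does can be sketched without scikit-learn: subtract each column's mean and divide by its (population) standard deviation, so every feature ends up on a similar scale:

```python
import math

a = [[10.0, 2.7, 3.6],
     [-100.0, 5.0, -2.0],
     [120.0, 20.0, 40.0]]

scaled_cols = []
for col in zip(*a):  # iterate over columns
    mu = sum(col) / len(col)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in col) / len(col))
    scaled_cols.append([(x - mu) / sigma for x in col])

scaled = [list(row) for row in zip(*scaled_cols)]  # back to row-major

# Each scaled column now has mean ~0 and std ~1 (up to float error)
for col in zip(*scaled):
    print(sum(col) / len(col))
```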
from sklearn import preprocessing
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification  # sklearn.datasets.samples_generator was removed in newer scikit-learn versions
from sklearn.svm import SVC
import matplotlib.pyplot as plt
# +
X, y = make_classification(n_samples=300, n_features=2, n_redundant=0, n_informative=2,
random_state=22, n_clusters_per_class=1, scale=100)
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.show()
# +
X = preprocessing.scale(X)  # or preprocessing.minmax_scale(X, feature_range=(-1, 1))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
clf = SVC()
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
# -
| .ipynb_checkpoints/Scikit-learn-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/farkoo/Cat-Dog-Classifier/blob/master/CatDogClassifier_Part_5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="1g4Li6ngEF9d"
# # Using convolutional neural networks and pretrained networks
# + [markdown] id="IRnk0Sq4KEld"
# ## Preparing the training, validation, and test data
# + [markdown] id="Y2hB_PdsKL7k"
# As in the previous parts, we need to prepare the data on the virtual machine.
# + colab={"base_uri": "https://localhost:8080/"} id="38ScwC7bewNw" outputId="77b6236a-40a8-403e-a960-622ab798b822" language="bash"
# git clone "https://github.com/MKasaei00/IUT-CI-HW3-cat-dog-classifier.git"
# + colab={"base_uri": "https://localhost:8080/"} id="UvNMNj1whfmG" outputId="dcab39b2-edb3-47d7-f3e7-762d988904f9"
# %cd IUT-CI-HW3-cat-dog-classifier
# + colab={"base_uri": "https://localhost:8080/"} id="Nb69RhJ5iDrW" outputId="2ff10b39-ba64-425b-95c9-2f5122471a37" language="bash"
# 7z x dataset/test_set.zip -odataset
# 7z x dataset/training_set.zip -odataset
# + id="hTd3JReGlT0r"
import numpy as np
import os
import cv2
from google.colab.patches import cv2_imshow
from keras.utils import np_utils
import matplotlib.pyplot as plt
import tensorflow as tf
import random
import math
# + [markdown] id="g7tavQwbKWpN"
# We specify the paths to the training and test data.
# + id="NA3x7ObDlopB"
test_dir = 'dataset/test_set'
training_dir = 'dataset/training_set'
# + [markdown] id="AUk50IjIKfEz"
# Given the pretrained network whose initial weights we use, the input values must lie between -1 and +1.
#
# Since each pixel channel takes values from 0 to 255, we subtract the midpoint of that range from every value and divide by half the range's length, so that all values are mapped into [-1, +1].
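# The mapping described above can be sketched as a one-liner; 127.5 is both the midpoint and half the length of the [0, 255] range:

```python
def to_unit_range(v):
    # Map a pixel value in [0, 255] into [-1, 1]:
    # subtract the midpoint and divide by half the range
    return (v - 127.5) / 127.5

print(to_unit_range(0))      # -> -1.0
print(to_unit_range(127.5))  # -> 0.0
print(to_unit_range(255))    # -> 1.0
```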
# + id="5lNX7WfLln7S"
def load_data_from_directory(dir,width,height,classes):
img_data = []
img_label = []
categories = os.listdir(dir)
for category in categories:
for file in os.listdir(os.path.join(dir,category)):
img_path = os.path.join(dir,category,file)
img = cv2.imread(img_path)
img = cv2.resize(img,(width,height))
            img = (np.array(img).astype('float32') - 127.5) / 127.5  # map [0, 255] into [-1, 1] (midpoint 127.5, half-range 127.5)
img_data.append(img)
img_label.append(category)
img_onehot = np_utils.to_categorical([categories.index(label) for label in img_label],classes,dtype=np.ubyte)
img_data = np.array(img_data)
img_onehot = np.array(img_onehot)
zip_list = list(zip(img_data, img_label, img_onehot))
random.shuffle(zip_list)
img_data, img_label, img_onehot = zip(*zip_list)
return img_data , img_label, img_onehot
# + [markdown] id="y-jHkmO3LHrS"
# The images vary in width and height, so for compatibility with the neural network we resize them all to one fixed size.
# + id="cVb181xCZZbJ"
classes = 2
w = 150
h = 150
# + id="9kpvfOQ3mDqq"
train_x , train_label, train_onehot = load_data_from_directory(training_dir,w,h,classes)
test_x , test_label, test_onehot = load_data_from_directory(test_dir,w,h,classes)
# + id="0TV8o9GRwGPi"
test_label2 = []
for label in test_label:
    test_label2.append(1 if label == 'dogs' else 0)
# + id="v7XgBhICe2Y6"
train_x = np.asarray(train_x)
train_label = np.asarray(train_label)
train_onehot = np.asarray(train_onehot)
test_x = np.asarray(test_x)
test_label = np.asarray(test_label2)
test_onehot = np.asarray(test_onehot)
# + id="cYnCpcoumHxk"
# !rm -rf cleanData
# !mkdir cleanData
np.save('cleanData/train_x.npy',train_x)
np.save('cleanData/train_label.npy',train_label)
np.save('cleanData/train_onehot.npy',train_onehot)
np.save('cleanData/test_x.npy',test_x)
np.save('cleanData/test_label.npy',test_label)
np.save('cleanData/test_onehot.npy',test_onehot)
# + id="mpcwCHVPmI1z"
train_label = np.load('cleanData/train_onehot.npy')
train_x = np.load('cleanData/train_x.npy')
test_x = np.load('cleanData/test_x.npy')
test_label = np.load('cleanData/test_onehot.npy')
test_vec = np.load('cleanData/test_label.npy')
# + [markdown] id="grFcBFEmLVFs"
# To make the most of the available data when training the neural network, we use 99 percent of the training data for training and only 1 percent for validation.
#
# The designated test data is then used to evaluate the network's performance once training is complete.
# + id="7FLzp06qGu1b"
from sklearn.model_selection import train_test_split
train_X, valid_X, train_Label, valid_Label = train_test_split(train_x, train_label, test_size = 0.01, random_state = 13)
# + colab={"base_uri": "https://localhost:8080/"} id="T2PZ3NEiWrJa" outputId="974e4565-a7ac-4289-bc3b-c669a50b304b"
train_X.shape, valid_X.shape
# + [markdown] id="q82TKEMcse8U"
# # Using pretrained weights and the Inception V3 network architecture
# + [markdown] id="YQuEAh7mMBGL"
# First we import the keras library and the required modules.
# + id="ytLgdToaMHU8"
from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
from keras.layers import *
from keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import *
# + [markdown] id="iOafOGXaMK_j"
# Like the previous parts, the function below defines the model, prepares the data, trains the network, and finally evaluates it on the test data.
#
# First the InceptionV3 network is loaded and frozen (made non-trainable); then our own additional, trainable layers are appended to it.
#
# To train the network better, we apply plausible transformations to the data, such as rotation, zoom, and so on (data augmentation).
#
# Finally, we train the model and plot the accuracy across epochs for both the training and validation data.
#
# As the last step, we predict the test data with the model and report the results in full.
# + id="jfKUyZ6VzqZB"
def train_model(config):
    input = Input((150, 150, 3))
    net = InceptionV3(include_top=False, weights='imagenet')(input)
    net = Conv2D(2, 3, padding='same', activation='relu')(net)
    net = GlobalAveragePooling2D()(net)
    output = Activation('softmax')(net)
    model = Model(inputs=input, outputs=output)
    # Freeze InceptionV3.
    model.layers[1].trainable = False
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
    model.summary()
    train_datagen = ImageDataGenerator(
        rotation_range=config['aug_rotation'],
        width_shift_range=config['aug_shift_w'],
        height_shift_range=config['aug_shift_h'],
        shear_range=config['aug_shear'],
        zoom_range=config['aug_zoom'],
        horizontal_flip=config['aug_horizontal_flip'],
        fill_mode='nearest'
    )
    train_generator = train_datagen.flow(train_X, train_Label, batch_size=config['batch'])
    validation_datagen = ImageDataGenerator()
    validation_generator = validation_datagen.flow(valid_X, valid_Label, batch_size=config['batch'])
    # batch_size is taken from the generators; passing it to fit() together with
    # generators is invalid, so it is set only on the generators above.
    history = model.fit(train_generator,
                        validation_data=validation_generator,
                        epochs=config['epochs'],
                        steps_per_epoch=math.ceil(7920 / config['batch']),
                        validation_steps=math.ceil(80 / config['batch'])
                        )
    plt.plot(history.history['acc'])
    plt.plot(history.history['val_acc'])
    plt.title('Model Accuracy')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.legend(['train', 'validation'], loc='lower right')
    plt.show()
    test_eval = model.evaluate(test_x, test_label, verbose=0)
    print('Test loss: ', test_eval[0])
    print('Test accuracy: ', test_eval[1])
    print('\n\n')
    predicted_probs = model.predict(test_x)
    predicted_classes = np.argmax(predicted_probs, axis=1)
    from sklearn.metrics import classification_report
    target_names = ["Class {}".format(i) for i in range(classes)]
    print(classification_report(test_vec, predicted_classes, target_names=target_names))
    return model
# + [markdown] id="Kc7Na087NuXS"
# Now we use the function defined above, with suitable augmentation coefficients for transforming the input data, to train the network.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="K4yJen86a6gq" outputId="05931a86-809f-4f58-af4b-c4c52480a1ba"
model = train_model({
'in_drop_out':0,
'batch':256,
'epochs':10,
'aug_rotation':180,
'aug_shift_w':0.10,
'aug_shift_h':0.10,
'aug_shear':0.10,
'aug_zoom':0.10,
'aug_horizontal_flip':True
})
# + [markdown] id="xV8mN7lwOLW8"
# With the model trained above we achieved 95 percent accuracy on the test data.
#
# Now we want to increase the trainable part of the model, so we redefine the training function.
# + id="zckVVs2bq-wq"
def train_model(config):
    input = Input((150, 150, 3))
    net = InceptionV3(include_top=False, weights='imagenet')(input)
    net = Conv2D(2, 3, padding='same', activation='relu')(net)
    net = Conv2D(2, 3, padding='same', activation='relu')(net)
    net = GlobalAveragePooling2D()(net)
    output = Activation('softmax')(net)
    model = Model(inputs=input, outputs=output)
    # Freeze InceptionV3.
    model.layers[1].trainable = False
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
    model.summary()
    train_datagen = ImageDataGenerator(
        rotation_range=config['aug_rotation'],
        width_shift_range=config['aug_shift_w'],
        height_shift_range=config['aug_shift_h'],
        shear_range=config['aug_shear'],
        zoom_range=config['aug_zoom'],
        horizontal_flip=config['aug_horizontal_flip'],
        fill_mode='nearest'
    )
    train_generator = train_datagen.flow(train_X, train_Label, batch_size=config['batch'])
    validation_datagen = ImageDataGenerator()
    validation_generator = validation_datagen.flow(valid_X, valid_Label, batch_size=config['batch'])
    # batch_size is taken from the generators; passing it to fit() together with
    # generators is invalid, so it is set only on the generators above.
    history = model.fit(train_generator,
                        validation_data=validation_generator,
                        epochs=config['epochs'],
                        steps_per_epoch=math.ceil(7920 / config['batch']),
                        validation_steps=math.ceil(80 / config['batch'])
                        )
    plt.plot(history.history['acc'])
    plt.plot(history.history['val_acc'])
    plt.title('Model Accuracy')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.legend(['train', 'validation'], loc='lower right')
    plt.show()
    test_eval = model.evaluate(test_x, test_label, verbose=0)
    print('Test loss: ', test_eval[0])
    print('Test accuracy: ', test_eval[1])
    print('\n\n')
    predicted_probs = model.predict(test_x)
    predicted_classes = np.argmax(predicted_probs, axis=1)
    from sklearn.metrics import classification_report
    target_names = ["Class {}".format(i) for i in range(classes)]
    print(classification_report(test_vec, predicted_classes, target_names=target_names))
    return model
# + [markdown] id="E6MzFU8tO9wC"
# At this stage, since the number of trainable parameters of the model has increased, it is better to increase the number of training epochs so that the network has more time to fit the data.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="WYOAroOrrA6B" outputId="6e97de0e-2699-4f6e-aa5b-97b98a121e12"
model = train_model({
'in_drop_out':0,
'batch':256,
'epochs':30,
'aug_rotation':180,
'aug_shift_w':0.10,
'aug_shift_h':0.10,
'aug_shear':0.10,
'aug_zoom':0.10,
'aug_horizontal_flip':True
})
# + [markdown] id="ZW8K3Ad3PtPI"
# ## Final Result
# + [markdown] id="YApBEmTFPvzh"
# In this part, using convolutional neural networks and pre-trained weights, we were able to separate dog and cat images very well, obtaining a model with 96 percent accuracy on the test data.
| CatDogClassifier_Part_5.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.6.2
# language: julia
# name: julia-0.6
# ---
# # Chapter 1 - Differentiation Matrices
using Plots
gr()
# ## Program 1
# +
function diffmaterr(N, fun, dfun, xlims=(-π, π))
    L = xlims[2] - xlims[1]
    h = L / N
    x = xlims[1] + (1:N)*h
    u = fun.(x)
    du = dfun.(x)
    e = ones(N)
    D1 = sparse(1:N, [2:N; 1], 2*e/3, N, N) -
         sparse(1:N, [3:N; 1; 2], e/12, N, N)
    D = (D1 - D1') / h
    err = norm(D*u - du, Inf)
    return err
end
# -
fun(x) = exp(sin(x))
dfun(x) = cos(x) * fun(x)
Nvec = 2 .^(3:12)
err = diffmaterr.(Nvec, fun, dfun)
scatter(Nvec, err, yaxis=:log10, xaxis=:log10, marker=5, label="Error")
plot!(Nvec, (1.0*Nvec).^(-4), label="N^(-4)")
using Polynomials
# +
function lagrangei(i, N)
    den = 1
    xn = zeros(Rational{Int}, N-1)
    x = 0:(N-1)
    j = 1
    for k = 1:(i-1)
        xn[j] = x[k]
        den = den * (x[i] - x[k])
        j += 1
    end
    for k = (i+1):N
        xn[j] = x[k]
        den = den * (x[i] - x[k])
        j += 1
    end
    p = poly(xn)
    return p/den
end

function dercoefs(N)
    pd = [polyder(lagrangei(i,N)) for i = 1:N]
    D = zeros(Rational{Int}, N, N)
    x = 0:(N-1)
    for j = 1:N
        for i = 1:N
            D[j,i] = polyval(pd[i], x[j])
        end
    end
    return D
end
# -
lagrangei(1,5)
dercoefs(7)[4,:]
function diffmat(N, P, h=1.0)#, xlims=(-π, π))
    nsten = P÷2
    icen = nsten+1
    Dloc = Float64.(dercoefs(P)[icen,:])
    o = ones(N)/h
    D = sparse(1:N, 1:N, ones(N)*Dloc[icen], N, N)
    for i = 1:nsten
        J1 = [(i+1):N; 1:i]
        D = D + sparse(1:N, J1, o*Dloc[icen+i], N, N)
        J2 = [(N-i+1):N; 1:(N-i)]
        D = D + sparse(1:N, J2, o*Dloc[icen-i], N, N)
    end
    return D
end
diffmat(10, 3, 1.0)
function diffmaterr2(x, D, fun, dfun)
    N = size(x,1)
    u = fun.(x)
    du = dfun.(x)
    err = norm(D*u - du, Inf)
    return err
end
# +
Nvec2 = 2 .^(4:16)
n = length(Nvec2)
err2 = zeros(n)
xmin = -π
xmax = π
L = xmax - xmin
P = 5
for i = 1:n
    N = Nvec2[i]
    h = L / N
    x = xmin + (1:N)*h
    D = diffmat(N, P, h)
    err2[i] = diffmaterr2(x, D, fun, dfun)
end
scatter(Nvec2, err2, yaxis=:log10, xaxis=:log10, marker=5, label="Error")
plot!(Nvec2, (1.0*Nvec2).^(-(P-1)), label=string(P-1))
# -
| chap01/p1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# <center>
# <img src="./images/adsp_logo.png">
# </center>
#
# ### Prof. Dr. -Ing. <NAME> <br> Jupyter Notebook: <NAME>
#
# + [markdown] slideshow={"slide_type": "-"}
# # The z-Transform
# + hide_input=true language="html"
# <iframe width="560" height="315" src="https://www.youtube.com/embed/pkY3RfUrGsM" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
# + [markdown] slideshow={"slide_type": "-"}
# The z-Transform is a more general transform than the Fourier transform, and we will use it to obtain perfect reconstruction in filter banks and wavelets. Hence we will now look at the effects of sampling and some more tools in the z-domain.
#
# Since we usually deal with causal systems in practice, we use the **1-sided z-Transform**, defined as
#
# $$ \large
# X(z)=\sum_ {n=0} ^\infty x(n)z^{-n}
# $$
#
# Observe this simply takes our sequence $x(n)$ and **turns it into the polynomial** $X(z)$.
#
# First observe that we get our usual frequency response (the Discrete Time Fourier Transform for a causal signal, starting at n=0) if we evaluate the z-transform along the unit circle in the z-domain,
#
# $$z=e^{j\Omega}$$
#
# This connects the z-Transform with the DTFT, except for the sample index n, which for the so-called one-sided z-transform starts at n=0, and for the DTFT starts at $n=-\infty$.
#
# In general, we can write complex variable z with an angle and a magnitude,
#
# $$\large
# z=r\cdot e^{j\Omega}$$
#
# where we can interpret $\Omega$ as the **normalized angular frequency**, and $r$ as a damping factor for an exponentially decaying oscillation if $r<1$ (or an exponentially growing one if $r>1$).
#
# **Observe**: This damping factor is **not** present in the DTFT. This means that in the z-Transform we can have a converging sum even for unstable signals or systems, simply by choosing r large enough! The **Region of Convergence** (ROC) then just becomes smaller. Remember, in the z-transform sum we have $z^{-1}=\frac{1}{r}\cdot e^{-j\Omega}$.
# + hide_input=true language="html"
# <iframe width="560" height="315" src="https://www.youtube.com/embed/SCsSYp91CA0" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
# + [markdown] slideshow={"slide_type": "-"}
# **Recommended reading:**
# <NAME>, <NAME>: “Discrete Time Signal Processing”, Prentice Hall.
#
# + hide_input=true language="html"
# <iframe src='https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-341-discrete-time-signal-processing-fall-2005/', width=900, height=400></iframe>
# + [markdown] slideshow={"slide_type": "slide"}
# ## z-Transform Properties
# + [markdown] slideshow={"slide_type": "-"}
# z-Transform definition:
#
# $$ \large
# x(n) \rightarrow \sum _{n=0} ^ \infty x(n) \cdot z^{-n} =: X(z)
# $$
#
# The z-transform turns a sequence into a polynomial in z.
#
# Example: $x(n)=[2,4,3,1]$
#
# $$X(z)=2+4z^{-1}+3z^{-2}+z^{-3}$$
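As an added numerical sketch (not part of the original notebook), the one-sided z-transform of a finite sequence can be evaluated directly from its definition, which makes the polynomial interpretation concrete:

```python
import numpy as np

def z_transform(x, z):
    """Evaluate the one-sided z-transform of a finite sequence at the point z."""
    n = np.arange(len(x))
    return np.sum(np.asarray(x, dtype=complex) * z ** (-n.astype(float)))

# x(n) = [2, 4, 3, 1]  ->  X(z) = 2 + 4 z^-1 + 3 z^-2 + z^-3
x = [2, 4, 3, 1]
print(z_transform(x, 1.0))   # z = 1 gives the plain sum of the sequence: (10+0j)
print(z_transform(x, 2.0))   # 2 + 4/2 + 3/4 + 1/8 = (4.875+0j)
```

Evaluating at a complex point $z = r e^{j\Omega}$ works the same way, which is how the connection to the DTFT ($r=1$) can be explored numerically.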
# + hide_input=true language="html"
# <iframe width="560" height="315" src="https://www.youtube.com/embed/YPU8FB3qSgY" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Shift Property
# + [markdown] slideshow={"slide_type": "-"}
# Take two causal sequences (causal means sample value 0 for negative indices): the sequence x(n), and x(n-1), which is the same sequence but delayed by one sample. Their z-transforms are:
#
# $$ \large
# x(n) \rightarrow \sum _ {n=0 }^ \infty x(n) \cdot z^{-n} =: X(z)$$
#
# $$ \large
# x(n-1) \rightarrow \sum _{n=0 }^ \infty x(n-1) \cdot z^{-n} =\sum_{n=1} ^ \infty x(n-1) \cdot z^{-n} =
# $$
#
# Use the index substitution, $n' \leftarrow n-1$ or $n'+1\leftarrow n$ to get rid of the "$n-1$" in the transform:
#
#
# $$ \large
# =\sum _{n'=0} ^\infty x(n') \cdot z^{-(n'+1)} = z^{-1} \cdot \sum_ {n'=0} ^\infty x(n') \cdot z^{-n'} = X(z) \cdot z^{-1}
# $$<br>
#
# This shows that a **delay by 1 sample** in the signal sequence (time domain) corresponds to the **multiplication with** $z^{-1}$ in the z-domain:
#
# $$\large x(n)\rightarrow X(z)$$
# $$\large x(n-1) \rightarrow X(z)\cdot z^{-1}$$
# + hide_input=true language="html"
# <iframe width="560" height="315" src="https://www.youtube.com/embed/U17KDyOI58I" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
# + [markdown] slideshow={"slide_type": "-"}
# **Example:**
# Signal:
# $x_0=[1,2,3]$ => $X_0(z)=1+2z^{-1}+3z^{-2}$
#
# Signal, delayed by 1 sampling period:
#
# $x_1=[0,1,2,3]=>X_1(z)=0+1z^{-1}+2z^{-2}+3z^{-3}=$
#
# In the z-domain the delay shows up as multiplication with $z^{-1}$,
#
# $$=X_0(z)\cdot z^{-1}$$
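As a quick added check (an illustrative sketch, not from the original lecture), the shift property can be confirmed numerically by evaluating both z-transforms at an arbitrary point z:

```python
import numpy as np

def z_transform(x, z):
    """One-sided z-transform of a finite sequence, evaluated at z."""
    n = np.arange(len(x))
    return np.sum(np.asarray(x, dtype=complex) * z ** (-n.astype(float)))

z = 0.8 * np.exp(1j * 0.3)           # arbitrary evaluation point
X0 = z_transform([1, 2, 3], z)       # original signal
X1 = z_transform([0, 1, 2, 3], z)    # same signal, delayed by one sample
print(np.allclose(X1, X0 * z**-1))   # True: delay <-> multiplication by z^-1
```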
# + [markdown] slideshow={"slide_type": "subslide"}
# Related to the shift property is the z-transform of the shifted unit pulse. The unit pulse is defined as
#
# $$\large
# \Delta(n) = \begin{cases} 1, & \text{if } n = 0 \\ 0, & \text{else} \end{cases}$$
#
# so it is just a zero sequence with a 1 at time 0.
#
# Its z-Transform is then:
#
# $$\large \Delta(n)\rightarrow 1$$
#
# The z-transform of the shifted unit pulse is:
#
# $$\large \Delta(n-d)\rightarrow z^{-d}$$
#
# Shifted by d samples.
#
# The “**unit step**” function is defined as:
#
# $$\large u(n) = \begin{cases} 1, & \text{if } n \geq 0 \\ 0, & \text{else} \end{cases}$$
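A small added sketch illustrating the shifted unit pulse: its z-transform evaluates to exactly $z^{-d}$ at any point z.

```python
import numpy as np

def z_transform(x, z):
    """One-sided z-transform of a finite sequence, evaluated at z."""
    n = np.arange(len(x))
    return np.sum(np.asarray(x, dtype=float) * z ** (-n.astype(float)))

d = 3
pulse = np.zeros(8)
pulse[d] = 1.0                          # Delta(n - d): unit pulse shifted by d samples

z = 2.0
print(z_transform(pulse, z), z**(-d))   # both equal 0.125
```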
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Linearity
# + [markdown] slideshow={"slide_type": "-"}
# $$ \large
# a \cdot x(n) \rightarrow a \cdot X(z) $$
#
# $$\large x(n)+y(n)\rightarrow X(z)+Y(z)$$
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Convolution
# + [markdown] slideshow={"slide_type": "-"}
# $$\large
# x(n)*y(n)\rightarrow X(z)\cdot Y(z)$$
#
# **The z-transform turns a convolution into a multiplication.**
#
# Remember: the convolution is defined as:
#
# $$ \large
# x(n)*y(n)=\sum _ {m= -\infty} ^ \infty x(m) \cdot y(n-m)
# $$
#
# This is because the convolution of 2 sequences behaves in the same way as the multiplication of the 2 polynomials (the z-transforms) of these sequences. This is one of the main advantages of the z-Transform, since it turns convolution into a simpler multiplication (which is in principle invertible).
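This correspondence is easy to check numerically (an added sketch, not in the original notebook): convolving two sequences yields exactly the coefficients of the product of their z-polynomials.

```python
import numpy as np

x = [1, 2, 3]
y = [4, 5]

conv = np.convolve(x, y)           # time-domain convolution of the sequences
prod = np.polymul(x, y)            # coefficient product of the corresponding polynomials

print(conv)                        # [ 4 13 22 15]
print(np.array_equal(conv, prod))  # True
```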
# + [markdown] slideshow={"slide_type": "slide"}
# ## Example z-Transform
# + hide_input=true language="html"
# <iframe width="560" height="315" src="https://www.youtube.com/embed/9XRlk27e9zU" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
# + [markdown] slideshow={"slide_type": "-"}
# Exponential decaying sequence: $x(n)=p^{n}$ for n=0,1,..., meaning the sequence
#
# $$\large 1,p,p^{2},p^{3},...$$
#
# $$\large \rightarrow X(z)=\sum _{n=0}^{\infty}p^n \cdot z^{-n}$$
#
# **Remember:** we had a closed form solution for this type of **geometric sums:**
#
# $$S= \sum_ {k = 0}^{N - 1} c^k$$
#
# its solution was:
#
# $$
# S =\frac{c^N - 1} {c - 1}
# $$
#
# Now we have an infinite sum, which means N goes towards infinity. But we have the expression $c^N$ in the solution. If $\mid c\mid <1$, then this goes to zero $c^N\rightarrow 0$. Now we have $c=p\cdot z^{-1}$. Hence, if $\mid p\cdot z^{-1}\mid <1$ we get
#
# $$\large
# \rightarrow X(z)=\frac{1}{1 -p \cdot z^{-1}} = \frac{z} {z-p}
# $$
#
# Observe that this fraction has a **pole** at position z=p, and a **zero** at position z=0. Hence if we know the pole position, we know p, and if we know p we know the time sequence. So the location of the pole gives us very important information about the signal.
#
# Keep in mind that this solution is only valid for all z which fulfill $\mid p\cdot z^{-1}\mid <1$. We see that this is true for $\mid z\mid >\mid p\mid $. This is also called the “**Region of Convergence**” (ROC). The ROC is connected to the resulting stability of the system or signal.
#
# The region of convergence is outside the pole locations. If the region of convergence includes the unit circle, we have a stable system. This means: if the **poles are inside the unit circle**, we have a **stable system**.
#
# The sum of x(n) **converges** (we get the sum if we set $z=1$) if **abs(p)<1**. In this case we also say that the signal or system is **stable** (meaning we obtain a bounded output for a bounded input, so-called “BIBO stability”). In this case we see that the resulting pole of our z-transform is **inside the unit circle**. If abs(p)>1, we have an exponential growth, which is basically an “exploding” signal or system (meaning the output grows towards infinity), hence **unstable**.
#
# In general we say that a system or a signal is **stable**, if the **poles** of its z-transform are **inside the unit circle** in the z-domain, or **unstable** if **at least one pole is outside the unit circle** (it will exponentially grow).
#
# These are basic properties, which can be used to derive z-transforms of more complicated expressions, and they can also be used to obtain an inverse z-transform, by inspection.
#
# For instance if we see a fraction with a **pole** in the z-Transform, we know that the underlying time sequence has an **exponential decay or oscillation** in it.
#
# Observe that we can obtain a real valued decayed oscillation if we have 2 poles, each the conjugate complex of the other, or one with $+\Omega$ and one with $-\Omega$. In this way, we cancel the imaginary part.
#
# One of the main differences compared to the Discrete Time Fourier Transform (DTFT): With the z-transform we can see if a signal or system is stable by looking at the position of the poles in the z-domain. This is not possible for the DTFT, since there we don't know the positions of the poles.
#
# Now take a look at our down sampled signal from a previous notebook:
# $$ \large
# x^d \left ( n \right ) =x \left ( n \right ) \cdot \Delta_N \left ( n \right ) =x ( n ) \cdot \frac{1} {N} \sum _ {k = 0}^ {N - 1} e^{j \frac{2 \pi} {N }\cdot k \cdot n}
# $$
#
# Now we can z-transform it
#
#
# $$ \large
# \sum _ {n=0} ^\infty x^d \left ( n \right ) \cdot z^{-n} = \sum_ {n=0}^ \infty x ( n ) \cdot \frac{ 1} {N} \sum_ {k = 0} ^{N - 1} e^{j \frac{2\pi} {N} \cdot k \cdot n} \cdot z^{ -n }
# $$
#
# Hence the effect of **multiplying our signal with the delta impulse train** in the z-domain is
#
# $$\large
# X^d(z)=\frac{1} {N} \sum _{k=0} ^{N-1} X( e^{-j \frac{2 \pi} {N }\cdot k} \cdot z)
# $$
#
# Observe that here the aliasing components appear by multiplying $z$ with $e^{-j\frac{2 \pi}{N}\cdot k}$, which in effect is a shift of the frequency.
#
#
# Remember from last time, the effect of the **removal or re-insertion of the zeros** (changing the sampling rate) from or into the signal $x^d(n)$ at the higher sampling rate and $y(m)$ at the lower sampling rate in the z-domain is
#
# $$\large
# Y(z)=X^{d} \left( z^{\frac{1}{N}}\right)$$
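The closed-form result above for the decaying sequence can also be checked numerically (an added sketch): for a point z inside the region of convergence, the truncated sum of $p^n z^{-n}$ approaches $\frac{z}{z-p}$.

```python
import numpy as np

p = 0.9            # pole inside the unit circle -> stable, exponentially decaying sequence
z = 1.0            # evaluate on the unit circle (|z| > |p|, so inside the ROC)

n = np.arange(200)
partial = np.sum(p**n * z**(-n.astype(float)))   # truncated z-transform sum
closed = z / (z - p)                             # closed form z / (z - p)

print(np.isclose(partial, closed))               # True
```

With p chosen outside the unit circle the partial sums grow without bound at z=1, which is the numerical face of instability.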
# + [markdown] slideshow={"slide_type": "slide"}
# ### Recommended
# + hide_input=true language="html"
# <iframe width="560" height="315" src="https://www.youtube.com/embed/Nf2QBWC0hCQ" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
# + [markdown] slideshow={"slide_type": "-"}
# **z-Transform using Python**
#
# https://github.com/GuitarsAI/MRSP_Notebooks
# + hide_input=true language="html"
# <iframe width="560" height="315" src="https://www.youtube.com/embed/n4keW_vluJA" frameborder="0" allow="accelerometer;
# encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
# + [markdown] slideshow={"slide_type": "-"}
# **Frequency Response: z-Transform and the DTFT**
# + hide_input=true language="html"
# <iframe width="560" height="315" src="https://www.youtube.com/embed/NMGtwYE8veQ" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
| ADSP_07a_The_z-Transform.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import warnings
import pprint
import skrebate
import imblearn
from imblearn import under_sampling, over_sampling, combine
from imblearn.pipeline import Pipeline as imbPipeline
from sklearn import (preprocessing, svm, linear_model, ensemble, naive_bayes,
tree, neighbors, decomposition, kernel_approximation, cluster)
from sklearn.pipeline import Pipeline
from sklearn.compose import TransformedTargetRegressor
from sklearn.model_selection import (KFold, GroupKFold, StratifiedKFold,
LeaveOneGroupOut, cross_validate,
cross_val_predict, learning_curve)
from sklearn.feature_selection import SelectKBest, f_regression, SelectFromModel, VarianceThreshold, f_classif
from sklearn.metrics import r2_score, auc, roc_auc_score, balanced_accuracy_score, confusion_matrix, roc_curve
from sklearn import metrics
from sklearn.preprocessing import QuantileTransformer, quantile_transform
from xgboost import XGBRegressor, XGBClassifier
warnings.simplefilter('ignore')
# +
import os
import sys
import numpy as np
import pandas as pd
import re
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
# -
Olaparib_1017 = pd.read_csv('./drug_respond/data/smmart_protein_rna_tissue10_ready/Olaparib_1017.tsv.gz', sep='\t', index_col=0)
Olaparib_1017
# +
X, y = Olaparib_1017.iloc[:, 6:].values, Olaparib_1017.iloc[:, 5].values
data1 = go.Scatter(
y = y,
mode = "markers",
name = "y_true",
)
layout = dict(
xaxis=dict(
title='Cellline No.'
),
yaxis=dict(
title='Drug response curve AUC'
),
title="Distribution of target y"
)
fig = go.Figure(data=[data1], layout=layout)
iplot(fig)
# To show plot, paste the link to this GitHub notebook into http://nbviewer.jupyter.org/
# -
predicted = pd.read_csv('./Galaxy10-[Cross_val_predict_on_collection_9_and_Olaparib_1017__Pipeline Builder on data 18 -- BinarizeTargetRegressor (XGBRegressor)].tabular', sep='\t')
predicted
combined_df = Olaparib_1017.iloc[:, :6]
combined_df['Predicted'] = 1 - predicted['Predicted'].values
combined_df['1-prediction'] = predicted['Predicted'].values
combined_df = combined_df.reset_index(drop=True)
combined_df
discretize = combined_df['AUC'].mean() - combined_df['AUC'].std()
y_true = combined_df['AUC'].values < discretize
# ## roc_curve by `1-predicted`
fpr, tpr, thresholds = roc_curve(y_true, combined_df['1-prediction'].values)
fpr, tpr, thresholds
# ## new roc_curve by ranking predicted value directly
combined_df_sort = combined_df.sort_values(['Predicted'])
combined_df_sort
total_positive = np.sum(y_true)
total_negative = combined_df_sort.shape[0] - total_positive
new_thresholds = np.r_[0, combined_df_sort['Predicted'].values, 1]
new_tprs = []
new_fprs = []
for thres in new_thresholds:
    # predicted positive
    p_df = combined_df_sort[combined_df_sort['Predicted'] < thres]
    # true positive count
    tp_count = np.sum(p_df['AUC'] < discretize)
    # false positive count
    fp_count = p_df.shape[0] - tp_count
    # get rates
    new_tprs.append(tp_count / total_positive)
    new_fprs.append(fp_count / total_negative)
new_tprs, new_fprs
# +
data2 = go.Scatter(
x = fpr,
y = tpr,
mode = "lines",
name = "roc_curve",
)
data3 = go.Scatter(
x = [0, 1],
y = [0, 1],
mode = 'lines',
name = 'x=y'
)
data4 = go.Scatter(
x = new_fprs,
y = new_tprs,
mode = "lines",
name = "roc_curve_new",
)
layout = dict(
xaxis=dict(
title='False Positive Rate'
),
yaxis=dict(
title='True Positive Rate'
),
title="ROC Curve"
)
fig = go.Figure(data=[data2, data3,data4], layout=layout)
iplot(fig)
# -
| results/test_roc_curve_using_regression_predict_as_y_score.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + deletable=true editable=true
# #!/usr/bin/env python3
import os
import warnings
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances
#import tensorflow as tf
#import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
#import seaborn as sns
import torch
from transformers import BertModel
from transformers import BertTokenizer
from collections import OrderedDict

# NOTE: Question and Explanation are assumed to be defined elsewhere
# (simple containers with the fields used below); they are not imported here.
def load_gold(filepath_or_buffer, sep='\t'):
    df = pd.read_csv(filepath_or_buffer, sep=sep, dtype=str)
    gold = OrderedDict()
    for _, row in df[['questionID', 'explanation']].dropna().iterrows():
        explanations = OrderedDict((uid.lower(), Explanation(uid.lower(), role))
                                   for e in row['explanation'].split()
                                   for uid, role in (e.split('|', 1),))
        question = Question(row['questionID'].lower(), explanations)
        gold[question.id] = question
    return gold
def read_explanations(path):
    header = []
    uid = None
    df = pd.read_csv(path, sep='\t', dtype=str)
    for name in df.columns:
        if name.startswith('[SKIP]'):
            if 'UID' in name and not uid:
                uid = name
        else:
            header.append(name)
    if not uid or len(df) == 0:
        warnings.warn('Possibly misformatted file: ' + path)
        return []
    return df.apply(lambda r: (r[uid], ' '.join(str(s) for s in list(r[header]) if not pd.isna(s))), 1).tolist()
# + deletable=true editable=true
import argparse
tables = 'data/annotation/expl-tablestore-export-2017-08-25-230344/tables'
questions = 'data/questions/ARC-Elementary+EXPL-Dev.tsv'
explanations = []
for path, _, files in os.walk(tables):
    for file in files:
        explanations += read_explanations(os.path.join(path, file))

if not explanations:
    warnings.warn('Empty explanations')
# prepare data
df_q = pd.read_csv(questions, sep='\t', dtype=str)
text_q = [q for q in df_q['Question']]
df_e = pd.DataFrame(explanations, columns=('uid', 'text'))
text_e = [e for e in df_e['text']]
# + deletable=true editable=true
uid_list = []
for i in df_e['uid']:
    uid_list.append(i)
# + deletable=true editable=true
'ac40-d9c4-86a2-f4bb' in uid_list
# + deletable=true editable=true
explanations = df_q['explanation'][1].strip().split('|')[:-1]
# + deletable=true editable=true
clean_exp = [explanations[0]] + [x.split()[1] for x in explanations[1:]]
# + deletable=true editable=true
set_e = {}
for i in range(len(df_e)):
    set_e[df_e['uid'][i]] = df_e['text'][i]
# + deletable=true editable=true
all_uids = list(set_e.keys())
# + deletable=true editable=true
all_uids
# + deletable=true editable=true
import random
# + deletable=true editable=true
random_uid = random.choice(all_uids)  # avoids the off-by-one of indexing by randint(0, len(text_e))
print(random_uid in clean_exp)
# + deletable=true editable=true
q_e_pair = []
for i in range(len(text_q)):
    question_text = text_q[i]
    explanations = df_q['explanation'][i].strip().split('|')[:-1]
    clean_exp = [explanations[0]] + [x.split()[1] for x in explanations[1:]]
    explanation_texts = [set_e[uid] for uid in clean_exp]
    # Assumption: this assignment was left unfinished in the original;
    # presumably a random (negative) explanation was intended.
    random_explanation = set_e[random.choice(all_uids)]
    for explanation in explanation_texts:
        q_e = question_text + ' [SEP] ' + explanation
        token_q_e = tokenizer.encode(q_e)  # tokenizer is defined in a later cell below
        print(token_q_e)
    break
# + deletable=true editable=true
text_q[1]
# + deletable=true editable=true
set_e['1a29-2268-eeb7-edba']
# + deletable=true editable=true
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# + deletable=true editable=true
# + deletable=true editable=true
# + deletable=true editable=true
# build encoder
#module_url = "https://tfhub.dev/google/universal-sentence-encoder/2"
#embed = hub.Module(module_url)
## get embedding
#with tf.Session() as session:
# session.run([tf.global_variables_initializer(), tf.tables_initializer()])
# question_embeddings = session.run(embed(text_q))
# answer_embeddings = session.run(embed(text_e))
# define BERT to tokenize captions
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
bert = BertModel.from_pretrained('bert-base-uncased')
#
## get emebedding
#question_embeddings = []
#answer_embeddings = []
#with torch.no_grad():
# for single_q in text_q:
# token_q = tokenizer.encode(single_q)
# input_id = torch.tensor(token_q).unsqueeze(0)
# last_hidden = bert(input_id)[0]
# last_hidden_mean = torch.mean(last_hidden, dim=1)[0]
# question_embeddings.append(last_hidden_mean.tolist())
# for single_e in text_e:
# token_e = tokenizer.encode(single_e)
# input_id = torch.tensor(token_e).unsqueeze(0)
# last_hidden = bert(input_id)[0]
# last_hidden_mean = torch.mean(last_hidden, dim=1)[0]
# answer_embeddings.append(last_hidden_mean.tolist())
#X_q = vectorizer.transform(df_q['Question'])
#X_e = vectorizer.transform(df_e['text'])
# NOTE: requires the commented-out embedding code above to have been run first.
nearest = 10  # hypothetical value; the original referenced 'args.nearest', which was never parsed
X_dist = cosine_distances(question_embeddings, answer_embeddings)
for i_question, distances in enumerate(X_dist):
    for i_explanation in np.argsort(distances)[:nearest]:
        print('{}\t{}'.format(df_q.loc[i_question]['questionID'], df_e.loc[i_explanation]['uid']))
# + deletable=true editable=true
| bert_finetune.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import pandas as pd
import numpy as np
from data import clean_and_split_nba_data as clean
from models import plot_validation_curve as vc
from src.models import eval_model as evm
import xgboost as xgb
from hyperopt import Trials, STATUS_OK, tpe, hp, fmin
# %load_ext autoreload
# %autoreload 2
df = pd.read_csv("../data/raw/train.csv")
df
x_data, x_train, x_val, x_test, y_data , y_train, y_val, y_test = clean.clean_and_split_nba_data(df,True)
vc.plot_validation_curve(estimator=xgb.XGBClassifier(random_state=8, verbosity=1,use_label_encoder=False,objective ='binary:logistic',eval_metric='auc'),
hyperparameter='min_child_weight',
hyperparameter_value=[1,2,3,5,10,15,20,30,40,50,100,500,550,560,570,580,590,600,700],
x=x_data,
y=y_data,
title="XGBoost",
cv=5)
vc.plot_validation_curve(estimator=xgb.XGBClassifier(random_state=8,
verbosity=1,
use_label_encoder=False,
objective ='binary:logistic',
eval_metric='auc'),
hyperparameter='min_child_weight',
hyperparameter_value=[200,300,350,400,450,500,550],
x=x_data,
y=y_data,
title="XGBoost",
cv=5)
vc.plot_validation_curve(estimator=xgb.XGBClassifier(random_state=8,
verbosity=1,
use_label_encoder=False,
objective ='binary:logistic',
eval_metric='auc',
min_child_weight=300
),
hyperparameter='colsample_bytree',
hyperparameter_value=np.arange( 0.1, 1.0, 0.05),
x=x_data,
y=y_data,
title="XGBoost",
cv=5)
vc.plot_validation_curve(estimator=xgb.XGBClassifier(random_state=8,
verbosity=1,
use_label_encoder=False,
objective ='binary:logistic',
eval_metric='auc',
min_child_weight=300,
),
hyperparameter='subsample',
hyperparameter_value=np.arange( 0.1, 1.0, 0.05),
x=x_data,
y=y_data,
title="XGBoost",
cv=5)
vc.plot_validation_curve(estimator=xgb.XGBClassifier(random_state=8,
verbosity=1,
use_label_encoder=False,
objective ='binary:logistic',
eval_metric='auc',
min_child_weight=300
),
hyperparameter='reg_lambda',
hyperparameter_value=np.arange( 0.1, 1.0, 0.05),
x=x_data,
y=y_data,
title="XGBoost",
cv=5)
vc.plot_validation_curve(estimator=xgb.XGBClassifier(random_state=8,
verbosity=1,
use_label_encoder=False,
objective ='binary:logistic',
eval_metric='auc',
min_child_weight=300
),
hyperparameter='reg_alpha',
hyperparameter_value=np.arange( 0.1, 1.0, 0.05),
x=x_data,
y=y_data,
title="XGBoost",
cv=5)
vc.plot_validation_curve(estimator=xgb.XGBClassifier(random_state=8,
verbosity=1,
use_label_encoder=False,
objective ='binary:logistic',
eval_metric='auc',
min_child_weight=300
),
hyperparameter='max_depth',
hyperparameter_value=range(1,50,1),
x=x_data,
y=y_data,
title="XGBoost",
cv=5)
vc.plot_validation_curve(estimator=xgb.XGBClassifier(random_state=8,
verbosity=1,
use_label_encoder=False,
objective ='binary:logistic',
eval_metric='auc',
min_child_weight=300
),
hyperparameter='learning_rate',
hyperparameter_value=np.arange( 0.1, 1.0, 0.05),
x=x_data,
y=y_data,
title="XGBoost",
cv=5)
vc.plot_validation_curve(estimator=xgb.XGBClassifier(random_state=8,
verbosity=1,
use_label_encoder=False,
objective ='binary:logistic',
min_child_weight=300
),
hyperparameter='eval_metric',
hyperparameter_value=['auc','error','mlogloss','map'],
x=x_data,
y=y_data,
title="XGBoost",
cv=5)
vc.plot_validation_curve(estimator=xgb.XGBClassifier(random_state=8,
verbosity=1,
use_label_encoder=False,
objective ='binary:logistic',
min_child_weight=300
),
hyperparameter='eval_metric',
hyperparameter_value=['error','mlogloss','map'],
x=x_data,
y=y_data,
title="XGBoost",
cv=5)
xgboost1=xgb.XGBClassifier(random_state=8,
verbosity=1,
use_label_encoder=False,
objective ='binary:logistic',
eval_metric='auc',
min_child_weight=300)
evm.eval_model(xgboost1,x_train,y_train,x_val,y_val)
evm.get_performance(xgboost1, x_test, y_test, "Test", True)
# (source notebook: notebooks/P_Sampath_Week03-02-XG-model-train.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Sharing is Caring: GPU Interoperability and <3 of All Frameworks
# +
import cupy as cp
import numpy as np
from numba import cuda
# PyTorch 1.4 supports direct __cuda_array_interface__ handoff.
import torch
# RFC: https://github.com/tensorflow/community/pull/180
# # !pip install tfdlpack-gpu
import tfdlpack
# -
# ### Create GPU Arrays and Move to DL Frameworks with `__cuda_array_interface__`
# Arrays from libraries that implement the `__cuda_array_interface__` (CuPy, Numba, cuSignal, etc.) can be handed off to compatible frameworks directly, without going through an intermediate tensor format like [DLPack](https://github.com/dmlc/dlpack)
#
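# As a hedged, GPU-free sketch, the protocol itself is just a dictionary of buffer metadata; a consuming framework reads these keys and wraps the same device memory without copying. The helper name and the pointer value below are illustrative only, not part of any library:

```python
def cuda_array_interface_dict(ptr, shape, typestr="<f8", read_only=False):
    # The dict shape a producer exposes as `__cuda_array_interface__`.
    return {
        "data": (ptr, read_only),  # integer device pointer + read-only flag
        "shape": shape,            # tuple of ints
        "typestr": typestr,        # NumPy-style dtype string ("<f8" = float64)
        "strides": None,           # None means C-contiguous
        "version": 2,              # protocol version
    }

iface = cuda_array_interface_dict(ptr=0x7F00DEAD, shape=(10_000, 10_000))
```

Because only metadata is exchanged, the consumer sees the producer's pointer unchanged, which is exactly what the pointer prints below demonstrate.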
# **CuPy <-> PyTorch**
# +
# CuPy - GPU Array (like NumPy!)
gpu_arr = cp.random.rand(10_000, 10_000)
# Look at pointer
print('CuPy GPU Array Pointer: ', gpu_arr.__cuda_array_interface__['data'])
# Migrate from CuPy to PyTorch
torch_arr = torch.as_tensor(gpu_arr, device='cuda')
# Look at pointer -- it's the same as the CuPy array above!
print('PyTorch GPU Tensor Pointer: ', torch_arr.__cuda_array_interface__['data'])
# Migrate from PyTorch to CuPy
cupy_arr = cp.asarray(torch_arr)
# Look at pointer
print('CuPy GPU Pointer: ', cupy_arr.__cuda_array_interface__['data'])
# -
# **Numba CUDA <-> PyTorch**
# +
# NumPy - CPU Array
cpu_arr = np.random.rand(10_000, 10_000)
# Use Numba to move to GPU
numba_gpu_arr = cuda.to_device(cpu_arr)
# Migrate from Numba (used for custom CUDA JIT kernels) to PyTorch
torch_arr_numba = torch.as_tensor(numba_gpu_arr, device='cuda')
# Migrate from PyTorch back to Numba (zero-copy view via the array interface)
numba_arr_from_torch = cuda.as_cuda_array(torch_arr_numba)
# Pointer love again
print('Numba GPU Array Pointer: ', numba_gpu_arr.__cuda_array_interface__['data'])
print('PyTorch GPU Tensor Pointer: ', torch_arr_numba.__cuda_array_interface__['data'])
print('Numba GPU Pointer: ', numba_arr_from_torch.__cuda_array_interface__['data'])
# -
# ### Create GPU Arrays and Move to DL Frameworks with DLPack
# Not all major frameworks currently support the `__cuda_array_interface__`, cough, [TensorFlow](https://www.tensorflow.org/). We can use the aforementioned DLPack as a bridge between the GPU ecosystem and TensorFlow with `tfdlpack`. See [this RFC](https://github.com/tensorflow/community/pull/180) for more information.
#
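# Whatever the producer library, the hand-off pattern is the same: obtain a DLPack capsule from the source array, then give it to the consumer. A hedged, GPU-free sketch of such a dispatch helper follows; the `FakeCupyArray` stand-in exists only so the logic can be exercised without a GPU:

```python
def extract_dlpack_capsule(arr):
    # Prefer the library-specific export hook, then the standard protocol.
    if hasattr(arr, "toDlpack"):      # CuPy-style method
        return arr.toDlpack()
    if hasattr(arr, "__dlpack__"):    # standard DLPack protocol (newer libraries)
        return arr.__dlpack__()
    raise TypeError(f"{type(arr).__name__} cannot export a DLPack capsule")

class FakeCupyArray:
    # Minimal stand-in mimicking CuPy's export method.
    def toDlpack(self):
        return "capsule"

capsule = extract_dlpack_capsule(FakeCupyArray())
```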
# Optional: allow GPU memory growth in TensorFlow, or TF will claim the entire GPU.
import os
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'
# **CuPy <-> TensorFlow**
# +
# CuPy - GPU Array (like NumPy!)
gpu_arr = cp.random.rand(10_000, 10_000)
# Use CuPy's built in `toDlpack` function to move to a DLPack capsule
dlpack_arr = gpu_arr.toDlpack()
# Use `tfdlpack` to migrate to TensorFlow
tf_tensor = tfdlpack.from_dlpack(dlpack_arr)
# Confirm TF tensor is on GPU
print(tf_tensor.device)
# Use `tfdlpack` to migrate back to CuPy
dlpack_capsule = tfdlpack.to_dlpack(tf_tensor)
cupy_arr = cp.fromDlpack(dlpack_capsule)
# -
# **Numba CUDA <-> TensorFlow**
# +
# Reset CUDA memory
cuda.close()
# NumPy - CPU Array
cpu_arr = np.random.rand(10_000, 10_000)
# Use Numba to move to GPU
numba_gpu_arr = cuda.to_device(cpu_arr)
# Use CuPy's asarray function and toDlpack to create a DLPack capsule. There are multiple other ways to do this (e.g. PyTorch utils)
dlpack_arr = cp.asarray(numba_gpu_arr).toDlpack()
# Use `tfdlpack` to migrate to TensorFlow
tf_tensor = tfdlpack.from_dlpack(dlpack_arr)
# Confirm TF tensor is on GPU
print(tf_tensor.device)
# Use `tfdlpack` to migrate back to Numba
dlpack_capsule = tfdlpack.to_dlpack(tf_tensor)
numba_arr = cuda.to_device(cp.fromDlpack(dlpack_capsule))
# -
# **PyTorch <-> TensorFlow**
# +
import torch
import tfdlpack
from torch.utils import dlpack as th_dlpack
# Torch - GPU Array
gpu_arr = torch.rand(10_000, 10_000).cuda()
# Use Torch's DLPack function to get DLPack Capsule
dlpack_arr = th_dlpack.to_dlpack(gpu_arr)
# Use `tfdlpack` to migrate to TensorFlow
tf_tensor = tfdlpack.from_dlpack(dlpack_arr)
# Confirm TF tensor is on GPU
print(tf_tensor.device)
# Use `tfdlpack` to migrate back to PyTorch
dlpack_capsule = tfdlpack.to_dlpack(tf_tensor)
torch_arr = th_dlpack.from_dlpack(dlpack_capsule)
# (source notebook: notebooks/interoperability/gpu_interop_dlframeworks.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Customize your own GPU Kernels in gQuant
#
# gQuant is designed to accelerate quantitative finance workflows on the GPU. The acceleration is facilitated by cuDF dataframes organized into a computation graph. cuDF is a continuously evolving library that provides a pandas-like API. Data scientists sometimes face challenges that it cannot easily solve:
#
# 1. The quantitative work needs customized logic to manipulate the data, and there are no direct cuDF methods that support this logic.
# 2. Each cuDF dataframe method call launches a GPU kernel. For performance-critical tasks, it is sometimes necessary to fuse many computation steps into a single GPU kernel to reduce kernel launch overhead.
#
# The solution is to build customized GPU kernels. The code and examples below illustrate a variety of approaches to implementing customized GPU kernels in Python.
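# As a hedged CPU analogy for the second point: two separate passes over the data mimic two kernel launches, while the fused loop does the same work in a single pass (one launch):

```python
def two_passes(xs):
    # Each list traversal stands in for one GPU kernel launch.
    squared = [x * x for x in xs]       # "launch" 1
    return [s + 1.0 for s in squared]   # "launch" 2

def fused(xs):
    # Same computation, a single traversal ("one fused kernel").
    return [x * x + 1.0 for x in xs]

result = fused([1.0, 2.0, 3.0])
```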
import sys; sys.path.insert(0, '..')
# Load necessary Python modules
import sys
from gquant.dataframe_flow import TaskSpecSchema, TaskGraph
from gquant.dataframe_flow import Node, NodePorts, PortsSpecSchema
import cudf
import numpy as np
from numba import cuda
import cupy
import math
import dask
import dask_cudf
# Define a utility function to verify the results:
def verify(ground_truth, computed):
max_difference = (ground_truth - computed).abs().max()
# print('Max Difference: {}'.format(max_difference))
assert(max_difference < 1e-8)
return max_difference
# ### Example Problem: Calculating the distance of points to the origin
#
# The sample problem is to take a list of points in 2-D space and compute their distance to the origin.
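# A plain-Python reference implementation of the computation, useful as a mental model and for checking results on small inputs:

```python
import math

def euclidean_distances(xs, ys):
    # Distance of each (x, y) point to the origin.
    return [math.sqrt(x * x + y * y) for x, y in zip(xs, ys)]

distances = euclidean_distances([3.0, 0.0], [4.0, 1.0])
```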
# We start by creating a source `Node` in the graph that generates a cuDF dataframe containing some configurable number of random points. A custom node is defined by inheriting from the `Node` class and overriding methods `columns_setup` and `process`. The ports API is enabled by adding (or overriding) the `ports_setup` method. The `ports_setup` must return an instance of `NodePorts` which encapsulates the ports specs. Ports specs are dictionaries with port attributes/options per `PortsSpecSchema`.
#
# In the case of the `PointNode` below the input port is an empty dictionary, since no inputs are required, and the output port is called "points_df_out". When using ports the `process` API must return a dictionary where the keys correspond to the output ports. The `columns_setup` is as before except that the columns dictionaries must be per port.
class PointNode(Node):
def ports_setup(self):
input_ports = {}
output_ports = {
'points_df_out': {
PortsSpecSchema.port_type: cudf.DataFrame
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def columns_setup(self):
self.required = {}
self.addition = {
'points_df_out': {
'x': 'float64',
'y': 'float64'
}
}
def process(self, inputs):
npts = self.conf['npts']
df = cudf.DataFrame()
df['x'] = np.random.rand(npts)
df['y'] = np.random.rand(npts)
output = {
'points_df_out': df,
}
return output
# The distance can be computed via cuDF methods. We define the `DistanceNode` to calculate the Euclidean distance and add a `distance_cudf` column to the output dataframe. We will use that as the ground truth to verify results later. Additionally, the node writes the absolute (Manhattan) distance to a second output port, which is optional.
#
class DistanceNode(Node):
def ports_setup(self):
input_ports = {
'points_df_in': {
'type': cudf.DataFrame
}
}
output_ports = {
'distance_euclid_df': {
'type': cudf.DataFrame
},
'distance_abs_df': {
PortsSpecSchema.port_type: cudf.DataFrame,
PortsSpecSchema.optional: True
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def columns_setup(self):
self.delayed_process = True
req_cols = {
'x': 'float64',
'y': 'float64'
}
self.required = {
'points_df_in': req_cols,
'distance_euclid_df': req_cols,
'distance_abs_df': req_cols
}
self.addition = {
'distance_euclid_df': {
'distance_cudf': 'float64'
},
'distance_abs_df': {
'distance_cudf': 'float64'
}
}
def process(self, inputs):
df = inputs['points_df_in']
# DEBUGGING
try:
from dask.distributed import get_worker
worker = get_worker()
print('worker{} process NODE "{}" worker: {}'.format(
worker.name, self.uid, worker))
# print('worker{} NODE "{}" df type: {}'.format(
# worker.name, self.uid, type(df)))
except (ValueError, ImportError):
pass
calc_absd = self.conf.get('calc_absd', False)
if calc_absd:
df_abs = df.copy()
df_abs['distance_cudf'] = df['x'].abs() + df['y'].abs()
df['distance_cudf'] = (df['x']**2 + df['y']**2).sqrt()
output = {
'distance_euclid_df': df,
}
if calc_absd:
output['distance_abs_df'] = df_abs
return output
# Having these two nodes, we can construct a simple task graph to compute the distance.
# +
# Task specifications.
points_tspec = {
TaskSpecSchema.task_id: 'points_task',
TaskSpecSchema.node_type: PointNode,
TaskSpecSchema.conf: {'npts': 1000},
TaskSpecSchema.inputs: {},
}
cudf_distance_tspec = {
TaskSpecSchema.task_id: 'distance_by_cudf',
TaskSpecSchema.node_type: DistanceNode,
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {
'points_df_in': 'points_task.points_df_out'
}
}
task_list = [points_tspec, cudf_distance_tspec]
task_graph = TaskGraph(task_list)
# -
# We can visualize the task graph with and without ports.
print('WITHOUT PORTS')
task_graph.draw(show='ipynb')
print('WITH PORTS')
task_graph.draw(show='ipynb', show_ports=True)
# The next step is to run the task graph to obtain the distances. The output is identified by the `id` of the distance node:
# +
task_list = [points_tspec, cudf_distance_tspec]
task_graph = TaskGraph(task_list)
outlist = [
'points_task.points_df_out',
'distance_by_cudf.distance_euclid_df',
'distance_by_cudf.distance_abs_df'
]
try:
(points_df, dist_euclid_df_w_cudf, dist_abs_df_w_cudf) = \
task_graph.run(outputs=outlist)
except Exception as err:
print(err)
# -
# Note the error above. We specified `distance_by_cudf.distance_abs_df` as an output, but in the `conf` of `cudf_distance_task_spec` we did not set `calc_absd` to be `True`. Therefore `distance_by_cudf.distance_abs_df` is not calculated (refer to process method of `DistanceNode` class above). Below we remove the `distance_by_cudf.distance_abs_df` from outlist and re-run.
outlist = ['distance_by_cudf.distance_euclid_df']
(dist_euclid_df_w_cudf,) = task_graph.run(outputs=outlist)
print('HEAD dist_euclid_df_w_cudf:\n{}'.format(dist_euclid_df_w_cudf.head()))
# Why did the above run without errors even though the `DistanceNode` defines an output port `distance_abs_df`? That's because in the `ports_setup` that port is configured to be optional.
# ```
# 'distance_abs_df': {
# 'type': cudf.DataFrame,
# 'optional': True
# }
# ```
#
# Note that instead of keywords `type` and `optional` we used `PortsSpecSchema` for these fields (to adhere to good programming practices). If we were to set `output_ports` in the `DistanceNode` as below:
# ```
# output_ports = {
# 'distance_euclid_df': {
# 'type': cudf.DataFrame
# },
# 'distance_abs_df': {
# 'type': cudf.DataFrame
# }
# ```
# Then the `distance_abs_df` would be non-optional and above would have produced an error as well. Try it out yourself by editing the `DistanceNode` and re-running the task-graph (remember to re-instantiate the `cudf_distance_task_spec`).
#
# Below we set the `conf` to calculate absolute distance.
# +
replace_spec = {
'distance_by_cudf': {
TaskSpecSchema.conf: {
'calc_absd': True
}
}
}
outlist = [
'points_task.points_df_out',
'distance_by_cudf.distance_euclid_df',
'distance_by_cudf.distance_abs_df'
]
(points_df, dist_euclid_df_w_cudf, dist_abs_df_w_cudf) = \
task_graph.run(outputs=outlist, replace=replace_spec)
# -
# We could have setup the `cudf_distance_tspec` to calculate absolute distance to begin with and obtained all the outputs without errors. The above was meant to demonstrate how to work with ports.
print('points_df:\n{}\n'.format(points_df.head()))
print('dist_euclid_df_w_cudf:\n{}\n'.format(dist_euclid_df_w_cudf.head()))
print('dist_abs_df_w_cudf:\n{}\n'.format(dist_abs_df_w_cudf.head()))
# ### Customized Kernel with Numba library
#
# Numba is an excellent Python library for accelerating numerical computations. It supports CUDA GPU programming by compiling a restricted subset of Python code directly into CUDA kernels and device functions. A Numba GPU kernel is written in Python and translated (JIT, just-in-time compiled) into GPU code at runtime; this is achieved by decorating a Python function with `@cuda.jit`.
# Just like a C/C++ CUDA kernel, the `distance_kernel` function is executed by thousands of GPU threads. The thread id is computed from the `threadIdx.x`, `blockIdx.x` and `blockDim.x` built-in variables. Please check the [CUDA programming guide](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#thread-hierarchy) for details.
# A cuDF series can be converted to a GPU array compatible with Numba via the `.to_gpu_array()` API. The next step is to define a Node that calls this Numba kernel to compute the distance and save the result into the `distance_numba` column of the output dataframe.
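# The indexing and launch-configuration arithmetic can be checked on the CPU. This sketch mirrors the math used in the node below (the function names are illustrative):

```python
def global_thread_id(threadIdx_x, blockIdx_x, blockDim_x):
    # Flattened 1-D index of a thread across the whole grid.
    return threadIdx_x + blockIdx_x * blockDim_x

def blocks_needed(n_elements, threads_per_block):
    # Ceiling division so every element is covered by at least one thread.
    return (n_elements - 1) // threads_per_block + 1

# With 1000 elements and 16 threads per block, the last thread's global id
# exceeds the array length, which is why the kernel needs the bounds check.
n_blocks = blocks_needed(1000, 16)
last_id = global_thread_id(15, n_blocks - 1, 16)
```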
# +
import rmm
@cuda.jit
def distance_kernel(x, y, distance, array_len):
# ii - overall thread index
ii = cuda.threadIdx.x + cuda.blockIdx.x * cuda.blockDim.x
if ii < array_len:
distance[ii] = math.sqrt(x[ii]**2 + y[ii]**2)
class NumbaDistanceNode(Node):
def ports_setup(self):
input_ports = {
'points_df_in': {
PortsSpecSchema.port_type: cudf.DataFrame
}
}
output_ports = {
'distance_df': {
PortsSpecSchema.port_type: cudf.DataFrame
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def columns_setup(self,):
self.delayed_process = True
required = {'x': 'float64',
'y': 'float64'}
self.required = {
'points_df_in': required,
'distance_df': required
}
self.addition = {
'distance_df': {'distance_numba': 'float64'}
}
def process(self, inputs):
df = inputs['points_df_in']
# DEBUGGING
try:
from dask.distributed import get_worker
worker = get_worker()
print('worker{} process NODE "{}" worker: {}'.format(
worker.name, self.uid, worker))
# print('worker{} NODE "{}" df type: {}'.format(
# worker.name, self.uid, type(df)))
except (ValueError, ImportError):
pass
number_of_threads = 16
number_of_blocks = ((len(df) - 1)//number_of_threads) + 1
# Inits device array by setting 0 for each index.
# df['distance_numba'] = 0.0
darr = rmm.device_array(len(df))
distance_kernel[(number_of_blocks,), (number_of_threads,)](
df['x'].to_gpu_array(),
df['y'].to_gpu_array(),
darr,
len(df))
df['distance_numba'] = darr
return {'distance_df': df}
# -
# The `self.delayed_process = True` flag in `columns_setup` is necessary to enable the logic in the `Node` class for handling `dask_cudf` dataframes, so that Dask can be used for distributed (i.e. multi-GPU) computation in later examples. The `dask_cudf` dataframe does not support customized GPU kernels directly; the `to_delayed` and `from_delayed` low-level interfaces of `dask_cudf` enable this support. The gQuant framework handles `dask_cudf` dataframes automatically under the hood when this flag is set.
# ### Customized Kernel with the CuPy library
#
# CuPy is an alternative to Numba. Numba JIT-compiles Python code into GPU device code at runtime, which brings some usage limitations as well as JIT compilation latency: the first time a Python process calls a Numba GPU kernel, Numba has to compile the Python code, and every new Python process has to recompile it. If advanced CUDA features are needed and latency is important, CuPy is an alternative library that compiles C/C++ CUDA code directly. CuPy caches the compiled device code on disk (default location `$(HOME)/.cupy/kernel_cache`, changeable via the `CUPY_CACHE_DIR` environment variable), eliminating the compilation latency for subsequent Python processes.
#
# A CuPy GPU kernel is essentially a C/C++ GPU kernel. Below we define the `compute_distance` kernel using CuPy.
# Using gQuant we can then define a Node that calls this CuPy kernel to compute the distance and save the results into the `distance_cupy` column of a `cudf` dataframe.
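# For example, the kernel cache location can be redirected before CuPy compiles any kernels; the directory below is illustrative:

```python
import os

# Must be set before the first kernel compilation in the process,
# ideally before `import cupy`.
os.environ["CUPY_CACHE_DIR"] = os.path.join("/tmp", "my_cupy_kernel_cache")
```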
# +
kernel_string = r'''
extern "C" __global__
void compute_distance(const double* x, const double* y,
double* distance, int arr_len) {
int tid = blockDim.x * blockIdx.x + threadIdx.x;
if (tid < arr_len){
distance[tid] = sqrt(x[tid]*x[tid] + y[tid]*y[tid]);
}
}
'''
class CupyDistanceNode(Node):
def ports_setup(self):
input_ports = {
'points_df_in': {
PortsSpecSchema.port_type: cudf.DataFrame
}
}
output_ports = {
'distance_df': {
PortsSpecSchema.port_type: cudf.DataFrame
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def columns_setup(self,):
cols_required = {'x': 'float64',
'y': 'float64'}
self.required = {
'points_df_in': cols_required,
'distance_df': cols_required
}
self.addition = {
'distance_df': {
'distance_cupy': 'float64'
}
}
self.delayed_process = True
def get_kernel(self):
raw_kernel = cupy.RawKernel(kernel_string, 'compute_distance')
return raw_kernel
def process(self, inputs):
df = inputs['points_df_in']
cupy_x = cupy.asarray(df['x'])
cupy_y = cupy.asarray(df['y'])
number_of_threads = 16
number_of_blocks = (len(df) - 1)//number_of_threads + 1
dis = cupy.ndarray(len(df), dtype=cupy.float64)
self.get_kernel()((number_of_blocks,), (number_of_threads,),
(cupy_x, cupy_y, dis, len(df)))
df['distance_cupy'] = dis
return {'distance_df': df}
# -
# The `self.delayed_process = True` flag is added for the same reason as in `NumbaDistanceNode`, i.e. to support `dask_cudf` dataframes.
# ### Computing using the Nodes with customized GPU kernels
#
# First we construct the computation graph for gQuant.
# +
# For comparison to above re-use points dataframe instead
# of rand generating each time when running the task-graph.
points_tspec.update({
TaskSpecSchema.load: {
'points_df_out': points_df
}
})
numba_distance_tspec = {
TaskSpecSchema.task_id: 'distance_by_numba',
TaskSpecSchema.node_type: NumbaDistanceNode,
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {
'points_df_in': 'points_task.points_df_out'
},
}
cupy_distance_tspec = {
TaskSpecSchema.task_id: 'distance_by_cupy',
TaskSpecSchema.node_type: CupyDistanceNode,
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {
'points_df_in': 'points_task.points_df_out'
},
}
task_list = [
points_tspec,
cudf_distance_tspec,
numba_distance_tspec,
cupy_distance_tspec
]
task_graph = TaskGraph(task_list)
task_graph.draw(show='ipynb', show_ports=True)
# -
# Then we run the tasks.
out_list = [
'distance_by_cudf.distance_euclid_df',
'distance_by_numba.distance_df',
'distance_by_cupy.distance_df'
]
(df_w_cudf, df_w_numba, df_w_cupy) = task_graph.run(out_list)
print('HEAD df_w_cudf:\n{}\n'.format(df_w_cudf.head()))
print('HEAD df_w_numba:\n{}\n'.format(df_w_numba.head()))
print('HEAD df_w_cupy:\n{}\n'.format(df_w_cupy.head()))
# Use `verify` function defined above to verify the results:
mdiff = verify(df_w_cudf['distance_cudf'], df_w_numba['distance_numba'])
print('Max Difference cudf to numba: {}'.format(mdiff))
mdiff = verify(df_w_cudf['distance_cudf'], df_w_cupy['distance_cupy'])
print('Max Difference cudf to cupy: {}'.format(mdiff))
# To illustrate multi-input nodes let's create a verify node.
class VerifyNode(Node):
def ports_setup(self):
input_ports = {
'df1': {
PortsSpecSchema.port_type: [cudf.DataFrame, dask_cudf.DataFrame]
},
'df2': {
PortsSpecSchema.port_type: [cudf.DataFrame, dask_cudf.DataFrame]
}
}
output_ports = {
'max_diff': {
PortsSpecSchema.port_type: float
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def columns_setup(self):
pass
def process(self, inputs):
df1 = inputs['df1']
df2 = inputs['df2']
col_df1 = self.conf['df1_col']
col_df2 = self.conf['df2_col']
df1_col = df1[col_df1]
if isinstance(df1, dask_cudf.DataFrame):
# df1_col = df1_col.compute()
pass
df2_col = df2[col_df2]
if isinstance(df2, dask_cudf.DataFrame):
# df2_col = df2_col.compute()
pass
max_difference = (df1_col - df2_col).abs().max()
if isinstance(max_difference, np.float64):
max_difference = max_difference.item()
if isinstance(max_difference, dask.dataframe.core.Scalar):
max_difference = float(max_difference.compute())
# print('Max Difference: {}'.format(max_difference))
# assert(max_difference < 1e-8)
return {'max_diff': max_difference}
# +
verify_tspec = {
TaskSpecSchema.task_id: 'verify_cudf_to_numba',
TaskSpecSchema.node_type: VerifyNode,
TaskSpecSchema.conf: {
'df1_col': 'distance_cudf',
'df2_col': 'distance_numba'
},
TaskSpecSchema.inputs: {
'df1': 'distance_by_cudf.distance_euclid_df',
'df2': 'distance_by_numba.distance_df'
}
}
verify_tspec2 = {
TaskSpecSchema.task_id: 'verify_cudf_to_cupy',
TaskSpecSchema.node_type: VerifyNode,
TaskSpecSchema.conf: {
'df1_col': 'distance_cudf',
'df2_col': 'distance_cupy'
},
TaskSpecSchema.inputs: {
'df1': 'distance_by_cudf.distance_euclid_df',
'df2': 'distance_by_cupy.distance_df'
}
}
task_graph.extend([verify_tspec, verify_tspec2], replace=True)
task_graph.draw(show='ipynb', show_ports=True)
(max_cudf_to_numba_diff, max_cudf_to_cupy_diff) = task_graph.run([
'verify_cudf_to_numba.max_diff',
'verify_cudf_to_cupy.max_diff'
])
print('Max Difference cudf to numba: {}'.format(max_cudf_to_numba_diff))
print('Max Difference cudf to cupy: {}'.format(max_cudf_to_cupy_diff))
# -
# ### Dask distributed computation
#
# Using Dask and `dask-cudf` we can run the Nodes with customized GPU kernels on distributed dataframes. Under the hood of the `Node` class, the Dask delayed-processing API is applied to cudf dataframes when the `self.delayed_process = True` flag is set.
#
# We first start a distributed Dask environment. When a Dask client is instantiated it registers itself as the default Dask scheduler (<http://distributed.dask.org/en/latest/client.html>), so all subsequent Dask distributed dataframe operations run in distributed fashion.
# +
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
cluster = LocalCUDACluster()
client = Client(cluster)
client
# -
# The Dask status page can be displayed in a web browser at `<ip-address>:8787`, where the IP address corresponds to the machine where the Dask cluster (scheduler) was launched, most likely the same machine this Jupyter notebook is running on. The status page is convenient for monitoring Dask distributed processing. <http://distributed.dask.org/en/latest/web.html>
# The next step is to partition the `cudf` dataframe into a `dask_cudf` dataframe. Here we make the number of partitions correspond to the number of workers:
class DistributedNode(Node):
def ports_setup(self):
input_ports = {
'points_df_in': {
PortsSpecSchema.port_type: cudf.DataFrame
}
}
output_ports = {
'points_ddf_out': {
PortsSpecSchema.port_type: dask_cudf.DataFrame
}
}
return NodePorts(inports=input_ports, outports=output_ports)
def columns_setup(self,):
required = {
'x': 'float64',
'y': 'float64'
}
self.required = {
'points_df_in': required,
'points_ddf_out': required
}
def process(self, inputs):
npartitions = self.conf['npartitions']
df = inputs['points_df_in']
ddf = dask_cudf.from_cudf(df, npartitions=npartitions)
return {'points_ddf_out': ddf}
# We add this distribution node to the computation graph to convert `cudf` dataframes into `dask-cudf` dataframes. The `dask-cudf` dataframes are handled automatically in gQuant when `self.delayed_process = True` is set within a `Node` implementation (in `columns_setup`). When using nodes with ports and `self.delayed_process = True`, all input and output ports must be of type `cudf.DataFrame`. Otherwise, leave `self.delayed_process` unset and write custom logic to handle distributed dataframes (refer to `VerifyNode` above for an example where `dask_cudf` dataframes are handled directly within the `process` method).
# +
npartitions = len(client.scheduler_info()['workers'])
distribute_tspec = {
TaskSpecSchema.task_id: 'distributed_points',
TaskSpecSchema.node_type: DistributedNode,
TaskSpecSchema.conf: {'npartitions': npartitions},
TaskSpecSchema.inputs: {
'points_df_in': 'points_task.points_df_out'
}
}
dask_cudf_distance_tspec = {
TaskSpecSchema.task_id: 'distance_by_cudf',
TaskSpecSchema.node_type: DistanceNode,
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {
'points_df_in': 'distributed_points.points_ddf_out'
}
}
dask_numba_distance_tspec = {
TaskSpecSchema.task_id: 'distance_by_numba',
TaskSpecSchema.node_type: NumbaDistanceNode,
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {
'points_df_in': 'distributed_points.points_ddf_out'
}
}
dask_cupy_distance_tspec = {
TaskSpecSchema.task_id: 'distance_by_cupy',
TaskSpecSchema.node_type: CupyDistanceNode,
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {
'points_df_in': 'distributed_points.points_ddf_out'
}
}
task_list = [
points_tspec,
distribute_tspec,
dask_cudf_distance_tspec,
dask_numba_distance_tspec,
dask_cupy_distance_tspec
]
task_graph = TaskGraph(task_list)
task_graph.draw(show='ipynb', show_ports=True)
out_list = [
'distributed_points.points_ddf_out',
'distance_by_cudf.distance_euclid_df',
'distance_by_numba.distance_df',
'distance_by_cupy.distance_df'
]
(points_ddf, ddf_w_cudf, ddf_w_numba, ddf_w_cupy) = task_graph.run(out_list)
df_w_cudf = ddf_w_cudf.compute()
df_w_numba = ddf_w_numba.compute()
df_w_cupy = ddf_w_cupy.compute()
# -
# Verify the results:
# +
verify_cudf_numba_tspec = verify_tspec.copy()
verify_cudf_cupy_tspec = verify_tspec2.copy()
task_graph.extend(
[verify_cudf_numba_tspec,
verify_cudf_cupy_tspec],
replace=True)
task_graph.draw(show='ipynb', show_ports=True)
# Use results above and avoid re-running dask
replace_spec = {
'distance_by_cudf': {
TaskSpecSchema.load: {
'distance_euclid_df': ddf_w_cudf
}
},
'distance_by_numba': {
TaskSpecSchema.load: {
'distance_df': ddf_w_numba
}
},
'distance_by_cupy': {
TaskSpecSchema.load: {
'distance_df': ddf_w_cupy
}
}
}
(max_cudf_to_numba_diff, max_cudf_to_cupy_diff) = task_graph.run(
['verify_cudf_to_numba.max_diff',
'verify_cudf_to_cupy.max_diff'],
replace=replace_spec
)
print('HEAD points_ddf:\n{}\n'.format(points_ddf.head()))
print('HEAD df_w_cudf:\n{}\n'.format(ddf_w_cudf.head()))
print('HEAD df_w_numba:\n{}\n'.format(ddf_w_numba.head()))
print('HEAD df_w_cupy:\n{}\n'.format(ddf_w_cupy.head()))
print('Max Difference cudf to numba: {}'.format(max_cudf_to_numba_diff))
print('Max Difference cudf to cupy: {}'.format(max_cudf_to_cupy_diff))
# -
# One limitation to be aware of when using customized kernels within Nodes in a Dask environment is that each GPU kernel works on a single partition of the dataframe. If the computation depends on other partitions of the dataframe, the approach above does not work.
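# A hedged CPU illustration of that limitation: a computation that needs neighbouring rows (here, consecutive differences) loses the result at the partition boundary when applied per partition:

```python
def diff(xs):
    # Consecutive differences; each result needs its left neighbour.
    return [xs[i] - xs[i - 1] for i in range(1, len(xs))]

data = [1, 4, 9, 16, 25, 36]
global_result = diff(data)                               # sees all neighbours

partitions = [data[:3], data[3:]]                        # two "dataframe partitions"
per_partition = [d for p in partitions for d in diff(p)] # boundary pair (9, 16) is lost
```

The distance computation in this notebook is row-wise, so it is unaffected; but a rolling or cumulative computation would need custom cross-partition handling.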
# ### Saving Custom Nodes and Kernels
#
# The gQuant examples already implement a number of `Nodes`. These can be found in `gquant.plugin_nodes` submodules.
#
# The customized kernels and nodes can be saved to your own Python modules for future re-use instead of having to re-define them at runtime. The nodes defined above were written to a Python module "custom_port_nodes.py" (the `DistanceNode` was simplified to omit the absolute distance calculation). We will re-run our workflow, importing the Nodes from that custom module.
#
# When defining the tasks we specify `filepath` as the path to the Python module that has the Node definition. Notice that `node_type` is specified as a string instead of a class; the string is the class name of the node that will be imported when running the task.
# +
npartitions = len(client.scheduler_info()['workers'])
points_tspec = {
TaskSpecSchema.task_id: 'points_task',
TaskSpecSchema.node_type: 'PointNode',
TaskSpecSchema.filepath: 'custom_port_nodes.py',
TaskSpecSchema.conf: {'npts': 1000},
TaskSpecSchema.inputs: {},
}
distribute_tspec = {
TaskSpecSchema.task_id: 'distributed_points',
TaskSpecSchema.node_type: 'DistributedNode',
TaskSpecSchema.filepath: 'custom_port_nodes.py',
TaskSpecSchema.conf: {'npartitions': npartitions},
TaskSpecSchema.inputs: {
'points_df_in': 'points_task.points_df_out'
}
}
dask_cudf_distance_tspec = {
TaskSpecSchema.task_id: 'distance_by_cudf',
TaskSpecSchema.node_type: 'DistanceNode',
TaskSpecSchema.filepath: 'custom_port_nodes.py',
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {
'points_df_in': 'distributed_points.points_ddf_out'
}
}
dask_numba_distance_tspec = {
TaskSpecSchema.task_id: 'distance_by_numba',
TaskSpecSchema.node_type: 'NumbaDistanceNode',
TaskSpecSchema.filepath: 'custom_port_nodes.py',
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {
'points_df_in': 'distributed_points.points_ddf_out'
}
}
dask_cupy_distance_tspec = {
TaskSpecSchema.task_id: 'distance_by_cupy',
TaskSpecSchema.node_type: 'CupyDistanceNode',
TaskSpecSchema.filepath: 'custom_port_nodes.py',
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {
'points_df_in': 'distributed_points.points_ddf_out'
}
}
verify_cudf_to_numba_tspec = {
TaskSpecSchema.task_id: 'verify_cudf_to_numba',
TaskSpecSchema.node_type: 'VerifyNode',
TaskSpecSchema.filepath: 'custom_port_nodes.py',
TaskSpecSchema.conf: {
'df1_col': 'distance_cudf',
'df2_col': 'distance_numba'
},
TaskSpecSchema.inputs: {
'df1': 'distance_by_cudf.distance_df',
'df2': 'distance_by_numba.distance_df'
}
}
verify_cudf_to_cupy_tspec = {
TaskSpecSchema.task_id: 'verify_cudf_to_cupy',
TaskSpecSchema.node_type: 'VerifyNode',
TaskSpecSchema.filepath: 'custom_port_nodes.py',
TaskSpecSchema.conf: {
'df1_col': 'distance_cudf',
'df2_col': 'distance_cupy'
},
TaskSpecSchema.inputs: {
'df1': 'distance_by_cudf.distance_df',
'df2': 'distance_by_cupy.distance_df'
}
}
task_list = [
points_tspec,
distribute_tspec,
dask_cudf_distance_tspec,
dask_numba_distance_tspec,
dask_cupy_distance_tspec,
verify_cudf_to_numba_tspec,
verify_cudf_to_cupy_tspec
]
task_graph = TaskGraph(task_list)
task_graph.draw(show='ipynb', show_ports=True)
# +
out_list = [
'distance_by_cudf.distance_df',
'distance_by_numba.distance_df',
'distance_by_cupy.distance_df',
'verify_cudf_to_numba.max_diff',
'verify_cudf_to_cupy.max_diff'
]
(ddf_w_cudf, ddf_w_numba, ddf_w_cupy,
mdiff_cudf_to_numba, mdiff_cudf_to_cupy) = task_graph.run(out_list)
print('HEAD df_w_cudf:\n{}\n'.format(ddf_w_cudf.head()))
print('HEAD df_w_numba:\n{}\n'.format(ddf_w_numba.head()))
print('HEAD df_w_cupy:\n{}\n'.format(ddf_w_cupy.head()))
print('Max Difference cudf to numba: {}'.format(mdiff_cudf_to_numba))
print('Max Difference cudf to cupy: {}'.format(mdiff_cudf_to_cupy))
# -
# The final illustration shows how to save and load a task graph to a file for re-use.
task_graph.save_taskgraph('custom_wflow.yaml')
task_graph = TaskGraph.load_taskgraph('custom_wflow.yaml')
# +
# update npartitions in case the scheduler is running with a
# different number of workers than what was saved.
npartitions = len(client.scheduler_info()['workers'])
replace_spec = {
'distributed_points': {
TaskSpecSchema.conf: {'npartitions': npartitions},
}
}
out_list = [
'distance_by_cudf.distance_df',
'distance_by_numba.distance_df',
'distance_by_cupy.distance_df',
'verify_cudf_to_numba.max_diff',
'verify_cudf_to_cupy.max_diff'
]
(ddf_w_cudf, ddf_w_numba, ddf_w_cupy,
mdiff_cudf_to_numba, mdiff_cudf_to_cupy) = task_graph.run(
out_list, replace=replace_spec)
print('HEAD df_w_cudf:\n{}\n'.format(ddf_w_cudf.head()))
print('HEAD df_w_numba:\n{}\n'.format(ddf_w_numba.head()))
print('HEAD df_w_cupy:\n{}\n'.format(ddf_w_cupy.head()))
print('Max Difference cudf to numba: {}'.format(mdiff_cudf_to_numba))
print('Max Difference cudf to cupy: {}'.format(mdiff_cudf_to_cupy))
# -
# ### Conclusion
#
# Using customized GPU kernels allows data scientists to implement and incorporate advanced algorithms. We demonstrated implementations using Numba and CuPy.
#
# The Numba approach enables data scientists to write GPU kernels directly in the Python language. Numba is easy to use for implementing and accelerating computations. However, there is some overhead incurred to compile the kernels the first time they are used in a Python process. Currently, the Numba library supports only primitive data types, and some advanced CUDA programming features, such as function pointers and recursion, are not supported.
#
# The CuPy method is very flexible, because data scientists write C/C++ GPU kernels with CUDA directly. All the CUDA programming features are supported. CuPy compiles the kernel and caches the device code to the filesystem, so the launch overhead is low. Also, the GPU kernel is built statically, resulting in runtime efficiency. However, it may be harder for data scientists to use, because C/C++ CUDA programming is more involved.
#
# Below is a brief summary comparison table:
#
# | Methods | Development Difficulty | Flexibility | Efficiency | Latency |
# |---|---|---|---|---|
# | Numba method | medium | medium | low | high |
# | CuPy method | hard | high | high | low |
#
# We recommend that data scientists select the approach appropriate for their task, taking into consideration the efficiency, latency, difficulty and flexibility of their workflow.
#
# In this blog, we showed how to wrap customized GPU kernels in gQuant nodes. Also, by letting gQuant handle the low-level Dask interfaces for the developer, we demonstrated how to use a gQuant workflow with Dask distributed computations.
# +
# Clean up
# Shutdown the Dask cluster
client.close()
cluster.close()
# -
| notebooks/05b_customize_nodes_with_ports.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# +
# Future functions (a __future__ import must come before any other statement)
from __future__ import print_function
# System level
import sys
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D, Convolution2D
from keras.models import Model
from keras import backend as K
import matplotlib.pyplot as plt
#import seaborn as sns
# %matplotlib inline
import numpy as np
import keras
keras.backend.backend();
# +
# ------------------------------------------------------------------------------
# Set variables
# ------------------------------------------------------------------------------
# training variables
nb_train = 2000 # number of objects for training (of both classes)
nb_test = 2000 # ... test
nb_epoch = 10 # number of passes over full neural network during training
batch_size = 32 # number objects to put in memory at once
np.random.seed(1928)
dim = 28 # image size
# +
# make data
import script17_0411 as ks
from importlib import reload  # reload is a builtin only in Python 2
reload(ks)
x_train = ks.make_inputs(nb_train, dim=dim)
y_train = ks.make_outputs_sum(x_train)
x_test = ks.make_inputs(nb_test, dim=dim)
y_test = ks.make_outputs_sum(x_test)
print(x_train.shape)
print(x_test.shape)
# -
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Wrangling: We Rate Dogs Twitter account
# Import
import numpy as np
import pandas as pd
import requests
import tweepy
import json
import time
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# %matplotlib inline
import seaborn as sb
# ## Table of contents
# <ul>
# <li><a href="#gathering">Data gathering</a></li>
# <li><a href="#assessment">Data assessment</a></li>
# <li><a href="#cleaning">Data cleaning</a></li>
# <li><a href="#saving">Saving</a></li>
# <li><a href="#analysis">Analysis and visualisation</a></li>
# </ul>
# <a id='gathering'></a>
# ## Data gathering
# In the first part of this project, the required data will be gathered from different sources.
# Create a data frame from the provided .csv-file
df_archive = pd.read_csv('twitter-archive-enhanced.csv')
# Download the provided .tsv-file programmatically
r = requests.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv')
# Check success of request
r.status_code
# Write the downloaded object into a file
with open('image-predictions.tsv', mode = 'wb') as file:
file.write(r.content)
# Create a data frame
df_images = pd.read_csv('image-predictions.tsv', sep='\t')
# +
# Prepare for using the Twitter API
consumer_key = 'CONSUMER KEY'
consumer_secret = 'CONSUMER SECRET'
access_token = 'ACCESS TOKEN'
access_secret = 'ACCESS SECRET'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True)
# +
# Query json data for each tweet ID in the Twitter archive of WeRateDogs
missing_ids = [] # initialise list
count = 1
start = time.time()
with open('tweet_json.txt', 'w') as outfile:
for i in df_archive.tweet_id: # get each tweet ID
try:
# write the json data for each ID into a text file
tweet = api.get_status(i, tweet_mode = 'extended')
json.dump(tweet._json, outfile)
outfile.write('\n') # add new line
except tweepy.TweepError:
# catch error and write missing IDs into list
missing_ids.append(i)
print(count, ': Tweet ID ', i)
count += 1
end = time.time()
print('Elapsed time: ', (end - start)/60, ' minutes')
# -
# Read text file with json data line by line
json_list = []
with open('tweet_json.txt') as file:
for line in file:
json_list.append(json.loads(line))
# Get tweet ids that have corresponding json data
json_ids = np.array(df_archive.tweet_id[df_archive['tweet_id'].isin(missing_ids) == False])
# Write a data frame with the retweet and favorite counts
df_list = []
for i in range(len(json_list)):
df_list.append({'tweet_id': json_ids[i],
'retweet_count': json_list[i]['retweet_count'],
'favorite_count': json_list[i]['favorite_count']})
df_popularity = pd.DataFrame(df_list)
# <a id='assessment'></a>
# ## Data assessment
# In this part of the project, I will inspect the data to identify quality and tidiness issues.
df_archive.head()
df_archive.info()
df_archive.sample(5)
df_archive.name.value_counts()
df_archive.name.value_counts().sample(20)
# print the text of all tweet IDs with non-names to identify the errors
# get index of non-names
ind = df_archive.name.str.extract('(^[a-z]+$)').dropna().index
# print index and text for non-names
for i in ind:
print(i, '--', df_archive.text[i])
df_archive.doggo.value_counts()
df_archive.floofer.value_counts()
df_archive.pupper.value_counts()
df_archive.puppo.value_counts()
df_archive.rating_numerator.value_counts()
# Print the text of all tweet IDs with unusual numerators to identify the errors
# Get unusual values
vals = df_archive.rating_numerator.value_counts()[df_archive.rating_numerator.value_counts() <=2].index
# Filter for unusual values
mask = df_archive.rating_numerator.isin(vals)
# Print index and text of unusual values
for i in df_archive.loc[mask, 'text'].index:
print(i, '--', df_archive.text[i])
df_archive.rating_denominator.value_counts()
# Print the text of all tweet IDs with unusual denominators to identify the errors
# Get unusual values
vals = df_archive.rating_denominator.value_counts()[df_archive.rating_denominator.value_counts() <=3].index
# Filter for unusual values
mask = df_archive.rating_denominator.isin(vals)
# Print index and text of unusual values
for i in df_archive.loc[mask, 'text'].index:
print(i, '--', df_archive.text[i])
df_images.head()
df_images.sample(5)
df_images.info()
df_popularity.head()
df_popularity.info()
# The following issues were found after executing the code above:
# ### Quality issues
# #### `df_archive`
# 1. `df_archive` has more rows than `df_images`, so there are some tweets with no images (project requirements are to use only tweets with images)
# 2. Missing values in dog stage variables and `name` are marked as 'None' instead of NaN
# 3. `tweet_id`, `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id` and `retweeted_status_user_id` are floats instead of strings
# 4. Data frame contains retweets and replies but should only contain original tweets
# 5. Some names are wrong in `name`
# 1. It seems that all incorrect name entries are spelt with lowercase letters.
# 2. Checking the text of these cases reveals that the expressions "named..." and "name is..." were not considered when extracting the names from the text
# 3. Index no. 369 should have the name "Grace"
# 6. Some values in `rating_numerator` and `rating_denominator` appear to be wrong
# 1. index of wrong values: [55, 313, 340, 342, 516, 763, 1068, 1165, 1202, 1662, 1712, 2335]
# 2. correct values: [13/10, 13/10, 9.75/10, NaN, NaN, 11.27/10, 14/10, 13/10, 11/10, NaN, 11.26/10, 9/10]
# 7. `timestamp` and `retweeted_status_timestamp` columns are strings instead of dates. This will be ignored for two reasons, though:
# 1. I do not intend to analyse any time based information
# 2. When storing the data frame into a .csv-file, the date format will be lost again
#
# #### `df_images`
# 8. Some images do not show dogs
# 9. There are three possible dog breeds for each image instead of one
# 10. Names of dog breeds are not formatted consistently
# 11. `tweet_id` is int instead of string
#
#
# #### `df_popularity`
# 12. Fewer tweet IDs than in the other data frames. Instead of trimming the other data frames, these will be accepted as missing values. No action required.
# 13. `tweet_id` is int instead of string
#
# ### Tidiness issues
# 14. `df_archive`: Dog stage variable is written as 4 columns but it should be one column
# 15. `df_archive`: Rating is separated into two columns but should be one variable in one column
# 16. `df_popularity`: Should not be a separate data frame but should be written into `df_archive`
# 17. `df_images`: The selected dog breed and jpg url should be written into `df_archive`
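# As an aside on issue 7, the date-format loss can be seen in a short sketch (using a made-up timestamp, not data from the archive): a parsed datetime column reverts to plain strings after a .csv round trip unless it is re-parsed on load.

```python
import io

import pandas as pd

# Parsed datetime column, then a .csv round trip through an in-memory buffer
df = pd.DataFrame({"timestamp": pd.to_datetime(["2017-08-01 16:23:56"])})
buf = io.StringIO()
df.to_csv(buf, index=False)

# Without parse_dates, the reloaded column is plain strings again
reloaded = pd.read_csv(io.StringIO(buf.getvalue()))
# parse_dates restores the datetime dtype on load
reparsed = pd.read_csv(io.StringIO(buf.getvalue()), parse_dates=["timestamp"])
print(reloaded.timestamp.dtype)  # object
print(reparsed.timestamp.dtype)  # datetime64[ns]
```

# So if time-based analysis were ever needed later, `parse_dates` on load would be enough to recover the dtype.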
# <a id='cleaning'></a>
# ## Data cleaning
# Make backup copies of all data frames
df_archive_raw = df_archive.copy()
df_images_raw = df_images.copy()
df_popularity_raw = df_popularity.copy()
# ### Issues 8 & 9: Some images do not show dogs in `df_images` (this needs to be fixed before I can tackle Issue 1) & there are three possible dog breeds for each image instead of one
# ##### Define
# Exclude rows where `p1`, `p2` and `p3` are not a dog breed and define the "true breed" for each image by picking the breed prediction with the highest confidence.
# ##### Code
# Select only rows with at least one recognised dog breed
df_images = df_images[((df_images.p1_dog == True) | (df_images.p2_dog == True) | (df_images.p3_dog == True))]
# Choose dog breed with biggest confidence in each row
for i in df_images.index:
if (df_images.p1_conf[i] > df_images.p2_conf[i]) & (df_images.p1_dog[i] == True):
df_images.loc[i,'selected_breed'] = df_images.loc[i,'p1']
elif (df_images.p2_conf[i] > df_images.p3_conf[i]) & (df_images.p2_dog[i] == True):
df_images.loc[i,'selected_breed'] = df_images.loc[i,'p2']
else:
df_images.loc[i,'selected_breed'] = df_images.loc[i,'p3']
# ##### Test
# Check if selected breed is a dog breed and has the highest confidence
df_images.sample(10)
# ### Issue 1: `df_archive` has more rows than `df_images`
# ##### Define
# Select only those tweet IDs in `df_archive` that are in `df_images`
# ##### Code
# Get tweet ids in df_images and filter df_archive
tweet_list = df_images.tweet_id.values
df_archive = df_archive[df_archive['tweet_id'].isin(tweet_list)]
# ##### Test
# Check if the two data frames have the same length
len(df_archive) == len(df_images)
# ### Issue 2: Missing values in dog stage variables and `name` are marked as 'None' instead of NaN in `df_archive`
# ##### Define
# Replace 'None' with NaN
# ##### Code
df_archive = df_archive.replace('None', np.nan)
# ##### Test
# make sure that 'None' is no longer a value
df_archive.doggo.value_counts()
# Make sure that 'None' is no longer a value
df_archive.floofer.value_counts()
# Make sure that 'None' is no longer a value
df_archive.pupper.value_counts()
# Make sure that 'None' is no longer a value
df_archive.puppo.value_counts()
# ### Issues 3, 11 & 13: `tweet_id`, `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id` and `retweeted_status_user_id` have wrong data types in `df_archive`, `df_images` and `df_popularity`
# ##### Define
# Change format `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id` and `retweeted_status_user_id` to int
# ##### Code
# +
# Change data format
df_archive.tweet_id = df_archive.tweet_id.astype('object')
df_archive.in_reply_to_status_id = df_archive.in_reply_to_status_id.astype('object')
df_archive.in_reply_to_user_id = df_archive.in_reply_to_user_id.astype('object')
df_archive.retweeted_status_id = df_archive.retweeted_status_id.astype('object')
df_archive.retweeted_status_user_id = df_archive.retweeted_status_user_id.astype('object')
df_images.tweet_id = df_images.tweet_id.astype('object')
df_popularity.tweet_id = df_popularity.tweet_id.astype('object')
# -
# ##### Test
# Check data type
df_archive.info()
# Check data type
df_images.info()
# Check data type
df_popularity.info()
# ### Issue 4: `df_archive` contains retweets and replies
# ##### Define
# Filter `df_archive` to keep only those rows where `in_reply_to_status_id` and `retweeted_status_id` are NaNs. Then drop these columns because they are of no use.
# ##### Code (filter)
# Include only rows with no retweets and replies
df_archive = df_archive[(df_archive.in_reply_to_status_id.isnull()) & (df_archive.retweeted_status_id.isnull())]
# ##### Test
# Check data type (before dropping)
df_archive.info()
# ##### Code (drop unused columns)
# Drop unused columns
df_archive = df_archive.drop(['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp'], axis = 1)
# ##### Test
# Make sure that columns were dropped
df_archive.info()
# ### Issue 5: Some names are wrong in `name`
# ##### Define
# 1. Extract and replace the names after the expressions "named..." and "name is..."
# 2. Replace the name with index 369 with "Grace"
# 3. Replace the remaining lowercase entries in `name` with NaNs
# ##### Code
# Get dataframe with non-names
ind = df_archive.name.str.extract('(^[a-z]+$)').dropna().index
df_archive_nonames = df_archive.loc[ind]
# Extract and replace names after "named..."
named = df_archive_nonames.text.str.extract('named\s([A-Z][a-z]+)[\s\.]').dropna().rename(columns={0:'name'})
df_archive.update(named)
# Extract and replace names after "name is..."
name_is = df_archive_nonames.text.str.extract('name is\s([A-Z][a-z]+)[\s\.]').dropna().rename(columns={0:'name'})
df_archive.update(name_is)
# Replace name at index 369 with "Grace"
grace = pd.Series('Grace', name='name', index=[369])
df_archive.update(grace)
# Replace the rest of the wrong names with NaNs
for i in ind:
    if i not in np.concatenate((np.array(named.index), np.array(name_is.index), [369])):
        # set only the name column, so matching words elsewhere in the data frame are untouched
        df_archive.loc[i, 'name'] = np.nan
# ##### Test
# Check entries of non-names (most of them should be NaN)
df_archive.loc[ind]['name']
# Check entries of the names extracted after "named..."
df_archive.loc[np.array(named.index)]['name']
# Check entries of the names extracted after "name is..."
df_archive.loc[np.array(name_is.index)]['name']
# Check entry at index 369
df_archive.loc[grace.index]['name']
# ### Issues 6 & 15: Some values in `rating_numerator` and `rating_denominator` appear to be wrong & rating is separated into two columns but should be one variable in one column
# ##### Define
# 1. Calculate rating as one number and write it into one column
# 2. Set the values at indices [55, 313, 340, 342, 516, 763, 1068, 1165, 1202, 1662, 1712, 2335] to [13/10, 13/10, 9.75/10, NaN, NaN, 11.27/10, 14/10, 13/10, 11/10, NaN, 11.26/10, 9/10]
# 3. Drop unused columns
# ##### Code
# Define a series with indices and values as defined in the data assessment
new_rating_index = [55, 313, 340, 342, 516, 763, 1068, 1165, 1202, 1662, 1712, 2335]
new_rating = [13/10, 13/10, 9.75/10, np.nan, np.nan, 11.27/10, 14/10, 13/10, 11/10, np.nan, 11.26/10, 9/10]
new_rating = pd.Series(new_rating, name='rating', index=new_rating_index)
ind = []
for i in df_archive.index:
if i in new_rating_index: # check if index is still present or was deleted in previous cleaning actions
df_archive.loc[i, 'rating'] = new_rating[i] # change values according to new_rating
ind.append(i) # write changed indices into list
else:
df_archive.loc[i, 'rating'] = df_archive.rating_numerator[i] / df_archive.rating_denominator[i] # calculate rating
# Drop unused columns
df_archive = df_archive.drop(['rating_numerator','rating_denominator'], axis = 1)
# ##### Test
# Check that the ratings changed according to new_rating
df_archive.loc[ind,'rating']
# Check new column
df_archive.sample(10)
# ### Issue 10: Names of dog breeds are not formatted consistently
# ##### Define
# Replace the underscore(s) in `selected_breed` and capitalise the first letters of each word
# ##### Code
# Replace underscores and capitalise
df_images.selected_breed = df_images.selected_breed.str.replace('_',' ').str.title()
# ##### Test
# Check new names of dog breeds
df_images.selected_breed.sample(20)
# ### Issue 14: Dog stage variable is written as 4 columns but it should be one column in `df_archive`
# ##### Define
# Write dog stages into one column and delete the unused columns
# ##### Code
for line in df_archive.index: # go through each line
for val in df_archive.loc[line,['doggo','floofer','pupper','puppo']]: # go through each value in a line
if val in ['doggo','floofer','pupper','puppo']:
df_archive.loc[line,'dog_stage'] = val
# Drop unused columns
df_archive = df_archive.drop(['doggo','floofer','pupper','puppo'], axis = 1)
# ##### Test
# Check new column
df_archive.sample(10)
# ### Issue 16: `df_popularity` should not be a separate data frame but should be written into `df_archive`
# ##### Define
# Merge `df_popularity` and `df_archive` into one data frame
# ##### Code
# Merge the two data frames and change data types of new columns to 'int'
df_archive = df_archive.merge(df_popularity, how = 'left', on = 'tweet_id')
df_archive.retweet_count = df_archive.retweet_count.astype('Int64')
df_archive.favorite_count = df_archive.favorite_count.astype('Int64')
# ##### Test
# Check new columns and their data type
df_archive.info()
# ### Issue 17: `selected_breed` and `jpg_url` in `df_images` should be written into `df_archive`
# ##### Define
# Merge the `selected_breed` and `jpg_url` columns of `df_images` and `df_archive` into one data frame
# ##### Code
df_archive = df_archive.merge(df_images[['tweet_id','selected_breed','jpg_url']], how = 'left', on = 'tweet_id')
# ##### Test
# Check new column
df_archive.info()
# <a id='saving'></a>
# ## Saving
# Save the master data frame to a csv-file
df_archive.to_csv('twitter_archive_master.csv', index=False)
# <a id='analysis'></a>
# ## Analysis and visualisation
# There are three questions I would like to answer:
# 1. Which tweet was retweeted the most?
# 2. Which dog breed was retweeted the most?
# 3. Which dog breed received the most "likes"?
# Get image url of tweet with the most retweets and download file
mask = df_archive.retweet_count == df_archive.retweet_count.max()
ind_max_retweet = df_archive.text[mask].index[0]
url = df_archive.jpg_url.loc[ind_max_retweet]
r = requests.get(url)
# Write the downloaded object into a file
with open('max_retweeted_image.jpg', mode = 'wb') as file:
file.write(r.content)
# Get image and text of tweet with the most retweets
max_retweeted_image = mpimg.imread('max_retweeted_image.jpg')
max_retweeted_text = df_archive.text[mask].values[0]
max_retweeted_text = max_retweeted_text[:max_retweeted_text.rfind('https')] # strip off tweet url
# Plot
fig, ax = plt.subplots()
ax.imshow(max_retweeted_image);
props = dict(boxstyle='round', facecolor='skyblue', alpha=0.5)
ax.text(-800, 800, max_retweeted_text, fontsize=14, bbox=props);
plt.axis('off');
plt.title('This tweet was retweeted most often: {} times'.format(df_archive.retweet_count.max()));
# The above image shows the tweet text and image that was retweeted most often.
# Group by dog breed and get the top ten breeds by mean retweet count
top_retweeted_breeds = df_archive.groupby('selected_breed').retweet_count.mean().sort_values(ascending=False).index[:10]
# Get data frame with top ten dog breeds
df_top_breeds = df_archive[df_archive.selected_breed.isin(top_retweeted_breeds)]
# Plot the top ten dog breeds
sb.pointplot(x = df_top_breeds.retweet_count, y = df_top_breeds.selected_breed,
color = 'skyblue', order = top_retweeted_breeds, linestyles='');
plt.xticks(rotation=30);
plt.xlabel('Mean retweet count');
plt.ylabel('');
plt.title('Top ten dog breeds in terms of mean retweets');
# This image shows the top ten dog breeds in terms of mean retweets. Tweets with Bedlington Terrier dogs were retweeted most often. However, the spread of the retweet counts is quite high. Some individual tweets of the other top ten breeds were retweeted more often than the mean retweets of the Bedlington Terriers, for example the Standard Poodle or the English Springer.
# Group by dog breed and get the top ten breeds by mean "like" count
top_favorite_breeds = df_archive.groupby('selected_breed').favorite_count.mean().sort_values(ascending=False).index[:10]
# Get data frame with top ten dog breeds
df_top_breeds = df_archive[df_archive.selected_breed.isin(top_favorite_breeds)]
# Plot the top ten dog breeds
sb.pointplot(x = df_top_breeds.favorite_count, y = df_top_breeds.selected_breed,
color = 'skyblue', order = top_favorite_breeds, linestyles='');
plt.xlabel('Mean favorite count');
plt.ylabel('');
plt.title('Top ten dog breeds in terms of mean "likes"');
plt.xticks(rotation=30);
# The top ten dog breeds in terms of mean "likes" are different than the ones in terms of retweets. Even though the first place is also held by the Bedlington Terrier, it is very similar to the mean favourite count of the Saluki dogs.
| wrangle_act.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Installation
# Install the python package.
# !pip install doki-theme-jupyter
# You now have `dokithemejupyter` available in your command line interface!
#
# Let's install a theme, I'll choose my favorite girl.
# !dokithemejupyter --set-theme "Franxx: Zero Two Light"
# ## Matplotlib Integration
#
# The Doki Theme for Jupyter Notebook comes with a Matplotlib decorating API.
# This styles all of your plotting figures to match your current theme!
#
# You'll just need to add and run the code below to any notebook for it to take effect.
from dokithemejupyter import decorator
decorator.decorate_plotter()
# ### Decorated plotting examples
# +
# Demo code
# %matplotlib inline
from fastai.basics import *
n=100
x = torch.ones(n,4)
x[:,0].uniform_(-1.,1)
x[:5]
x = torch.index_select(x, 1, torch.tensor([0,0,0,2]))
x[:,0].apply_(lambda a: 3*a**3)
x[:,1].apply_(lambda a: -2*a**2)
x[:5]
a = tensor(3.,1.,2.,4.); a
y = x@a + torch.rand(n)
# -
plt.scatter(x[:,2], y);
a = tensor(-1.,1,-1,1)
y_hat = x@a
plt.scatter(x[:,2],y)
plt.scatter(x[:,2],y_hat);
plt.scatter(x[:,0],y_hat);
plt.figure(linewidth=2)
plt.plot([1,2,3])
# +
np.random.seed(19680801)
# example data
mu = 100 # mean of distribution
sigma = 15 # standard deviation of distribution
x = mu + sigma * np.random.randn(437)
num_bins = 50
fig, ax = plt.subplots()
# the histogram of the data
n, bins, patches = ax.hist(x, num_bins, density=True)
# add a 'best fit' line
y = ((1 / (np.sqrt(2 * np.pi) * sigma)) *
np.exp(-0.5 * (1 / sigma * (bins - mu))**2))
ax.plot(bins, y, '--')
ax.set_xlabel('Smarts')
ax.set_ylabel('Probability density')
ax.set_title(r'Histogram of IQ: $\mu=100$, $\sigma=15$')
# Tweak spacing to prevent clipping of ylabel
fig.tight_layout()
plt.show()
# -
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MetNet Pytorch
# This repo implements my own approximation of [MetNet](https://arxiv.org/abs/2003.12140).
#
# 
# Take a look at the notebooks, you can install this repo using:
#
# ```bash
# pip install -e .
# ```
from fastai.vision.all import *
from metnet_pytorch.model import DownSampler, MetNet
# Define the MetNet params:
past_instants = 6
horizon = 10
xtra_features = 5
image_encoder = DownSampler(3 + xtra_features + horizon)
metnet = MetNet(image_encoder, hidden_dim=128,
ks=3, n_layers=1, horizon=horizon,
head=create_head(128, 1), n_feats=xtra_features, debug=True)
metnet.eval();
imgs = torch.rand(1,past_instants,3,64,64)
timeseries = torch.rand(1,xtra_features,past_instants)
with torch.no_grad():
metnet(imgs, timeseries)
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Talking to Db2 with open RESTful APIs and micro-services
# The new Db2 Data Management Console is a free browser-based user interface included with Db2 for Linux, UNIX and Windows. It's more than a graphical user interface to monitor, manage and optimize Db2: it is a set of open RESTful APIs and micro-services for Db2.
#
# Everything in the User Interface is available through an open and fully documented RESTful Services API. You can also embed elements of the user interface into your own webpages, or Jupyter notebooks.
#
# This Jupyter Notebook contains examples of how to use the open RESTful APIs and the composable user interfaces that are available in the Db2 Data Management Console.
#
# You can find out more about the Db2 Console at www.ibm.biz/Db2Console.
# ### Where to find this sample online
# You can find a copy of this notebook at https://github.com/Db2-DTE-POC/db2dmc.
# ### First we will import a few helper classes
# We need to pull in a few standard Python libraries so that we can work with REST, JSON and communicate with the Db2 Console APIs.
# Import the class libraries
import requests
import ssl
import json
from pprint import pprint
from requests import Response
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
# ### The Db2 Class
# Next we will create a Db2 helper class that will encapsulate the Rest API calls that we can use to access the Db2 Console service.
#
# To access the service, we need to first authenticate with the service and create a reusable token that we can use for each call to the service. This ensures that we don't have to provide a userID and password each time we run a command. The token makes sure this is secure.
#
# Each request is constructed from several parts. First, the URL and the API version identify how to connect to the service. Second, the REST endpoint identifies the request and its options, for example '/metrics/applications/connections/current/list'. Some complex requests also include a JSON payload. For example, running SQL includes a JSON object that identifies the script, the statement delimiters, the maximum number of rows in the result set, as well as what to do if a statement fails.
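# As a sketch, such a JSON payload can be built as a plain Python dictionary before posting it. The field names below are illustrative assumptions, not the documented API schema.

```python
import json

# Hypothetical SQL-execution payload; field names are illustrative
# placeholders, not the documented Db2 Console API schema.
sql_payload = {
    "commands": "SELECT * FROM EMPLOYEE; SELECT * FROM DEPARTMENT;",
    "separator": ";",          # statement delimiter
    "limit": 100,              # maximum rows in the result set
    "stop_on_error": "no",     # what to do if a statement fails
}
print(json.dumps(sql_payload, indent=2))
```

# A dictionary like this would be passed as the `json` argument of the helper methods defined in the Db2 class below.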
#
# The full set of APIs is documented as part of the Db2 Data Management Console user interface.
# Run the Db2 Class library
# Used to construct and reuse an Authentication Key
# Used to construct RESTAPI URLs and JSON payloads
class Db2():
def __init__(self, url, verify = False, proxies=None, ):
self.url = url
self.proxies = proxies
self.verify = verify
def authenticate(self, userid, password, profile=""):
credentials = {'userid':userid, 'password':password}
r = requests.post(self.url+'/auth/tokens', verify=self.verify, json=credentials, proxies=self.proxies)
if (r.status_code == 200):
bearerToken = r.json()['token']
if profile == "":
self.headers = {'Authorization': 'Bearer'+ ' '+bearerToken}
else:
self.headers = {'Authorization': 'Bearer'+ ' '+bearerToken, 'X-DB-Profile': profile}
else:
print ('Unable to authenticate, no bearer token obtained')
def getRequest(self, api, json=None):
return requests.get(self.url+api, verify = self.verify, headers=self.headers, proxies = self.proxies, json=json)
def postRequest(self, api, json=None):
return requests.post(self.url+api, verify = self.verify, headers=self.headers, proxies = self.proxies, json=json)
def getStatusCode(self, response):
return (response.status_code)
def getJSON(self, response):
return (response.json())
# ## Establishing a Connection to the Console
# ### Example Connections
# To connect to the Db2 Data Management Console service you need to provide the URL, the service name (v4) and profile the console user name and password as well as the name of the connection profile used in the console to connect to the database you want to work with. For this lab we are assuming that the following values are used for the connection:
# * Userid: db2inst1
# * Password: <PASSWORD>
# * Connection: sample
#
# **Note:** If the Db2 Data Management Console has not completed initialization, the connection below will fail. Wait for a few moments and then try it again.
# +
# Connect to the Db2 Data Management Console service
Console = 'http://localhost:11080'
profile = 'SAMPLE'
user = 'DB2INST1'
password = '<PASSWORD>'
# Set up the required connection
profileURL = "?profile="+profile
databaseAPI = Db2(Console+'/dbapi/v4')
databaseAPI.authenticate(user, password, profile)
database = Console
# -
# ### Confirm the connection
# To confirm that your connection is working, you can check your console connection to get the details of the specific database connection you are working with. Since your console user id and password may be limited as to which databases they can access, you need to provide the connection profile name to drill down into any detailed information for the database.
# Take a look at the JSON that is returned by the call in the cell below. You can see the name of the connection profile, the database name, the database instance the database belongs to, the version, release and edition of Db2 as well as the operating system it is running on.
# List Monitoring Profile
r = databaseAPI.getRequest('/dbprofiles/'+profile)
json = databaseAPI.getJSON(r)
print(json)
# You can also check the status of the monitoring service. This call takes a bit longer since it runs a quick diagnostic check on the Db2 Data Management Console monitoring service. You should see that both the database and authentication services are online.
# Get Monitor Status
r = databaseAPI.getRequest('/monitor')
json = databaseAPI.getJSON(r)
print(json)
# ## Object Exploration
# ### List the Available Schemas in the Database
# You can call the Db2 Data Management Console micro service to embed an active console component directly into your notebook through an IFrame. The first time you access this you will have to log in, just like any other time you use the console for the first time. If you want to see all the schemas, including the catalog schemas, select the "Show system schemas" toggle at the right side of the panel.
# * Userid: db2inst1
# * Password: <PASSWORD>
#
# **Note:** You may need to logon to the console for the frame to be displayed.
#
# When the interface appears:
#
# Click on **Show system schemas** at the right side of the screen. This displays all the schemas in the Db2 catalog as well as user schemas.
from IPython.display import IFrame
IFrame(database+'/console/?mode=compact#explore/schema'+profileURL, width=1400, height=500)
# You can get the same list through the REST service call. In this example the service call text was defined in the Db2 class at the start of the notebook. By default it includes both user and catalog schemas.
#
# If the call is successful it will return a 200 status code. The API call returns a JSON structure that we turn into a Pandas DataFrame using the normalize function. You can then list the columns of data available in the Data Frame and display the first 10 rows in the data frame.
#
# Many of the examples below list the columns available in the dataframe to make it easier for you to adapt the examples to your own needs.
# For this next example we need to import the Pandas libraries to use DataFrames
import pandas as pd
from pandas.io.json import json_normalize
# +
r = databaseAPI.getRequest('/schemas')
if (databaseAPI.getStatusCode(r)==200):
    json = databaseAPI.getJSON(r)
    df = pd.DataFrame(json_normalize(json['resources']))
    print(', '.join(list(df)))
    display(df[['name']].head(10))
else:
    print(databaseAPI.getStatusCode(r))
# -
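# The JSON-to-DataFrame step above can be illustrated with a minimal, self-contained sketch. The 'resources' key mirrors the shape of the console API responses, but the schema names here are made up. Note that recent pandas versions expose the normalize function directly as `pd.json_normalize`.

```python
import pandas as pd

# A made-up payload shaped like the console API responses
sample_json = {'resources': [{'name': 'DB2INST1'}, {'name': 'SYSIBM'}, {'name': 'SYSCAT'}]}

# Flatten the list of records under 'resources' into a DataFrame
df_sample = pd.json_normalize(sample_json['resources'])
print(', '.join(list(df_sample)))           # available columns
print(df_sample['name'].head(10).tolist())  # first rows of the 'name' column
```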
# ### Object Search
# You can search the objects in your database through the search objects API. This API requires a JSON payload to define the search criteria, which can be complex. In this example we are looking for views with "table" in their name. It will search through both user and catalog views.
# +
# Search for tables across all schemas that match simple search criteria
# Display the first 100
# Switch between searching tables or views
obj_type = 'view'
# obj_type = 'table'
search_text = 'TABLE'
rows_return=10
show_systems='true'
is_ascend='true'
json = {"search_name":search_text,"rows_return":rows_return,"show_systems":show_systems,"obj_type":obj_type,"filters_match":"ALL","filters":[]}
r = databaseAPI.postRequest('/admin/'+str(obj_type)+'s',json);
if (databaseAPI.getStatusCode(r)==200):
    json = databaseAPI.getJSON(r)
    df = pd.DataFrame(json_normalize(json))
    print('Columns:')
    print(', '.join(list(df)))
    display(df[[obj_type+'_name']].head(100))
else:
    print("RC: "+str(databaseAPI.getStatusCode(r)))
# -
# This example returns all the tables in a single schema.
# +
# Find all the tables in the SYSIBM schema and display the first 10
schema = 'SYSIBM'
r = databaseAPI.getRequest('/schemas/'+str(schema)+'/tables');
if (databaseAPI.getStatusCode(r)==200):
    json = databaseAPI.getJSON(r)
    df = pd.DataFrame(json_normalize(json['resources']))
    print(', '.join(list(df)))
    display(df[['schema','name']].head(10))
else:
    print(databaseAPI.getStatusCode(r))
# -
# ### Accessing Key Performance Metrics
# You can access key high level performance metrics by directly including the monitoring summary page in an IFrame or calling the available API. To see the time series history of the number of rows read in your system over the last day, run the statement below. Then scroll to the right side and find the Database Throughput Widget. Then select Rows Read and Last 24 hours.
IFrame(database+'/console/?mode=compact#monitor/summary'+profileURL, width=1400, height=500)
# To access the same data directly through an API you can use the getRowsRead function as defined in the Db2 class at the start of the notebook. To extract the timeseries data from the JSON returned from the API call you need to access the 'timeseries' part of the full JSON data set.
#
# The example below retrieves the last hour of data, converts it from JSON to a DataFrame and then displays and graphs the data. Notice that the timeseries data is returned in epoch format, that is, the number of milliseconds since January 1st, 1970. The epochtotimeseries routine defined below converts that to human-readable timestamps.
# ### Time Series Data
# Since Db2 stores time series data as epoch time we need to do some simple calculations to determine current time as well as the duration of a week or a day.
# +
# Retrieve the number of rows read over the last hour
import time
endTime = int(time.time())*1000
startTime = endTime-(60*60*1000)
# Return the rows read rate over the last hour
r = databaseAPI.getRequest('/metrics/rows_read?start='+str(startTime)+'&end='+str(endTime));
if (databaseAPI.getStatusCode(r)==200):
    json = databaseAPI.getJSON(r)
    if json['count'] > 0:
        df = pd.DataFrame(json_normalize(json['timeseries']))  # extract just the timeseries data
        print('Available Columns')
        print(', '.join(list(df)))
    else:
        print('No data returned')
else:
    print(databaseAPI.getStatusCode(r))
# -
# ### Epoch Time Conversion
# Db2 returns time series data in Unix epoch time. The first cell creates a routine to convert epoch values to human-readable time series format. The next cell applies the function to every value in the timestamp column and displays the last 20 values.
# Setup data frame set calculation functions
def epochtotimeseries(epoch):
    return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(epoch/1000))
# Convert from EPOCH to timeseries data
# Display the last 20 datapoints
df['timestamp'] = df['timestamp'].apply(epochtotimeseries)
display(df[['timestamp','rows_read_per_min']].tail(20))
# Finally we can draw a graph of the last ten values.
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
df[['timestamp','rows_read_per_min']].tail(10).plot.line(x='timestamp',y='rows_read_per_min', figsize=(20,4))
plt.show()
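# Going the other way, from a human-readable timestamp back to epoch milliseconds, is occasionally handy when building API queries. The sketch below repeats epochtotimeseries so the cell is self-contained; time.mktime inverts time.localtime for the same locale.

```python
import time

def epochtotimeseries(epoch):
    # epoch milliseconds -> 'YYYY-MM-DD HH:MM:SS' in local time
    return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(epoch / 1000))

def timeseriestoepoch(ts):
    # 'YYYY-MM-DD HH:MM:SS' in local time -> epoch milliseconds
    return int(time.mktime(time.strptime(ts, '%Y-%m-%d %H:%M:%S'))) * 1000

# Round trip: the two conversions invert each other
print(epochtotimeseries(timeseriestoepoch('2021-01-01 12:34:56')))
```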
# ### Storage Usage
# You can access the storage report page directly by calling it into an IFrame or you can access the data from an API. In the report below you can select the timeframe for storage usage, group by table or schema, select the object you want to analyze and then select View Details from the Actions column.
IFrame(database+'/console/?mode=compact#monitor/storage'+profileURL, width=1400, height=480)
# You can also list storage by schema. The following example retrieves the current level of storage usage.
# +
# List storage used by schema
# Display the top ten schemas
r = databaseAPI.getRequest('/metrics/storage/schemas?end=0&include_sys=true&limit=1000&offset=0&start=0')
if (databaseAPI.getStatusCode(r)==200):
    json = databaseAPI.getJSON(r)
    if json['count'] > 0:
        df = pd.DataFrame(json_normalize(json['resources']))
        print(', '.join(list(df)))
        df['space_mb'] = df['data_physical_size_kb'].apply(lambda x: x / 1024)
        df = df.sort_values(by='data_physical_size_kb', ascending=False)
        display(df[['tabschema','space_mb']].head(10))
    else:
        print('No data returned')
else:
    print("RC: "+str(databaseAPI.getStatusCode(r)))
# -
# ### Next Steps
# You can find a copy of this notebook at https://github.com/Db2-DTE-POC/db2dmc. This github library includes several other notebooks that cover more advanced examples of how to use Db2 and Jupyter together through open APIs.
#
# You can also access a free hands-on interactive lab that uses all of the notebooks at: https://www.ibm.com/demos/collection/IBM-Db2-Data-Management-Console/. After you sign up for the lab you will get access to a live cloud based system running Db2, the Db2 Console as well as extensive Jupyter Notebooks and Python to help you learn more.
# #### Credits: IBM 2019, <NAME> [<EMAIL>]
| Console_API_Services_Blog.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1806554 <NAME> Python Assignment 4
# # 1
# 1 method
import matplotlib.pyplot as plt
import cv2
i = cv2.imread('demo.png')
plt.imshow(i)
# 2 method
out = cv2.imshow('Image Output',i)
cv2.waitKey(0)
cv2.destroyAllWindows()
# 3 method
from PIL import Image
im = Image.open(r"demo.png")
im.show()
# # 2
i.shape
# # 3
gi = cv2.cvtColor(i, cv2.COLOR_BGR2GRAY)
plt.imshow(gi)
# # 4
import imutils as im
ih = im.rotate(i,90)
plt.imshow(ih)
# vertical flip
iv = cv2.flip(i,0)
plt.imshow(iv)
# horizontal flip
iv = cv2.flip(i,1)
plt.imshow(iv)
# # 5
#
# resize
new_size = 100
ds = (i.shape[1], new_size)  # cv2.resize takes dsize as (width, height): keep the width, set height to 100
output = cv2.resize(i, ds, interpolation=cv2.INTER_AREA)
plt.imshow(output)
# # 6
# new window
out = cv2.imshow('Image Output',i)
cv2.waitKey(0)
cv2.destroyAllWindows()
# # 7
# show in jupyter
plt.imshow(i)
# # 8
# concat side by side horizontal
im_h = cv2.hconcat([i,i])
cv2.imwrite('opencv_hconcat.jpg', im_h)
j = cv2.imread('opencv_hconcat.jpg')
plt.imshow(j)
# # 9
# crop image
#i.shape
val = input()
(hs,he,ws,we) = map(int,val.strip().split(' '))
cropped = i[hs:he, ws:we]
plt.imshow(cropped)
# # 10
# Binarized Image
import numpy as np
img = cv2.imread('demo.png')
height,width,channels = img.shape
img_binary = np.zeros((height,width,1))
img_grayscale = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
(thresh, img_binary) = cv2.threshold(img_grayscale, 128, 255, cv2.THRESH_BINARY)
cv2.imwrite('image_binary.jpg',img_binary)
cv2.imshow('image',img_binary)
cv2.waitKey(0)
cv2.destroyAllWindows()
# # 11
# binary image
matz = np.random.randint(2,size = (5,5))
print(matz)
plt.imshow(matz, cmap="gray")
plt.show()
# # 12
# RGB Channels
rgbImage = cv2.imread('demo.png')
# OpenCV loads images in BGR order: channel 0 = blue, 1 = green, 2 = red
def channelShift(i):
    return rgbImage[:,:,i]

x = (0, 1, 2)
b, g, r = map(channelShift, x)
plt.imshow(r)
plt.imshow(g)
plt.imshow(b)
# # 13
# Text on Image
img = cv2.imread('demo.png')
font = cv2.FONT_HERSHEY_DUPLEX
org = (25,35)
fontScale = 1
color = (255,0,0)
thickness = 2
sentence = input()
image = cv2.putText(img,sentence,org,font,fontScale,color,thickness,cv2.LINE_AA)
cv2.imwrite('opencv_T.jpg',image)
j = cv2.imread('opencv_T.jpg')
plt.imshow(j)
# # 14
# +
# images in folder
import cv2
import os
def printImageNames(folder):
    images = []
    for filename in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, filename))
        if img is not None:
            images.append(filename)
    images.sort()
    return images

folder = "C:/Users/KIIT/Documents/College-Stuff/T&T/python scripts/"
printImageNames(folder)
# -
# # 15
# count Images
def cntImages(path):
    cnt = 0
    for filename in os.listdir(path):
        img = cv2.imread(os.path.join(path, filename))
        if img is not None:
            cnt += 1
    return cnt

path = "C:/Users/KIIT/Documents/College-Stuff/T&T/python scripts/"
cntImages(path)
# # 16
# # copy and Name change
import shutil
import os
import glob

src_dir = "C:/Users/KIIT/Documents/College-Stuff/T&T/python scripts/"
dst_dir = "C:/Users/KIIT/Documents/College-Stuff/T&T/python scripts/work/"
for filename in glob.glob(os.path.join(src_dir, '*.jpg')):
    shutil.copy(filename, dst_dir)
for cnt, filename in enumerate(os.listdir("work")):
    new_name = str(filename)
    dst = new_name[:-4] + "_" + str(cnt) + ".jpg"
    src = 'work/' + filename
    dst = 'work/' + dst
    os.rename(src, dst)
# # 17
# Specifications
def Info(folder):
    for filename in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, filename))
        if img is not None:
            with open("images_info.txt", "a") as f:
                f.write("filename : "+str(filename))
                f.write(" Info -> \n")
                f.write("height : "+str(img.shape[0])+" "
                        "width : "+str(img.shape[1])+" "
                        "channels : "+str(img.shape[2]))
                f.write("\n")
    with open("images_info.txt") as f:
        for line in f:
            print(line, end="")

folder = "C:/Users/KIIT/Documents/College-Stuff/T&T/python scripts/"
Info(folder)
| T&T/python scripts/4_1806554.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dummy Classifier - Useful
# > How to have a first idea on the quality of your model
#
# - toc: true
# - badges: false
# - comments: true
# - author: <NAME>
# - categories: [sklearn]
# The [Dummy classifier](https://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyClassifier.html#sklearn.dummy.DummyClassifier.predict) will give us an idea of the "minimum" quality we can achieve.
#
# It returns either a fixed value or the most frequent value of the training sample.
#
# The quality of its score is used as a floor for future estimates. The objective is to do better, ideally much better, than this deliberately naive baseline!
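# Before working with the real data, here is a minimal, self-contained sketch on synthetic data (not the penguins dataset used below): a real model should clear the floor set by the dummy baseline.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a linearly separable target

# Floor: always predict the majority class
floor = DummyClassifier(strategy='most_frequent').fit(X, y).score(X, y)
# A real model on the same data
real = LogisticRegression().fit(X, y).score(X, y)
print(f"dummy floor: {floor:.2f}, logistic regression: {real:.2f}")
```

# Any classifier that cannot beat the floor is not extracting information from the features.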
# # Preparation
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
myDataFrame = pd.read_csv("../../scikit-learn-mooc/datasets/penguins_classification.csv")
# ## The set
target_column = 'Species'
target = myDataFrame[target_column]
target.value_counts()
# ## Here we have the weight of each class, so also the minimum quality of the estimate
target.value_counts(normalize=True)
# ## Continuation of preparation
data = myDataFrame.drop(columns=target_column)
data.columns
numerical_columns = ['Culmen Length (mm)', 'Culmen Depth (mm)']
data_numeric = data[numerical_columns]
data_train, data_test, target_train, target_test = train_test_split(
data_numeric,
target,
#random_state=42,
test_size=0.25)
# # The dummy model
# ## Prior (default) same as most frequent
# The value returned is the most frequent value in the training set
#
# model = DummyClassifier(strategy='prior')
model = DummyClassifier()
model.fit(data_train, target_train);
a = model.predict(data_test)
n = a.size
unique, counts = np.unique(a, return_counts=True)
dict(zip(unique, counts/n))
accuracy = model.score(data_test, target_test)
print(f"Accuracy of dummy classifier (prior): {accuracy:.3f}")
# ## Stratified
# The value returned is drawn randomly according to the class distribution of the training set
#
# model = DummyClassifier(strategy='stratified', random_state= oneInt)
model = DummyClassifier(strategy='stratified')
model.fit(data_train, target_train);
a = model.predict(data_test)
n = a.size
unique, counts = np.unique(a, return_counts=True)
dict(zip(unique, counts/n))
accuracy = model.score(data_test, target_test)
print(f"Accuracy of dummy classifier (stratified): {accuracy:.3f}")
# ## Uniform
# The value returned is generated uniformly at random
#
# model = DummyClassifier(strategy='uniform', random_state= oneInt)
model = DummyClassifier(strategy='uniform')
model.fit(data_train, target_train);
a = model.predict(data_test)
n = a.size
unique, counts = np.unique(a, return_counts=True)
dict(zip(unique, counts/n))
accuracy = model.score(data_test, target_test)
print(f"Accuracy of dummy classifier (uniform): {accuracy:.3f}")
# ## Constant
# Always predicts a constant label that is provided. This is useful for metrics that evaluate a non-majority class
#
# model = DummyClassifier(strategy='constant', constant="oneConstant")
model = DummyClassifier(strategy='constant', constant="Chinstrap")
model.fit(data_train, target_train);
a = model.predict(data_test)
n = a.size
unique, counts = np.unique(a, return_counts=True)
dict(zip(unique, counts/n))
accuracy = model.score(data_test, target_test)
print(f"Accuracy of dummy classifier (constant): {accuracy:.3f}")
# # Conclusion
#
# The best dummy estimate comes from the 'prior' (most frequent class) strategy, so any model that beats it is adding value.
#
# We thus have a floor value; the further a model scores above it, the better the estimate.
| _notebooks/2021-05-28-DummyClassifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Load the "autoreload" extension
# %load_ext autoreload
# %autoreload 2
import os
import sys
# add the 'src' directory as one where we can import modules
src_dir = os.path.join(os.getcwd(), os.pardir, 'src')
sys.path.append(src_dir)
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import numpy as np
import math
import cv2
from data import voc_data_helpers
from data.voc_data_helpers import get_img_sets, extract_img_data
from minibatch import resize, get_rois, nms, get_ious, calc_iou, rpn_y_true
# %matplotlib inline
import keras
from keras.layers import Input
from keras.layers.convolutional import Conv2D
from keras.models import Model, model_from_json
from keras.optimizers import SGD
from vgg16 import vgg16_rpn, vgg16_classifier, vgg16_base
from vgg16_dets import get_dets
from custom_layers import RoiResizeConv
from data import voc_data_helpers
from map_util import get_map
from loss_functions import cls_loss_rpn, bbreg_loss_rpn
import train_rpn
NUM_CLASSES_VOC = 20
DEFAULT_NUM_ITERATIONS = 10
DEFAULT_LEARN_RATE = 1e-3
DEFAULT_MOMENTUM = 0.9
VOC_PATH = 'a'
NUM_ROIS = 64
print(sys.version)
# -
class_mapping = voc_data_helpers.get_class_mapping(VOC_PATH)
# +
base_model = vgg16_base(NUM_CLASSES_VOC + 1)
model_rpn = vgg16_rpn(base_model, NUM_CLASSES_VOC, include_conv = True)
model_rpn.load_weights('../models/rpn_weights_tmp.h5')
frcnn_json_file = open('../models/frcnn_module.json', 'r')
frcnn_loaded_model_json = frcnn_json_file.read()
frcnn_json_file.close()
model_frcnn = model_from_json(frcnn_loaded_model_json,
custom_objects={'RoiResizeConv': RoiResizeConv})
model_frcnn.load_weights('../models/det_weights_step4.h5')
# -
#img, img_metadata = voc_data_helpers.extract_img_data(VOC_PATH, '001915')
img, img_metadata = voc_data_helpers.extract_img_data(VOC_PATH, '001915')
print(img_metadata)
width, height = img_metadata['width'], img_metadata['height']
new_width, new_height = resize(width, height, 600)
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
resized_img = cv2.resize(cv_rgb, (new_width, new_height), interpolation=cv2.INTER_CUBIC)
plt.imshow(resized_img)
batched_img = np.expand_dims(resized_img, axis=0)
dets = get_dets(model_rpn, model_frcnn, img, img_metadata, class_mapping)
fig,ax = plt.subplots(1)
ax.imshow(cv_rgb)
for det in dets:
    x1, y1, x2, y2 = det['x1'], det['y1'], det['x2'], det['y2']
    class_name = det['cls_name']
    prob = det['prob']
    rect = patches.Rectangle((x1, y1), (x2 - x1), (y2 - y1), linewidth=1, edgecolor='r', facecolor='none')
    ax.add_patch(rect)
    label = "{}: {:.3f}".format(class_name, prob)
    ax.annotate(label, xy=(x1, y1), color='r', fontsize=20)
train_set, val_set, trainval_set = get_img_sets(VOC_PATH)
all_img_data = [extract_img_data(VOC_PATH, path) for path in val_set]
few_imgs = all_img_data[12:13]
get_map(model_rpn, model_frcnn, all_img_data, class_mapping, num_rois=64)
| notebooks/Test VGG16 VOC.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Values and Variables
# + [markdown] slideshow={"slide_type": "-"} tags=["remove-cell"]
# **CS1302 Introduction to Computer Programming**
# ___
# + slideshow={"slide_type": "fragment"} tags=["remove-cell"]
# %reload_ext mytutor
# + [markdown] slideshow={"slide_type": "slide"}
# ## Integers
# + [markdown] slideshow={"slide_type": "subslide"}
# **How to enter an [integer](https://docs.python.org/3/reference/lexical_analysis.html#integer-literals) in a program?**
# + slideshow={"slide_type": "fragment"}
15 # an integer in decimal
# + slideshow={"slide_type": "-"}
0b1111 # a binary number
# + slideshow={"slide_type": "-"}
0xF # hexadecimal (base 16) with possible digits 0, 1,2,3,4,5,6,7,8,9,A,B,C,D,E,F
# + [markdown] slideshow={"slide_type": "subslide"}
# **Why all outputs are the same?**
# + [markdown] slideshow={"slide_type": "fragment"}
# - What you have entered are *integer literals*, which are integers written out literally.
# - All the literals have the same integer value in decimal.
# - By default, if the last line of a code cell has a value, the jupyter notebook (*IPython*) will store and display the value as an output.
# + slideshow={"slide_type": "fragment"}
3 # not the output of this cell
4 + 5 + 6
# + [markdown] slideshow={"slide_type": "fragment"}
# - The last line above also has the same value, `15`.
# - It is an *expression* (but not a literal) that *evaluates* to the integer value.
# + [markdown] slideshow={"slide_type": "subslide"}
# **Exercise** Enter an expression that evaluates to an integer value, as big as possible.
# (You may need to interrupt the kernel if the expression takes too long to evaluate.)
# + nbgrader={"grade": true, "grade_id": "big-int", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false} slideshow={"slide_type": "-"} tags=["remove-output"]
# There is no maximum for an integer for Python3.
# See https://docs.python.org/3.1/whatsnew/3.0.html#integers
11 ** 100000
# + [markdown] slideshow={"slide_type": "slide"}
# ## Strings
# + [markdown] slideshow={"slide_type": "subslide"}
# **How to enter a [string](https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals) in a program?**
# + slideshow={"slide_type": "fragment"}
'\U0001f600: I am a string.' # a sequence of characters delimited by single quotes.
# + slideshow={"slide_type": "fragment"}
"\N{grinning face}: I am a string." # delimited by double quotes.
# + slideshow={"slide_type": "fragment"}
"""\N{grinning face}: I am a string.""" # delimited by triple single/double quotes.
# + [markdown] slideshow={"slide_type": "fragment"}
# - `\` is called the *escape symbol*.
# - `\U0001f600` and `\N{grinning face}` are *escape sequences*.
# - These sequences represent the same grinning face emoji by its Unicode in hexadecimal and its name.
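# For instance, all three notations below denote the same one-character string, which can be checked directly:

```python
# The two escape sequences and chr() all produce the same character
print('\U0001f600' == '\N{grinning face}' == chr(0x1F600))  # True
print(hex(ord('\U0001f600')))  # 0x1f600
```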
# + [markdown] slideshow={"slide_type": "subslide"}
# **Why use different quotes?**
# + slideshow={"slide_type": "fragment"}
print('I\'m line #1.\nI\'m line #2.') # \n is a control code for line feed
print("I'm line #3.\nI'm line #4.") # no need to escape single quote.
print('''I'm line #5.
I'm line #6.''') # multi-line string
# + [markdown] slideshow={"slide_type": "fragment"}
# Note that:
# - The escape sequence `\n` does not represent any symbol.
# - It is a *control code* that creates a new line when printing the string.
# - Another common control code is `\t` for tab.
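# For instance, `\t` can align simple columns:

```python
# '\t' inserts a horizontal tab between the two fields
print('name\tvalue')
print('x\t15')
```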
# + [markdown] slideshow={"slide_type": "fragment"}
# Using double quotes, we need not escape the single quote in `I'm`.
# + [markdown] slideshow={"slide_type": "fragment"}
# Triple quotes delimit a multi-line string, so there is no need to use `\n`.
# (You can copy and paste a multi-line string from elsewhere.)
# + [markdown] slideshow={"slide_type": "subslide"}
# In programming, there are often many ways to do the same thing.
# The following is a one-line code ([one-liner](https://en.wikipedia.org/wiki/One-liner_program)) that prints multiple lines of strings without using `\n`:
# + slideshow={"slide_type": "fragment"}
print("I'm line #1", "I'm line #2", "I'm line #3", sep='\n') # one liner
# + [markdown] slideshow={"slide_type": "fragment"}
# - `sep='\n'` is a *keyword argument* that specifies the separator of the list of strings.
# - By default, `sep=' '`, a single space character.
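# `print` also accepts an `end` keyword argument (default `'\n'`) that controls what is printed after the last value:

```python
print("I'm line #1", end='; ')  # suppress the default trailing newline
print("I'm line #2")            # both pieces appear on the same line
```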
# + [markdown] slideshow={"slide_type": "subslide"}
# In IPython, we can get the *docstring* (documentation) of a function conveniently using the symbol `?`.
# + slideshow={"slide_type": "-"}
# ?print
# + slideshow={"slide_type": "-"}
# print?
# + [markdown] slideshow={"slide_type": "subslide"}
# **Exercise** Print a cool multi-line string below.
# + nbgrader={"grade": true, "grade_id": "multi-line", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false} slideshow={"slide_type": "-"}
print('''
(ง •̀_•́)ง
╰(●’◡’●)╮
(..•˘_˘•..)
(づ ̄ 3 ̄)づ
''')
# See also https://github.com/glamp/bashplotlib
# Star Wars via Telnet http://asciimation.co.nz/
# + [markdown] slideshow={"slide_type": "slide"}
# ## Variables and Assignment
# + [markdown] slideshow={"slide_type": "subslide"}
# It is useful to store a value and retrieve it later.
# To do so, we assign the value to a variable:
# + slideshow={"slide_type": "fragment"}
x = 15
x # output the value of x
# + [markdown] slideshow={"slide_type": "subslide"}
# **Is assignment the same as equality?**
# + [markdown] slideshow={"slide_type": "fragment"}
# No because:
# - you cannot write `15 = x`, but
# - you can write `x = x + 1`, which increases the value of `x` by `1`.
# + [markdown] slideshow={"slide_type": "fragment"}
# **Exercise** Try out the above code yourself.
# + nbgrader={"grade": true, "grade_id": "assign-vs-eq", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false} slideshow={"slide_type": "-"}
x = x + 1
x
# + [markdown] slideshow={"slide_type": "subslide"}
# Let's see the effect of assignment step-by-step:
# 1. Run the following cell.
# 1. Click `Next >` to see the next step of the execution.
# + slideshow={"slide_type": "-"}
# %%mytutor -h 200
x = 15
x = x + 1
# + [markdown] slideshow={"slide_type": "subslide"}
# The following *tuple assignment* syntax can assign multiple variables in one line.
# + slideshow={"slide_type": "-"}
# %%mytutor -h 200
x, y, z = '15', '30', 15
# + [markdown] slideshow={"slide_type": "fragment"}
# One can also use *chained assignment* to set different variables to the same value.
# + slideshow={"slide_type": "-"}
# %%mytutor -h 250
x = y = z = 0
# + [markdown] slideshow={"slide_type": "subslide"}
# Variables can be deleted using `del`. Accessing a variable before assignment raises a `NameError`.
# + slideshow={"slide_type": "-"}
del x, y
x, y
# + [markdown] slideshow={"slide_type": "slide"}
# ## Identifiers
# + [markdown] slideshow={"slide_type": "fragment"}
# *Identifiers* such as variable names are case sensitive and follow certain rules.
# + [markdown] slideshow={"slide_type": "subslide"}
# **What is the syntax for variable names?**
# + [markdown] slideshow={"slide_type": "fragment"}
# 1. Must start with a letter or `_` (an underscore) followed by letters, digits, or `_`.
# 1. Must not be a [keyword](https://docs.python.org/3.7/reference/lexical_analysis.html#keywords) (identifier reserved by Python):
# + [markdown] slideshow={"slide_type": "-"}
# <pre>False await else import pass
# None break except in raise
# True class finally is return
# and continue for lambda try
# as def from nonlocal while
# assert del global not with
# async elif if or yield</pre>
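# The table above can be generated from the standard library, so there is no need to memorize it:

```python
import keyword

print(len(keyword.kwlist))        # number of reserved keywords in this Python version
print(keyword.iskeyword('del'))   # True: 'del' is reserved
print(keyword.iskeyword('Del'))   # False: keywords are case sensitive
```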
# + [markdown] slideshow={"slide_type": "fragment"}
# **Exercise** Evaluate the following cell and check if any of the rules above is violated.
# + code_folding=[] slideshow={"slide_type": "-"}
from ipywidgets import interact

@interact
def identifier_syntax(assignment=['a-number = 15',
                                  'a_number = 15',
                                  '15 = 15',
                                  '_15 = 15',
                                  'del = 15',
                                  'Del = 15',
                                  'type = print',
                                  'print = type',
                                  'input = print']):
    exec(assignment)
    print('Ok.')
# + [markdown] nbgrader={"grade": true, "grade_id": "invalid-identifiers", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false} slideshow={"slide_type": "-"}
# 1. `a-number = 15` violates Rule 1 because `-` is not allowed. `-` is interpreted as an operator.
# 1. `15 = 15` violates Rule 1 because `15` starts with a digit instead of letter or _.
# 1. `del = 15` violates Rule 2 because `del` is a keyword.
# + [markdown] slideshow={"slide_type": "fragment"}
# What can we learn from the above examples?
# + [markdown] slideshow={"slide_type": "fragment"}
# - `del` is a keyword and `Del` is not because identifiers are case sensitive.
# - Function/method/type names `print`/`input`/`type` are not keywords and can be reassigned.
#   This can be useful if you want to modify the default implementations without changing their source code.
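# The following sketch shows that reassigning a built-in name only shadows it; the original remains available through the `builtins` module and can be restored:

```python
import builtins

print = builtins.type      # shadow the built-in print with type
check = print(15)          # 'print' now returns the type of its argument
print = builtins.print     # restore the original built-in
print(check)               # <class 'int'>
```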
# + [markdown] slideshow={"slide_type": "fragment"}
# To help make code more readable, additional style guides such as [PEP 8](https://www.python.org/dev/peps/pep-0008/#function-and-variable-names) are available:
# + [markdown] slideshow={"slide_type": "-"}
# - Function names should be lowercase, with words separated by underscores as necessary to improve readability.
# - Variable names follow the same convention as function names.
# + [markdown] slideshow={"slide_type": "subslide"}
# ## User Input
# + [markdown] slideshow={"slide_type": "fragment"}
# **How to let the user input a value at *runtime*,
# i.e., as the program executes?**
# + [markdown] slideshow={"slide_type": "fragment"}
# We can use the method `input`:
# - There is no need to delimit the input string by quotation marks.
# - Simply press `enter` after typing a string.
# + slideshow={"slide_type": "-"} tags=["remove-output"]
print('Your name is', input('Please input your name: '))
# + [markdown] slideshow={"slide_type": "fragment"}
# - The `input` method prints its argument, if any, as a [prompt](https://en.wikipedia.org/wiki/Command-line_interface#Command_prompt).
# - It takes user's input and *return* it as its value. `print` takes in that value and prints it.
# + [markdown] slideshow={"slide_type": "fragment"}
# **Exercise** Explain whether the following code prints `'My name is Python'`. Does `print` return a value?
# + slideshow={"slide_type": "-"}
print('My name is', print('Python'))
# + [markdown] nbgrader={"grade": true, "grade_id": "print-returns-none", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false} slideshow={"slide_type": "-"}
# - Unlike `input`, the function `print` does not return the string it is trying to print. Printing a string is, therefore, different from returning a string.
# - `print` actually returns a `None` object that gets printed as `None`.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Type Conversion
# + [markdown] slideshow={"slide_type": "subslide"}
# The following program tries to compute the sum of two numbers from user inputs:
# + slideshow={"slide_type": "-"} tags=["remove-output"]
num1 = input('Please input an integer: ')
num2 = input('Please input another integer: ')
print(num1, '+', num2, 'is equal to', num1 + num2)
# + [markdown] slideshow={"slide_type": "fragment"}
# **Exercise** There is a [bug](https://en.wikipedia.org/wiki/Software_bug) in the above code. Can you locate the error?
# + [markdown] nbgrader={"grade": true, "grade_id": "cell-d1d22bc89eb9f7b6", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false} slideshow={"slide_type": "-"}
# The two numbers are concatenated instead of added together.
# + [markdown] slideshow={"slide_type": "subslide"}
# `input` *returns* user input as a string.
# E.g., if the user enters `12`, the input is
# - not treated as the integer twelve, but rather
# - treated as a string containing two characters, one followed by two.
# + [markdown] slideshow={"slide_type": "fragment"}
# To see this, we can use `type` to return the data type of an expression.
# + slideshow={"slide_type": "-"} tags=["remove-output"]
num1 = input('Please input an integer: ')
print('Your input is', num1, 'with type', type(num1))
# + [markdown] slideshow={"slide_type": "fragment"}
# **Exercise** `type` applies to any expressions. Try it out below on `15`, `print`, `print()`, `input`, and even `type` itself and `type(type)`.
# + nbgrader={"grade": true, "grade_id": "type", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false} slideshow={"slide_type": "-"}
type(15), type(print), type(print()), type(input), type(type), type(type(type))
# + [markdown] slideshow={"slide_type": "fragment"}
# **So what happens when we add strings together?**
# + slideshow={"slide_type": "-"}
'4' + '5' + '6'
# + [markdown] slideshow={"slide_type": "fragment"}
# **How to fix the bug then?**
# + [markdown] slideshow={"slide_type": "fragment"}
# We can convert a string to an integer using `int`.
# + slideshow={"slide_type": "-"}
int('4') + int('5') + int('6')
# + [markdown] slideshow={"slide_type": "fragment"}
# We can also convert an integer to a string using `str`.
# + slideshow={"slide_type": "-"}
str(4) + str(5) + str(6)
# + [markdown] slideshow={"slide_type": "fragment"}
# **Exercise** Fix the bug in the following cell.
# + nbgrader={"grade": false, "grade_id": "string-concat-bug", "locked": false, "schema_version": 3, "solution": true, "task": false} slideshow={"slide_type": "-"} tags=["remove-output"]
num1 = input('Please input an integer: ')
num2 = input('Please input another integer: ')
# print(num1, '+', num2, 'is equal to', num1 + num2) # fix this line below
### BEGIN SOLUTION
print(num1, '+', num2, 'is equal to', int(num1) + int(num2))
### END SOLUTION
# + [markdown] slideshow={"slide_type": "slide"}
# ## Error
# + [markdown] slideshow={"slide_type": "fragment"}
# In addition to writing code, a programmer spends significant time in *debugging* code that contains errors.
#
# + [markdown] slideshow={"slide_type": "fragment"}
# **Can an error be automatically detected by the computer?**
# + [markdown] slideshow={"slide_type": "fragment"}
# - You have just seen an example of *logical error*, which is due to an error in the logic.
# - The ability to debug or even detect such an error is, unfortunately, beyond Python's intelligence.
# + [markdown] slideshow={"slide_type": "fragment"}
# Other kinds of error may be detected automatically.
# As an example, note that we can omit `+` for string concatenation, but we cannot omit it for integer summation:
# + slideshow={"slide_type": "fragment"}
print('Skipping + for string concatenation')
'4' '5' '6'
# + slideshow={"slide_type": "-"}
print('Skipping + for integer summation')
4 5 6
# + [markdown] slideshow={"slide_type": "fragment"}
# Python interpreter detects the bug and raises a *syntax* error.
# + [markdown] slideshow={"slide_type": "subslide"}
# **Why can a syntax error be detected automatically?
# Why is the print statement before the error not executed?**
# + [markdown] slideshow={"slide_type": "fragment"}
# - The Python interpreter can easily detect a syntax error even before executing the code simply because
# - it fails to interpret the code, i.e., to translate it into lower-level executable code.
# + [markdown] slideshow={"slide_type": "subslide"}
# The following code raises a different kind of error.
# + slideshow={"slide_type": "-"}
print("Evaluating '4' + '5' + 6")
'4' + '5' + 6 # summing string with integer
# + [markdown] slideshow={"slide_type": "fragment"}
# **Why does Python throw a TypeError when evaluating `'4' + '5' + 6`?**
# + [markdown] slideshow={"slide_type": "fragment"}
# There is no default implementation of `+` operation on a value of type `str` and a value of type `int`.
# + [markdown] slideshow={"slide_type": "fragment"}
# - Unlike a syntax error, the Python interpreter can only detect a type error at runtime (when executing the code).
# - Hence, such an error is called a *runtime error*.
#
# + [markdown] slideshow={"slide_type": "subslide"}
# **Why is TypeError a runtime error?**
# + [markdown] slideshow={"slide_type": "fragment"}
# The short answer is that Python is a [strongly-and-dynamically-typed](https://en.wikipedia.org/wiki/Strong_and_weak_typing) language:
# - Strongly-typed: Python does not force a type conversion to avoid a type error.
# - Dynamically-typed: Python allows data types to change at runtime.
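# A small illustration (added for clarity, not from the original slides) of both properties at once: a name may be rebound to a value of a different type at any time, yet Python never silently converts a `str` to an `int` to make an operation work.

```python
x = 1
print(type(x))    # <class 'int'>
x = 'one'         # rebinding to a str is fine: types may change at runtime
print(type(x))    # <class 'str'>
try:
    'one' + 1     # but Python never converts silently between str and int
except TypeError as e:
    print('TypeError:', e)
```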
# + [markdown] slideshow={"slide_type": "fragment"}
# The underlying details are more complicated than required for this course. It helps if you already know the following languages:
# - JavaScript, which is a *weakly-typed* language that forces a type conversion to avoid a type error.
# - C, which is a *statically-typed* language that does not allow data types to change at runtime.
# + slideshow={"slide_type": "-"} language="javascript"
# alert('4' + '5' + 6) // no error because 6 is converted to a str automatically
# + [markdown] slideshow={"slide_type": "fragment"}
# A weakly-typed language may seem more robust, but it can lead to [more logical errors](https://www.oreilly.com/library/view/fluent-conference-javascript/9781449339203/oreillyvideos1220106.html).
# To catch such errors earlier, [TypeScript](https://www.typescriptlang.org/) is a strongly-typed alternative to JavaScript.
# + [markdown] slideshow={"slide_type": "fragment"}
# **Exercise** Not all the strings can be converted into integers. Try breaking the following code by providing invalid inputs and record them in the subsequent cell. Explain whether the errors are runtime errors.
# + slideshow={"slide_type": "-"} tags=["remove-output"]
num1 = input('Please input an integer: ')
num2 = input('Please input another integer: ')
print(num1, '+', num2, 'is equal to', int(num1) + int(num2))
# + [markdown] nbgrader={"grade": true, "grade_id": "invalid-input", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false} slideshow={"slide_type": "-"}
# The possible invalid inputs are:
# > `4 + 5 + 6`, `15.0`, `fifteen`
#
# It raises a value error, which is a runtime error detected during execution.
#
# Note that the following inputs are okay:
# > `int('-1')`, `eval('4 + 5 + 6')`
# + [markdown] slideshow={"slide_type": "slide"}
# ## Floating Point Numbers
# + [markdown] slideshow={"slide_type": "fragment"}
# Not all numbers are integers. In engineering, we often need to use fractions.
# + [markdown] slideshow={"slide_type": "subslide"}
# **How to enter fractions in a program?**
# + slideshow={"slide_type": "fragment"}
x = -0.1 # decimal number
y = -1.0e-1 # scientific notation
z = -1/10 # fraction
x, y, z, type(x), type(y), type(z)
# + [markdown] slideshow={"slide_type": "subslide"}
# **What is the type `float`?**
# + [markdown] slideshow={"slide_type": "fragment"}
# - `float` corresponds to the [*floating point* representation](https://en.wikipedia.org/wiki/Floating-point_arithmetic#Floating-point_numbers).
# - A `float` is stored much like the way we write it in scientific notation:
#
# $$
# \overbrace{-}^{\text{sign}} \underbrace{1.0}_{\text{mantissa}\kern-1em}e\overbrace{-1}^{\text{exponent}\kern-1em}=-1\times 10^{-1}
# $$
# - The [truth](https://www.h-schmidt.net/FloatConverter/IEEE754.html) is more complicated than required for the course.
# + [markdown] slideshow={"slide_type": "fragment"}
# A mathematical integer may also be written as a `float` instead of an `int`:
# + slideshow={"slide_type": "-"}
type(1.0), type(1e2)
# + [markdown] slideshow={"slide_type": "fragment"}
# You can also convert an `int` or a `str` to a `float`.
# + slideshow={"slide_type": "-"}
float(1), float('1')
# + [markdown] slideshow={"slide_type": "subslide"}
# **Is it better to store an integer as `float`?**
# + [markdown] slideshow={"slide_type": "fragment"}
# Python stores a [floating point](https://docs.python.org/3/library/sys.html#sys.float_info) number with finite precision (usually as a 64-bit binary fraction):
# + slideshow={"slide_type": "fragment"}
import sys
sys.float_info
# + [markdown] slideshow={"slide_type": "fragment"}
# It cannot represent a number larger than the `max`:
# + slideshow={"slide_type": "-"}
sys.float_info.max * 2
# + [markdown] slideshow={"slide_type": "fragment"}
# The precision also affects the check for equality.
# + slideshow={"slide_type": "-"}
(1.0 == 1.0 + sys.float_info.epsilon * 0.5, # returns true if equal
1.0 == 1.0 + sys.float_info.epsilon * 0.6, sys.float_info.max + 1 == sys.float_info.max)
# + [markdown] slideshow={"slide_type": "fragment"}
# Another issue with `float` is that it may keep more decimal places than desired.
# + slideshow={"slide_type": "-"}
1/3
# + [markdown] slideshow={"slide_type": "subslide"}
# **How to [round](https://docs.python.org/3/library/functions.html#round) a floating point number to the desired number of decimal places?**
# + slideshow={"slide_type": "fragment"}
round(2.665,2), round(2.675,2)
# + [markdown] slideshow={"slide_type": "fragment"}
# **Why does 2.675 round to 2.67 instead of 2.68?**
# + [markdown] slideshow={"slide_type": "fragment"}
# - A `float` is actually represented in binary.
# - A decimal fraction [may not be represented exactly in binary](https://docs.python.org/3/tutorial/floatingpoint.html#tut-fp-issues).
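# A quick illustration (added here for emphasis, not from the original slides) of why binary floats cannot represent some decimal fractions exactly: `0.1` is stored as a nearby binary fraction, so arithmetic on it accumulates tiny errors.

```python
import math

print(0.1 + 0.2 == 0.3)              # False: the stored values differ slightly
print(f"{0.1:.20f}")                 # reveals the value actually stored for 0.1
print(math.isclose(0.1 + 0.2, 0.3))  # True: compare floats with a tolerance
```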
# + [markdown] slideshow={"slide_type": "subslide"}
# The `round` function can also be applied to an integer.
# + slideshow={"slide_type": "fragment"}
round(150,-2), round(250,-2)
# + [markdown] slideshow={"slide_type": "fragment"}
# **Why does 250 round to 200 instead of 300?**
# + [markdown] slideshow={"slide_type": "fragment"}
# - Python 3 implements the default rounding method in [IEEE 754](https://en.wikipedia.org/w/index.php?title=IEEE_754#Rounding_rules).
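# A small demonstration (added for illustration) of that default IEEE 754 rule, "round half to even": exact halves are rounded to the nearest even number, which avoids a systematic upward bias over many roundings.

```python
# Halves go to the nearest even integer, not always up:
print(round(0.5), round(1.5), round(2.5), round(3.5))  # 0 2 2 4
```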
# + [markdown] slideshow={"slide_type": "slide"}
# ## String Formatting
# + [markdown] slideshow={"slide_type": "subslide"}
# **Can we round a `float` or `int` for printing but not calculation?**
# + [markdown] slideshow={"slide_type": "fragment"}
# This is possible with [*format specifications*](https://docs.python.org/3/library/string.html#format-specification-mini-language).
# + slideshow={"slide_type": "-"}
x = 10000/3
print('x ≈ {:.2f} (rounded to 2 decimal places)'.format(x))
x
# + [markdown] slideshow={"slide_type": "subslide"}
# - `{:.2f}` is a *format specification*
# - that gets replaced by a string
# - that represents the argument `x` of `format`
# - as a decimal floating point number rounded to 2 decimal places.
# + [markdown] slideshow={"slide_type": "fragment"}
# **Exercise** Play with the following widget to learn the effect of different format specifications. In particular, print `10000/3` as `3,333.33`.
# + code_folding=[7] slideshow={"slide_type": "-"}
from ipywidgets import interact
@interact(x='10000/3',
align={'None':'','<':'<','>':'>','=':'=','^':'^'},
sign={'None':'','+':'+','-':'-','SPACE':' '},
width=(0,20),
grouping={'None':'','_':'_',',':','},
precision=(0,20))
def print_float(x,sign,align,grouping,width=0,precision=2):
format_spec = f"{{:{align}{sign}{'' if width==0 else width}{grouping}.{precision}f}}"
print("Format spec:",format_spec)
print("x ≈",format_spec.format(eval(x)))
# + nbgrader={"grade": true, "grade_id": "format-spec", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false}
print('{:,.2f}'.format(10000/3))
# + [markdown] slideshow={"slide_type": "subslide"}
# String formatting is useful for different data types other than `float`.
# E.g., consider the following program that prints a time specified by some variables.
# + slideshow={"slide_type": "-"}
# Some specified time
hour = 12
minute = 34
second = 56
print("The time is " + str(hour) + ":" + str(minute) + ":" + str(second)+".")
# + [markdown] slideshow={"slide_type": "fragment"}
# Imagine you have to show also the date in different formats.
# The code can become very hard to read/write because
# - the message is a concatenation of multiple strings and
# - the integer variables need to be converted to strings.
# + [markdown] slideshow={"slide_type": "fragment"}
# Omitting `+` leads to a syntax error. Removing `str` as follows also does not give the desired format.
# + slideshow={"slide_type": "-"}
print("The time is ", hour, ":", minute, ":", second, ".") # note the extra spaces
# + [markdown] slideshow={"slide_type": "subslide"}
# To make the code more readable, we can use the `format` function as follows.
# + slideshow={"slide_type": "-"}
message = "The time is {}:{}:{}."
print(message.format(hour,minute,second))
# + [markdown] slideshow={"slide_type": "fragment"}
# - We can have multiple *place-holders* `{}` inside a string.
# - We can then provide the contents (any type: numbers, strings..) using the `format` function, which
# - substitutes the place-holders by the function arguments from left to right.
# + [markdown] slideshow={"slide_type": "subslide"}
# According to the [string formatting syntax](https://docs.python.org/3/library/string.html#format-string-syntax), we can change the order of substitution using
# - indices *(0 is the first item)* or
# - names inside the placeholder `{}`:
# + slideshow={"slide_type": "fragment"}
print("You should {0} {1} what I say instead of what I {0}.".format("do", "only"))
print("The surname of {first} {last} is {last}.".format(first="John", last="Doe"))
# + [markdown] slideshow={"slide_type": "subslide"}
# You can even put variables inside the format specification directly and have a nested string formatting.
# + slideshow={"slide_type": "-"}
align, width = "^", 5
print(f"{{:*{align}{width}}}".format(x)) # note the syntax f"..."
# + [markdown] slideshow={"slide_type": "subslide"}
# **Exercise** Play with the following widget to learn more about the format specification.
# 1. What happens when `align` is none but `fill` is `*`?
# 1. What happens when the `expression` is a multi-line string?
# + code_folding=[] slideshow={"slide_type": "-"}
from ipywidgets import interact
@interact(expression=r"'ABC'",
fill='*',
align={'None':'','<':'<','>':'>','=':'=','^':'^'},
width=(0,20))
def print_object(expression,fill,align='^',width=10):
format_spec = f"{{:{fill}{align}{'' if width==0 else width}}}"
print("Format spec:",format_spec)
print("Print:",format_spec.format(eval(expression)))
# + [markdown] nbgrader={"grade": true, "grade_id": "string-formatting", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false}
# 1. It raises a ValueError because an alignment must be specified whenever a fill character is.
# 1. The newline character is simply regarded as a character. The formatting is not applied line-by-line. E.g., try 'ABC\nDEF'.
| _build/jupyter_execute/Lecture2/Values and Variables.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Praktikum 5 | Pengolahan Citra
# ## Brightness, Contrast, Auto-level Contrast
# <NAME> | 2103161037 | 2 D3 Teknik Informatika B
# -------------------------------------------------------
# ## Import dependencies
import numpy as np
import imageio
import matplotlib.pyplot as plt
# ## Reading the image
img = imageio.imread("gambar4.jpg")
# ## Getting the image resolution and type
img_height = img.shape[0]
img_width = img.shape[1]
img_channel = img.shape[2]
img_type = img.dtype
# ------------------------------------------------
# ## Grayscale Brightness
# ### Create the variable img_brightness to hold the result
img_brightness = np.zeros(img.shape, dtype=np.uint8)
# ### Add brightness by the value passed as a parameter
def brighter(nilai):
for y in range(0, img_height):
for x in range(0, img_width):
red = img[y][x][0]
green = img[y][x][1]
blue = img[y][x][2]
gray = (int(red) + int(green) + int(blue)) / 3
gray += nilai
if gray > 255:
gray = 255
if gray < 0:
gray = 0
img_brightness[y][x] = (gray, gray, gray)
# ### Show some results with brightness values -100 and 100
# +
brighter(-100)
plt.imshow(img_brightness)
plt.title("Brightness -100")
plt.show()
brighter(100)
plt.imshow(img_brightness)
plt.title("Brightness 100")
plt.show()
# -
# -------------------------------------------------------------
# ## RGB Brightness
# ### Create the variable img_rgbbrightness to hold the result
img_rgbbrightness = np.zeros(img.shape, dtype=np.uint8)
# ### Add brightness by the value passed as a parameter
def rgbbrighter(nilai):
    for y in range(0, img_height):
        for x in range(0, img_width):
            # Cast to int before adding: otherwise the uint8 values wrap
            # around (e.g. 50 - 100 becomes 206) and the clamping checks
            # below never fire.
            red = int(img[y][x][0])
            red += nilai
            if red > 255:
                red = 255
            if red < 0:
                red = 0
            green = int(img[y][x][1])
            green += nilai
            if green > 255:
                green = 255
            if green < 0:
                green = 0
            blue = int(img[y][x][2])
            blue += nilai
            if blue > 255:
                blue = 255
            if blue < 0:
                blue = 0
            img_rgbbrightness[y][x] = (red, green, blue)
# ### Show some results with brightness values -100 and 100
# +
rgbbrighter(-100)
plt.imshow(img_rgbbrightness)
plt.title("Brightness -100")
plt.show()
rgbbrighter(100)
plt.imshow(img_rgbbrightness)
plt.title("Brightness 100")
plt.show()
# -
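# The pixel-by-pixel loop above is correct but slow in Python. A vectorized NumPy sketch (my own alternative, not part of the original praktikum) does the same clamped addition in a few array operations:

```python
import numpy as np

def rgb_brighter_vectorized(img, value):
    # Cast to a wider signed type first so uint8 arithmetic cannot wrap,
    # then clip back into the valid 0-255 range in one step.
    out = img.astype(np.int16) + value
    return np.clip(out, 0, 255).astype(np.uint8)
```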
# ------------------------------------------------------------------
# ## Contrast
# ### 1. Create the variable img_contrass to hold the result
img_contrass = np.zeros(img.shape, dtype=np.uint8)
# ### 2. Multiply the gray level by the contrast value passed as a parameter
def contrass(nilai):
for y in range(0, img_height):
for x in range(0, img_width):
red = img[y][x][0]
green = img[y][x][1]
blue = img[y][x][2]
gray = (int(red) + int(green) + int(blue)) / 3
gray *= nilai
if gray > 255:
gray = 255
img_contrass[y][x] = (gray, gray, gray)
# ### Show some results with contrast values 2 and 3
# +
contrass(2)
plt.imshow(img_contrass)
plt.title("Contrast 2")
plt.show()
contrass(3)
plt.imshow(img_contrass)
plt.title("Contrast 3")
plt.show()
# -
# ---------------------------------------------
# ## Auto-level Contrast
# ### 1. Create the variable img_autocontrass to hold the result
img_autocontrass = np.zeros(img.shape, dtype=np.uint8)
# ### 2. Stretch the contrast automatically
def autocontrass():
    # Track the minimum (xmin) and maximum (xmax) gray levels; their
    # difference d determines how much the gray range must be stretched.
    # (The original code had the two names swapped.)
    xmin = 255
    xmax = 0
    d = 0
    for y in range(0, img_height):
        for x in range(0, img_width):
            red = img[y][x][0]
            green = img[y][x][1]
            blue = img[y][x][2]
            gray = (int(red) + int(green) + int(blue)) / 3
            if gray < xmin:
                xmin = gray
            if gray > xmax:
                xmax = gray
    d = xmax - xmin
    # Map [xmin, xmax] linearly onto the full [0, 255] range
    for y in range(0, img_height):
        for x in range(0, img_width):
            red = img[y][x][0]
            green = img[y][x][1]
            blue = img[y][x][2]
            gray = (int(red) + int(green) + int(blue)) / 3
            gray = int(float(255 / d) * (gray - xmin))
            img_autocontrass[y][x] = (gray, gray, gray)
# ### 3. Show the auto-level contrast result
autocontrass()
plt.imshow(img_autocontrass)
plt.title("Auto-level Contrast")
plt.show()
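# As with brightness, the auto-level stretch can be vectorized. This sketch is my own alternative (not part of the original praktikum) and assumes the image is not a single flat color, so the gray range is non-zero:

```python
import numpy as np

def autocontrast_vectorized(img):
    # Per-pixel grayscale, then stretch [min, max] linearly onto [0, 255].
    gray = img.astype(float).mean(axis=2)
    lo, hi = gray.min(), gray.max()          # assumes hi > lo
    stretched = (255.0 / (hi - lo)) * (gray - lo)
    out = stretched.astype(np.uint8)
    return np.stack([out, out, out], axis=2)
```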
# ---------------------------------------------
| .ipynb_checkpoints/Praktikum 05 - Brightness, Contrass, Autolevel-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.3 64-bit (''base'': conda)'
# language: python
# name: python38364bitbaseconda95a146a3e0f24d269763f2b941d4e57c
# ---
# +
# Dependencies
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import string
import nltk
import sys
import os
import re
# Download the required corpora from NLTK
# (naming the resources avoids the blocking interactive downloader)
nltk.download('punkt')
nltk.download('stopwords')
# Set matplotlib inline
# %matplotlib inline
# + pycharm={"name": "#%%\n"}
# Define path to CSV file
path = 'data/styles.csv'
# Load data from CSV file
ds = pd.read_csv(path, index_col='id', sep=',', on_bad_lines='warn')
ds.describe(include='all')
# + pycharm={"name": "#%%\n"}
# Define tags column
tags = ds.productDisplayName.astype(str)
# Make words lowercase
tags = tags.str.lower()
# Define punctuation
punctuation = set(string.punctuation)
# Remove punctuation
tags = tags.apply(lambda s: ''.join([c for c in s if c not in punctuation]))
# Tokenize words
tags = tags.apply(lambda s: nltk.word_tokenize(s))
# Define stop words
stop_words = set(nltk.corpus.stopwords.words('english'))
# Remove stop words
tags = tags.apply(lambda s: [w for w in s if w not in stop_words])
# Remove words containing digits
tags = tags.apply(lambda s: [w for w in s if not re.search(r'\d', w)])
# Remove words shorter than three characters
tags = tags.apply(lambda s: [w for w in s if len(w) > 2])
tags.head()
# + pycharm={"name": "#%%\n"}
# Retrieve words
words = pd.Series([w for s in tags.tolist() for w in s])
# Count word occurrences
count = words.value_counts()
count.describe()
# + pycharm={"name": "#%%\n"}
fig, axs = plt.subplots(1, 2, figsize=(30, 10))
_ = axs[0].bar(count[:10].index, count[:10].values)
_ = axs[1].bar(count[-10:].index, count[-10:].values)
_ = plt.show()
# + pycharm={"name": "#%%\n"}
# Define a vocabulary of words
# NOTE Words with three or fewer occurrences are excluded
vocabulary = pd.Series(list({w for w in words.values if count[w] > 3}))
vocabulary.describe()
# + pycharm={"name": "#%%\n"}
# # One-hot-encode vocabulary
# encoded = pd.get_dummies(vocabulary)
# encoded.head()
# + pycharm={"name": "#%%\n"}
# Swap index and value
vocabulary = pd.Series(dict((w, i) for i, w in vocabulary.items()))  # iteritems() was removed in pandas 2.0
# Store vocabulary
vocabulary.to_json('data/vocabulary.json')
vocabulary.head()
# + pycharm={"name": "#%%\n"}
# Use vocabulary to turn words into numbers
tokens = tags.apply(lambda s: [vocabulary[w] for w in s if w in vocabulary.index])
tokens.to_json('data/tokens.json')
tokens.head()
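# If the stored tokens ever need to be turned back into words, the vocabulary mapping can be inverted. The two-word vocabulary below is hypothetical, standing in for the one built above:

```python
import pandas as pd

# Hypothetical vocabulary: word -> token id
vocabulary = pd.Series({'shirt': 0, 'blue': 1})

# Invert the mapping: token id -> word
inverse = pd.Series(vocabulary.index, index=vocabulary.values)

decoded = [inverse[t] for t in [1, 0]]
print(decoded)  # ['blue', 'shirt']
```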
# + pycharm={"name": "#%%\n"}
| dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="yvRFuFBLrsac"
# # Machine Learning Basics
# In this module, you'll be implementing a simple Linear Regressor and Logistic Regressor. You will be using the Salary Data for the tasks in this module. <br> <br>
# **Pipeline:**
# * Acquiring the data - done
# * Handling files and formats - done
# * Data Analysis - done
# * Prediction
# * Analysing results
# + [markdown] colab_type="text" id="AwvgLLICtyt_"
# ## Imports
# You may require NumPy, pandas, matplotlib and scikit-learn for this module. Do not, however, use the inbuilt Linear and Logistic Regressors from scikit-learn.
# -
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import pickle as pkl
import math
# + [markdown] colab_type="text" id="yE5Sz6nKvjTS"
# ## Dataset
# You can load the dataset and perform any dataset related operations here. Split the data into training and testing sets. Do this separately for the regression and classification problems.
# +
data = pd.read_csv(r'C:\Users\siddh\Downloads\MLBasics-master\Data\SalaryData.txt',sep = " ")
test_split = int(0.75 * data.shape[0])
linear_data_train = data.loc[:test_split]
print(" Linear Regression Training Data Frame \n")
print(linear_data_train)
print("\n")
linear_data_test = data.loc[test_split+1:]
print(" Linear Regression Testing Data Frame \n")
print(linear_data_test)
print("\n")
fig,ax1 = plt.subplots()
ax1.set_title(" Linear Regression Plot ",fontsize = 20)
ax1.set_xlabel(" YearsExperience ",fontsize = 15)
ax1.set_ylabel(" Salary ",fontsize = 15)
X = data['YearsExperience']
Y = data['Salary']
ax1.scatter(X,Y,color = 'red')
plt.show()
# -
data
# + [markdown] colab_type="text" id="VienPTZA1ZEr"
# ## Task 1a - Linear Regressor
# Code your own Linear Regressor here, and fit it to your training data. You will be predicting salary based on years of experience.
# +
X = data['YearsExperience']
X = X.to_numpy()
X = X.reshape(-1,1)
ones = np.ones([X.shape[0], 1])
X = np.concatenate([ones, X],1)
y = data['Salary']
y = y.to_numpy()
y = y.reshape(-1,1)
fig,ax1 = plt.subplots()
ax1.set_title(" Linear Regression Training Plot ",fontsize = 20)
ax1.set_xlabel(" Years Experience ",fontsize = 15)
ax1.set_ylabel(" Salary ",fontsize = 15)
ax1.scatter(data['YearsExperience'],data['Salary'],color = 'red')
plt.show()
# +
alpha = 0.0001
iters = 100000
theta = np.array([[1.0, 1.0]])
def computeCost(X, y, theta):
inner = np.power(((X @ theta.T) - y), 2)
return np.sum(inner) / (2 * len(X))
print("Cost Function : ",computeCost(X, y, theta))
def gradientDescent(X, y, theta, alpha, iters):
for i in range(iters):
theta = theta - (alpha/len(X)) * np.sum((X @ theta.T - y) * X, axis=0)
cost = computeCost(X, y, theta)
return (theta, cost)
g, cost = gradientDescent(X, y, theta, alpha, iters)
print("Theta Values : " ,g)
print("Cost Function After Gradient Descent : ",cost)
plt.scatter(data['YearsExperience'].to_numpy().reshape(-1,1), y,color = "red")
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = g[0][0] + g[0][1]* x_vals
plt.plot(x_vals, y_vals, '--')
# +
y_pred = []
x_actual = data['YearsExperience'].to_numpy()
y_actual = data['Salary'].to_numpy()
y_pred = g[0][0] + g[0][1] * x_actual
mse = np.sum((y_pred - y_actual)**2)
rmse = np.sqrt(mse / len(y_actual))  # divide by the sample size, not a hardcoded 30
print("RMSE Value : ",rmse)
ssr = np.sum((y_pred - y_actual)**2)
print("SSR Value : ",ssr)
sst = np.sum((y_actual - np.mean(y_actual))**2)
print("SST Value : ",sst)
r2_score = 1 - (ssr/sst)
print("R2 Score : ",r2_score)
# -
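# As a sanity check on the gradient-descent fit (my own addition, not part of the assignment), the closed-form least-squares solution can be computed from the normal equation $(X^T X)\theta = X^T y$. The `X` and `y` here are small synthetic stand-ins for the arrays built above:

```python
import numpy as np

# Synthetic design matrix (bias column + feature) and a perfectly
# linear target y = 3 + 2x, so the exact solution is known.
X = np.column_stack([np.ones(6), np.arange(6.0)])
y = 3.0 + 2.0 * np.arange(6.0)

# Solve the normal equation directly.
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)  # close to [3., 2.]
```

Gradient descent on the same data should converge toward these values; a large gap signals a learning-rate or iteration-count problem.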
# ## Task 1b - Logistic Regression
# Code your own Logistic Regressor here, and fit it to your training data. You will first have to create a column, 'Salary<60000', which contains '1' if salary is less than 60000 and '0' otherwise. This is your target variable, which you will aim to predict based on years of experience.
# +
data1 = pd.read_csv(r'C:\Users\siddh\Downloads\MLBasics-master\Data\SalaryData.txt',sep = " ")
salarylessthan60000 = []
for sal in data1['Salary']:
salarylessthan60000.append(int(sal < 60000))
data1['Salary<60000'] = salarylessthan60000
test_split = int(0.75 * data1.shape[0])
logistic_data_train = data1.loc[:test_split]
logistic_data_test = data1.loc[test_split+1:]
print(" Logistic Regression Training Data Frame \n")
print(logistic_data_train)
print("\n")
print(" Logistic Regression Testing Data Frame \n")
print(logistic_data_test)
print("\n")
fig,ax1 = plt.subplots()
ax1.set_title(" Logistic Regression Plot ",fontsize = 20)
ax1.set_xlabel(" Years Experience ",fontsize = 15)
ax1.set_ylabel(" Salary<60000 ",fontsize = 15)
X = data1['YearsExperience']
Y = data1['Salary<60000']
ax1.scatter(X,Y,color = 'green')
plt.show()
# +
X = data1.iloc[:,:-1]
Y = data1.iloc[:, -1]
lessthan60000 = data1.loc[Y == 1]
greaterthan60000 = data1.loc[Y == 0]
fig,ax1 = plt.subplots()
ax1.set_title(" Logistic Regression Plot ",fontsize = 20)
ax1.set_xlabel(" Years Experience ",fontsize = 15)
ax1.set_ylabel(" Salary ",fontsize = 15)
ax1.scatter(lessthan60000['YearsExperience'],lessthan60000['Salary<60000'],color = 'green',label = 'LessThan60000')
ax1.scatter(greaterthan60000['YearsExperience'],greaterthan60000['Salary<60000'],color = 'red', label='GreaterThan60000')
plt.legend()
plt.show()
# +
def sigmoid(X):
return (1 / (1 + np.exp((-X))))
x0 = 1
x1 = 1
rate = 0.0003
x = np.array(data1['YearsExperience'])
y = np.array(data1['Salary<60000'])
theta1 = 0.0
theta2 = 0.0
acc=[]
cost = []
iter = 0
while iter < 5000:
cost.append(0.0)
theta1 = 0.0
theta2 = 0.0
acc.append(0.0)
for i in range(len(x)):
y_pred = sigmoid(x[i] * x1 + x0)
theta1 += y_pred - y[i]
theta2 += (y_pred - y[i] )* x[i]
cost[iter] += y[i] * np.log(y_pred) + (1 - y[i]) * np.log(1 - y_pred)
if int(y_pred > 0.5) == y[i]:
acc[iter] += 1
acc[iter] = acc[iter] / len(x)
cost[iter] /= len(x)
x0 = x0 - rate / len(x) * theta1
x1 = x1 - rate / len(x) * theta2
iter += 1
print("Accuracy : ",acc[iter-1])
print("Slope: ",x1)
print("Intercept: ",x0)
plt.plot([i for i in range(iter)],cost,label = 'Cost Function')
plt.scatter(x=[i for i in range(iter)],y=acc, label = 'Accuracy',color='green')
plt.legend(loc='lower right',fontsize=15)
plt.show()
# +
x_plt = np.linspace(0,11,100)
y_plt = np.array([sigmoid(xi * x1+ x0) for xi in x_plt])
fig,ax1 = plt.subplots()
ax1.scatter(data1['YearsExperience'],data1['Salary<60000'],label = 'Actual Values')
ax1.set_title('Logistic Regression Plot',fontsize = 20)
ax1.set_xlabel('Experience',fontsize = 15)
ax1.set_ylabel('Salary < 60000',fontsize = 15)
x_pred = np.array(data1['YearsExperience'])
y_pred = [int(sigmoid(xi * x1 + x0) > 0.5) for xi in x_pred]
ax1.plot(x_plt,y_plt,color = 'green',label = 'Predicted Curve')
ax1.scatter(x_pred,y_pred,color = 'red',label = 'Predicted Values')
ax1.legend(loc= 'upper right',fontsize=15)
plt.show()
# + [markdown] colab_type="text" id="vaCu6RS52qYf"
# ## Task 2 - Results
# Analyse the quality of the ML models you built using metrics such as R2, MAE and RMSE for the Linear Regressor, and Accuracy for the Logistic Regressor. Evaluate their performance on the testing set.
# -
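# One possible sketch of the metrics Task 2 asks for, written from their standard definitions (the helper names are my own, not given by the assignment):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    # Mean absolute error
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y_true - y_pred))

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SSR/SST
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def accuracy(y_true, y_pred):
    # Fraction of matching class labels
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))
```

These can be applied to the held-out test frames (`linear_data_test`, `logistic_data_test`) built earlier to evaluate out-of-sample performance.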
| Module3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# name: ir
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/tbonne/IntroPychStats/blob/main/lm_EDA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="trouKhmEw-ii"
# <img src='http://drive.google.com/uc?export=view&id=1fDQUuaVfjkpHK2MpaeIQBe4UBME7l4SP' width=500>
#
#
# + [markdown] id="5Z2vKe5lxjuk"
# #<font color='darkorange'>Exploratory Data Analysis</font>
# + [markdown] id="_QeNGJdGx3tD"
# In this notebook we'll do some exploratory data analysis! You'll split into groups and choose a dataset to work on. I've placed a few in this colab document so you can choose whichever one your group finds the most interesting.
#
# + [markdown] id="GNz8wsU5jsPk"
# ### 1. Load in the data
# + [markdown] id="h5dkpbBqARsC"
# Lets load in some packages. These have functions that other people have made, and will hopefully make our lives a lot easier!
# + id="NMCdcYmyAQNB"
install.packages("jtools")
install.packages("ggstance")
library(jtools)
# + [markdown] id="FvGBLDcmW8YG"
# Below i've put in a few datasets that you might want to work with. Choose the one you find most interesting by removing the hashtag at the start of the line.
#
# To read more about each option:
# > [Boston house prices](https://www.kaggle.com/vikrishnan/boston-house-prices)
#
# > [Country happiness levels](https://www.kaggle.com/unsdsn/world-happiness)
#
# > [Student drinking behaviour on grades](https://www.kaggle.com/larsen0966/student-performance-data-set)
#
# > [Impact of female/male hurricane names](https://www.pnas.org/content/111/24/8782)
#
# > [Impact of a lunch program on student performance](https://www.kaggle.com/spscientist/students-performance-in-exams)
#
# > [Belief in fate replication](https://journals.sagepub.com/doi/10.1177/2515245918810225) (section 18. Reluctance to tempt fate )
#
# > [Level of crime in US states](https://www.sheffield.ac.uk/mash/statistics/datasets)
#
# > [Level of happiness and alcohol consumption](https://www.kaggle.com/marcospessotto/happiness-and-alcohol-consumption)
#
# > [Well-being and sociometric status](https://journals.sagepub.com/doi/10.1177/2515245918810225) (section 12. Sociometric status and well-being)
#
#
#
# + [markdown] id="l5wIgtzAxJPM"
# If none of these datasets interest your group and you are feeling adventurous, check out [kaggle.com/datasets](https://www.kaggle.com/datasets)!
#
# You might have to create an account and log in, but you'll get access to a lot of data. To bring the data into colab, just:
#
# > Download the data from kaggle (csv format if possible)
# > Click on the folder icon (top left of your screen)
# > Drag and drop your file into the files
# > Right click on your new file, then copy path
# > Finally, paste the path into read.csv("paste_your_path_here", header=T)
#
# You should then be able to see the data!
#
# + id="W4l2Cb0PzjGW"
#Work with the boston house prices
df_EDA <- read.csv("https://raw.githubusercontent.com/tbonne/IntroPychStats/main/data/BostonHousesPrices.csv", header = T)
#Work with the country happiness dataset
#df_EDA <- read.csv("https://raw.githubusercontent.com/tbonne/IntroPychStats/main/data/worldHappiness.csv", header = T)
#student drinking behaviour and grades
#df_EDA <- read.csv("https://raw.githubusercontent.com/tbonne/IntroPychStats/main/data/student_alcohol.csv", header = T)
#Impact of female/male hurricane names on death and damage
#df_EDA <- read.csv("https://raw.githubusercontent.com/tbonne/IntroPychStats/main/data/hurricane_names.csv", header = T)
#student performance related to preparation and lunch programs
#df_EDA <- read.csv("https://raw.githubusercontent.com/tbonne/IntroPychStats/main/data/StudentsPerformance.csv", header = T)
#belief in fate dataset
#df_EDA <- read.csv("https://raw.githubusercontent.com/tbonne/IntroPychStats/main/data/Risen_study.csv", header = T)
#Level of crime in US states
#df_EDA <- read.csv("https://raw.githubusercontent.com/tbonne/IntroPychStats/main/data/Crime_R.csv", header = T)
#Country happiness and alcohol consumption
#df_EDA <- read.csv("https://raw.githubusercontent.com/tbonne/IntroPychStats/main/data/HappinessAlcoholConsumption.csv", header = T)
#Are people with more favourable social connections (sociometric status) viewed as having higher well-being?
#df_EDA <- read.csv("https://raw.githubusercontent.com/tbonne/IntroPychStats/main/data/Anderson_1_study_manyLabs2.csv", header = T)
#load any data you find!
#df_EDA <- read.csv("paste_your_copied_path_here!", header = T)
#let's take a look at the data
head(df_EDA)
# + [markdown] id="-lze8JTlc74F"
# Now that we can see the data, think about a question you might like to ask or about what variable you'd like to predict.
#
# > E.g., what predicts house prices?
# + [markdown] id="87dWXJj-dIvS"
# Write out your question in words here:
#
# + [markdown] id="AMbKrM93zjGX"
# ### 2. Visualize our data
#
# Then let's plot a scatterplot (feel free to plot a few here, it is always a good idea to explore your data before modeling!). Here we will choose:
# > What we'd like to predict and put it on the y-axis.
#
# > What we'd like to use to help make those predictions and put it on the x-axis.
#
# > The choice of these variables should follow from the question you're asking above!
#
# + [markdown] id="1RyfZwaKAFmq"
# <font color = "darkred"> (?) for the first question mark below you should replace it with the column name that you'd like to use to make predictions. For the second question mark you should replace it with the variable you'd like to predict </font>
# + id="YCT-75fRzjGX"
plot(x=df_EDA$?,y=df_EDA$?)
# + [markdown] id="wTDGJ6E559xG"
# If one of your variables you'd like to use to make predictions is a categorical variable you might want to plot a boxplot. Replace the first question mark with the categorical variable, and the second question mark with the variable you'd like to predict.
# + id="4zQK1CIO_g8p"
plot(x=factor(df_EDA$?),y=df_EDA$?)
# + [markdown] id="K9X0c24QzjGX"
# ### 3. Define and fit a model
#
# Now we can specify the model we'd like to fit.
# > Remember, here we use the formula: "what we'd like to predict" ~ "what we'd like to use to help make those predictions."
#
# + [markdown] id="Er24y1fOAe03"
# <font color = "darkred"> (?) for the question mark below you should replace it with the formula that will help you answer your question. </font>
# + id="g0-O08LrzjGY"
#fit a linear model
model_EDA <- lm(?, data=df_EDA)
# + [markdown] id="svHTbyO12Guk"
# This bit of code then uses our inputs to find the best-fit linear equation for:
# > $y \sim Normal(\mu, \sigma) $
#
# > $\mu = a + b_1 x_1 + b_2 x_2 + \dots$
#
# + [markdown] id="lGtlgt5I0fJ4"
# Let's use the summ function to tell us what values of a and b it found for the best fit line.
# > Note: we'll also calculate our 95% confidence interval here too!
# + id="32dy8P1A0fiu"
#What does the best fit model look like?
summ(model_EDA, confint=TRUE, scale = TRUE)
# + [markdown] id="4KCh21RJzjGZ"
# We can see from this output that the model is pretty certain that the slope of the population is somewhere between ? and ?.
# > Those are the range of population values that are compatible with our sample!
#
# We can also get a sense of how well your model predictions reflect the observed values using R2.
# + [markdown] id="odCSqr1BZv_u"
# ### 4. Visualize the results
# + [markdown] id="XdjGuaFZ_jYT"
# Let's take a look at the estimates a little more visually
# + id="r4fj57a7_mRa"
#plot the estimates of the slopes
plot_summs(model_EDA, scale=TRUE)
# + [markdown] id="qhxESbN27BNZ"
# Let's take a look at the regression line a little more visually
# + [markdown] id="VPzC25sMA8uk"
# <font color = "darkred"> (?) for the question mark below you should replace it with a variable that you used to make your predictions. </font>
# > Note: if you used more than one variable to help make your predictions, then you can replace the question mark for each one. This will help you visualize the relationship between a predictor variable and the response.
# + id="t68PheeXo7F9"
#plot line on the data
effect_plot(model_EDA, pred = ?, interval = TRUE, plot.points = TRUE)
# + [markdown] id="ahoXU8HzYzlM"
# ### 5. Checking assumptions
# + [markdown] id="FcO_XSKdfdyr"
# **Assumption 1**
#
# Let's check the assumption that the errors (residuals) are normally distributed.
# + id="ASugh5jufsY5"
hist(model_EDA$residuals)
# + [markdown] id="EtBUci2VkxRS"
# The above plot is just like the histograms we've looked at in the past. Now we are looking at how errors are distributed.
#
# > If the errors do not look to have many small errors and few large errors (both positive and negative) then a normal distribution might not be the best model of the data. We might also be missing an important variable...
# + [markdown] id="58GItzyDhntE"
# **Assumption 2** - no patterns in the residuals
#
# Let's check the assumption that the variance in the errors is constant.
# + id="2yUMbAxyhuy7"
plot(y=model_EDA$residuals, x=model_EDA$fitted.values)
abline(h = 0, lty=3)
# + [markdown] id="nm3ciXHmj88J"
# The above plot shows you all the errors (residuals) for each value that the model predicts. Ideally, we'd like to see errors evenly distributed around 0 (i.e., the dashed line).
#
# > If there is more variance in the errors for some prediction values then this means the model is better at predicting some values than others.
# + [markdown] id="B7sq7l6wl4KZ"
# **Assumption 3** - linearity
#
# Let's check the assumption that the relationship between your variables is linear (i.e., that a straight line and not a curvy line fits the data best). We can see this intuitively in the original scatter plot, or we can look at the residuals!
# + id="U3V5fD6JmUg9"
plot(y=model_EDA$residuals, x=model_EDA$fitted.values)
abline(h = 0, lty=3)
# + [markdown] id="E-q2qacNmZdH"
# The plot above is just the line fit to the scatterplot we saw before. Intuitively you can check to see if the straight line fits the data, or if a curvy line might fit better.
# + [markdown] id="6k_jpW3Em3Oq"
# There are two things to keep in mind when checking the assumptions of the linear regression.
#
# > The first is that the assumptions do not need to be perfect to give you a reasonable estimate.
#
# > The second is that often the way the model fails can help you build a better model.
# + [markdown] id="dVm_EGLicEd2"
# ### 6. Interpret and communicate the results
# + [markdown] id="paxXBaCUcGbn"
# Given the results above, how can you answer the question you posed in section 1?
# > What is the association between the variables that you tested?
#
# >> <font color="darkblue">E.g., I found a positive association between the number of rooms and the price of a house (4.08, 95%CI: 1.82, 6.34).</font>
#
# > What does the confidence interval tell you about how certain you are in the sign and magnitude of that association?
#
# >> <font color="darkblue">E.g., While the results suggest the sign of the association between number of rooms and price is positive, it is relatively uncertain about the magnitude (95%CI: 1.82, 6.34).</font>
#
# > How "good" are your model predictions?
#
# >> <font color="darkblue">E.g., The model was able to predict house prices with an r2 of 0.50.</font>
#
# > How closely does your model meet the model assumptions?
#
# >> <font color="darkblue">The model residuals are approximately normally distributed (e.g., point to your histogram). A plot of the residuals vs. predicted suggest some maximum residual values depending on the predicted value (e.g., point to your residual vs. fitted plot).</font>
| lm_EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# The intended purpose of nVision is to visualize high-dimensional spaces in a reduced representation (2D/3D) and give users tools to identify patterns within their data.
#
# In our four days, we were able to explore existing tools in `scikit-learn` such as Principal Component Analysis that address this set of problems, and made a few initial composite functions for user workflows.
# + [markdown] slideshow={"slide_type": "slide"}
# The two functions we developed are
#
# * `interaction_features` for extending datasets with nonlinear combinations of the original features and
# * `pca_analysis` for an integrated pca analysis with upstream pre-processing steps
# -
# Let's say we have a 4d dataset from which we can't easily resolve subpopulations in the data:
# + slideshow={"slide_type": "slide"}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# + slideshow={"slide_type": "fragment"}
loc = {}
loc[0] = [2.5, 1, 0, 0.75]
loc[1] = [5, 3.5, 5, 5]
loc[2] = [2.5, 4, 5, 7]
loc[3] = [5, 1, 5, 0.5]
# + slideshow={"slide_type": "fragment"}
pops = 4
size = 500
rawdata = np.zeros((size*pops, 5))
for i in loc:
    for j, mean in enumerate(loc[i]):
        rawdata[i*size:(i+1)*size, j] = np.random.normal(loc=mean, size=size)
    rawdata[i*size:(i+1)*size, 4] = i
data = pd.DataFrame(data=rawdata)
# + slideshow={"slide_type": "subslide"}
sns.pairplot(data.iloc[:,0:4])
# + [markdown] slideshow={"slide_type": "slide"}
# In no 2d space can we easily resolve any subpopulations. A PCA analysis allows us to reduce a high-dimensional space into a lower dimensional space.
#
# We created a simple composite function of data pre-processing steps and a pca analysis in a single function to do this:
# + slideshow={"slide_type": "fragment"}
from nVision import pca
newdata, model, comps = pca.pca_analysis(data.loc[:, 0:3])
# + slideshow={"slide_type": "fragment"}
fig, ax = plt.subplots(figsize=(5,5))
ax.scatter(newdata[0], newdata[1], alpha=0.6)
# + [markdown] slideshow={"slide_type": "fragment"}
# The overall PCA workflow is able to separate out populations of interest in a reduced dimensional space. What required 4 dimensions to describe previously now requires two.
# + slideshow={"slide_type": "skip"}
np.cumsum(model.explained_variance_ratio_)
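If nVision isn't available, here is a minimal sketch of what a composite `pca_analysis` function might look like when built from scikit-learn pieces; the exact pre-processing steps inside nVision's version are an assumption here:

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_analysis_sketch(df, n_components=2):
    """Standardize the features, fit a PCA, and return the
    transformed data, the fitted model, and the component loadings."""
    scaled = StandardScaler().fit_transform(df)           # zero mean, unit variance
    model = PCA(n_components=n_components).fit(scaled)
    newdata = pd.DataFrame(model.transform(scaled))       # columns 0, 1 = PC scores
    comps = pd.DataFrame(model.components_, columns=df.columns)
    return newdata, model, comps
```

Standardizing before the PCA keeps features on different scales from dominating the components.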
# + [markdown] slideshow={"slide_type": "slide"}
# In the reduced feature space, we can use a clustering algorithm to parse out observations with similar features.
# + slideshow={"slide_type": "skip"}
from sklearn.cluster import MiniBatchKMeans, KMeans
from sklearn.metrics.pairwise import pairwise_distances_argmin
from sklearn.datasets import make_blobs  # samples_generator module was removed in newer scikit-learn
# + slideshow={"slide_type": "fragment"}
fig, ax = plt.subplots(figsize=(5,5))
k_means = KMeans(init='k-means++', n_clusters=4, n_init=10)
k_means.fit(newdata.iloc[:, 0:2])
ax.scatter(newdata[0], newdata[1], c=k_means.labels_, alpha=0.5)
# -
| scripts/nVision_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Ruby 2.3.3
# language: ruby
# name: ruby
# ---
# # Sequential Circuits | Ring Counter
# All Counters are in the module **Counter**. All methods available can be found in the documentation.
# load gemfile ruby_circuits.rb
require '../../../../../lib/ruby_ciruits'
# +
# Create a clock instance - 50Hz frequency
clock = Clock.new(1, 50)
clock.start()
# create an enable connector
enable = Connector.new(1)
# -
# Initializing RingCounter with 8 bits and clock
rc = Counter::Ring.new(8, clock)
# Initial State
puts(rc.state())
# Triggering the counter 8 times
8.times do
  rc.trigger()
  puts(rc.state())
end
# Kill the clock after use to avoid overloading
clock.kill
| examples/jupyter_notebook/digital_circuits/sequential/Counters/RingCounter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/BrunaKuntz/Python-Curso-em-Video/blob/main/Mundo03/Desafio073.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="xolHFjDi_Hj4"
#
# # **Challenge 073 (Desafio 073)**
# **Python 3 - World 3**
#
# Description: Create a tuple filled with the 20 best-placed teams of the Brazilian Football Championship (Campeonato Brasileiro) table, in order of standing. Then show:
# a) The first 5 teams.
# b) The last 4 teams.
# c) The teams in alphabetical order.
# d) In which position the Chapecoense team is.
#
# Link: https://www.youtube.com/watch?v=RexybLcGewA&t=3s
# + id="8Lylf4Su-6aB"
brasileirao = ('Fortaleza', 'Athletico-PR', 'Atlético-GO', 'Bragantino', 'Bahia', 'Fluminense', 'Palmeiras', 'Flamengo', 'Atlético-MG', 'Corinthians', 'Ceará SC', 'Santos', 'Cuiabá', 'Sport Recife', 'São Paulo', 'Juventude', 'Internacional', 'Grêmio', 'América-MG', 'Chapecoense')
print("="*30)
print(f'The first 5 teams are: {brasileirao[0:5]}')
print("="*30)
print(f'The last 4 teams are: {brasileirao[-4:]}')
print("="*30)
print(f'Teams in alphabetical order: {sorted(brasileirao)}')
print("="*30)
print(f'Chapecoense is in position {brasileirao.index("Chapecoense")+1}')
| Mundo03/Desafio073.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Tutorial - Easy Embeddings
# > Using EasyWord, Stacked, and Document Embeddings in the AdaptNLP framework
# ## Finding Available Models with Hubs
#
# We can search for available models to utilize with Embeddings with the `HFModelHub` and `FlairModelHub`. We'll see an example below:
from adaptnlp import (
EasyWordEmbeddings,
EasyStackedEmbeddings,
EasyDocumentEmbeddings,
HFModelHub,
FlairModelHub,
DetailLevel
)
hub = HFModelHub()
models = hub.search_model_by_name('gpt2'); models
# For this tutorial we'll use the `gpt2` base model:
model = models[-1]; model
# ## Producing Embeddings using `EasyWordEmbeddings`
# First we'll use some basic example text:
example_text = "This is Albert. My last name is Einstein. I like physics and atoms."
# And then instantiate our embeddings tagger:
embeddings = EasyWordEmbeddings()
# Now let's run our `gpt2` model we grabbed earlier to generate some `EmbeddingResult` objects:
res = embeddings.embed_text(example_text, model_name_or_path=model)
# The result of this is a variety of filtered results for your disposal. The default level of information (`DetailLevel.Low`) will return an ordered dictionary with the keys of:
# - `inputs`, an array of your original sentence
# - `sentence_embeddings`, any sentence_embeddings you may have (if applicable) as an ordered dictionary of (sentence, embeddings)
# - `token_embeddings`, a similar `OrderedDict` to the `sentence_embeddings`, where the key `0` will be the embeddings of the first word, `1` is the second, and so forth:
res['inputs']
# To grab our sentence or token embeddings, simply look it up by its key:
#
# > Note: Only `StackedEmbeddings` will have sentence embeddings
res['token_embeddings'][0].shape
# Using different models is extremely easy to do. Let's try using BERT embeddings with the `bert-base-cased` model instead.
#
# Rather than passing in a `HFModelResult` or `FlairModelResult`, we can also just pass in the raw string name of the model as well:
res = embeddings.embed_text(example_text, model_name_or_path='bert-base-cased')
# Just like in the last example, we can look at the embeddings in the same way:
res['token_embeddings'][0].shape
# We can also convert our output to an easy to use dictionary, which can have a bit more information. First let's not filter our results by passing in `detail_level = None`:
res = embeddings.embed_text(example_text,
model_name_or_path='bert-base-cased',
detail_level=None)
res
# We can see that the result is now an `EmbeddingResult`, which has all the information we key'd with as available attributes:
res.inputs
# If we want to filter the object ourselves and convert it to a dictionary, we can use the `to_dict()` function:
o = res.to_dict()
print(o['inputs'], o['token_embeddings'][0].shape)
# You can specify the level of detail wanted by passing in "low", "medium", or "high" to the `to_dict` method, or use the convenience `DetailLevel` class:
res_dict = res.to_dict(DetailLevel.Medium)
print(res_dict['inputs'], res_dict['token_embeddings'][0].shape)
# Each level returns more data from the outputs:
# - Available at all levels:
# - `original_sentence`: The original sentence
# - `tokenized_sentence`: The tokenized sentence
# - `sentence_embeddings`: Embeddings from the actual sentence (if available)
# - `token_embeddings`: Concatenated embeddings from all the tokens passed
# - `DetailLevel.Low` (or 'low'):
# - Returns information available at all levels
# - `DetailLevel.Medium` (or 'medium'):
# - Everything from `DetailLevel.Low`
# - For each token a dictionary of the embeddings and word index is added
# - `DetailLevel.High` (or 'high'):
# - Everything from `DetailLevel.Medium`
# - This will also include the original Flair `Sentence` result from the model
# Let's look at a final example with roBERTa embeddings:
res = embeddings.embed_text(example_text, model_name_or_path="roberta-base")
# And our generated embeddings:
#hide_input
print(f'Original text: {res["inputs"]}')
print(f'Model: roberta-base')
print(f'Embedding: {res["token_embeddings"][0].shape}')
# ## Producing Stacked Embeddings with `EasyStackedEmbeddings`
# `EasyStackedEmbeddings` allows you to use a variable number of language models to produce our embeddings shown above. For our example we'll combine the `bert-base-cased` and `distilbert-base-cased` models.
#
# First we'll instantiate our `EasyStackedEmbeddings`:
embeddings = EasyStackedEmbeddings("bert-base-cased", "distilbert-base-cased")
# And then generate our stacked word embeddings through our `embed_text` function:
res = embeddings.embed_text(example_text)
# We can see our results below:
#hide_input
print(f'Original text: {res["inputs"]}')
print('Model: bert-base-cased + distilbert-base-cased')
print(f'Embedding: {res["token_embeddings"][0].shape}')
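Conceptually, a stacked embedding is just the concatenation of each model's vector for a given token. With hypothetical per-model vectors (the 768-d sizes match the BERT/DistilBERT base models, but the values here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
bert_vec = rng.normal(size=768)      # stand-in for a bert-base-cased token embedding
distil_vec = rng.normal(size=768)    # stand-in for a distilbert-base-cased token embedding

stacked = np.concatenate([bert_vec, distil_vec])  # one combined vector per token
print(stacked.shape)  # (1536,)
```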
# ## Document Embeddings with `EasyDocumentEmbeddings`
#
# Similar to the `EasyStackedEmbeddings`, `EasyDocumentEmbeddings` allows you to pool the embeddings from multiple models together with `embed_pool` and `embed_rnn`.
#
# We'll use our `bert-base-cased` and `distilbert-base-cased` models again:
embeddings = EasyDocumentEmbeddings("bert-base-cased", "distilbert-base-cased")
# This time we will use the `embed_pool` method to generate `DocumentPoolEmbeddings`. These do an average over all the word embeddings in a sentence:
res = embeddings.embed_pool(example_text)
# As a result rather than having embeddings by token, we have embeddings *by document*
res['inputs']
res['token_embeddings'][0]
#hide_input
print(f'Original text: {res["inputs"]}')
print('Model: bert-base-cased + distilbert-base-cased')
print(f'Embedding: {res["token_embeddings"][0].shape}')
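The averaging that document pool embeddings perform can be sketched with plain numpy (hypothetical token embeddings; not AdaptNLP's actual implementation):

```python
import numpy as np

token_embeddings = np.random.default_rng(0).normal(size=(15, 1536))  # 15 tokens, stacked vectors
doc_embedding = token_embeddings.mean(axis=0)  # average over tokens -> one document vector
print(doc_embedding.shape)  # (1536,)
```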
# We can also generate `DocumentRNNEmbeddings` as well. Document RNN Embeddings run an RNN over all the words in the sentence and use the final state of the RNN as the embedding.
#
# First we'll call `embed_rnn`:
res = embeddings.embed_rnn(example_text)
# And then look at our generated embeddings:
#hide_input
print(f'Original text: {res["inputs"]}')
print('Model: bert-base-cased + distilbert-base-cased')
print(f'Embedding: {res["token_embeddings"][0].shape}')
| nbs/04a_tutorial.embeddings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbsphinx="hidden"
# # Realization of Non-Recursive Filters
#
# *This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing.*
# -
# ## Segmented Convolution
#
# In many applications one of the signals of a convolution is much longer than the other. For instance when filtering a speech signal $x_L[k]$ of length $L$ with a room impulse response $h_N[k]$ of length $N \ll L$. In such cases the [fast convolution](fast_convolution.ipynb), as introduced before, does not bring a benefit since both signals have to be zero-padded to a total length of at least $N+L-1$. Applying the fast convolution may then even be impossible in terms of memory requirements or overall delay. The filtering of a signal which is captured in real-time is also not possible by the fast convolution.
#
# In order to overcome these limitations, various techniques have been developed that perform the filtering on limited portions of the signals. These portions are known as partitions, segments or blocks. The respective algorithms are termed as *segmented* or *block-based* algorithms. The following section introduces two techniques for the segmented convolution of signals. The basic concept of these is to divide the convolution $y[k] = x_L[k] * h_N[k]$ into multiple convolutions operating on (overlapping) segments of the signal $x_L[k]$.
# ### Overlap-Add Algorithm
#
# The [overlap-add algorithm](https://en.wikipedia.org/wiki/Overlap%E2%80%93add_method) is based on splitting the signal $x_L[k]$ into non-overlapping segments $x_p[k]$ of length $P$
#
# \begin{equation}
# x_L[k] = \sum_{p = 0}^{L/P - 1} x_p[k - p \cdot P]
# \end{equation}
#
# where the segments $x_p[k]$ are defined as
#
# \begin{equation}
# x_p[k] = \begin{cases} x_L[k + p \cdot P] & \text{ for } k=0,1,\dots,P-1 \\ 0 & \text{ otherwise} \end{cases}
# \end{equation}
#
# Note that $x_L[k]$ might have to be zero-padded so that its total length is a multiple of the segment length $P$. Introducing the segmentation of $x_L[k]$ into the convolution yields
#
# \begin{align}
# y[k] &= x_L[k] * h_N[k] \\
# &= \sum_{p = 0}^{L/P - 1} x_p[k - p \cdot P] * h_N[k] \\
# &= \sum_{p = 0}^{L/P - 1} y_p[k - p \cdot P]
# \end{align}
#
# where $y_p[k] = x_p[k] * h_N[k]$. This result states that the convolution of $x_L[k] * h_N[k]$ can be split into a series of convolutions $y_p[k]$ operating on the samples of one segment only. The length of $y_p[k]$ is $N+P-1$. The result of the overall convolution is given by summing up the results from the segments shifted by multiples of the segment length $P$. This can be interpreted as an overlapped superposition of the results from the segments, as illustrated in the following diagram
#
# 
#
# The overall procedure is denoted by the name *overlap-add* technique. The convolutions $y_p[k] = x_p[k] * h_N[k]$ can be realized efficiently by the [fast convolution](fast_convolution.ipynb) using zero-padding and fast Fourier transformations (FFTs) of length $M \geq P+N-1$.
#
# A drawback of the overlap-add technique is that the next input segment is required to compute the result for the actual segment of the output. For real-time applications this introduces an algorithmic delay of one segment.
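The per-segment fast convolutions mentioned above can be sketched compactly; the following helper (not part of the original notebook) implements the whole overlap-add procedure with FFT-based segment convolutions and checks it against `np.convolve`:

```python
import numpy as np

def overlap_add_fft(x, h, P):
    """Convolve x with h by overlap-add, using FFTs of length M >= P + N - 1."""
    N = len(h)
    M = P + N - 1
    H = np.fft.rfft(h, M)                     # transform of h, computed once
    L = int(np.ceil(len(x) / P)) * P          # zero-pad x to a multiple of P
    x = np.concatenate((x, np.zeros(L - len(x))))
    y = np.zeros(L + N - 1)
    for p in range(L // P):
        Xp = np.fft.rfft(x[p*P:(p+1)*P], M)   # fast convolution of one segment
        y[p*P:p*P + M] += np.fft.irfft(Xp * H, M)  # overlapped superposition
    return y

x = np.random.default_rng(0).normal(size=64)
h = np.random.default_rng(1).normal(size=8)
assert np.allclose(overlap_add_fft(x, h, 16)[:64 + 8 - 1], np.convolve(x, h))
```

Note that the transform of the impulse response is computed only once and reused for every segment.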
# #### Example
#
# The following example illustrates the overlap-add algorithm by showing the (convolved) segments and the overall result.
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as sig
L = 64 # length of input signal
N = 8 # length of impulse response
P = 16 # length of segments
# generate input signal
x = sig.triang(L)
# generate impulse response
h = sig.triang(N)
# overlap-add convolution
xp = np.zeros((L//P, P))
yp = np.zeros((L//P, N+P-1))
y = np.zeros(L+P-1)
for p in range(L//P):
    xp[p, :] = x[p*P:(p+1)*P]
    yp[p, :] = np.convolve(xp[p, :], h, mode='full')
    y[p*P:(p+1)*P+N-1] += yp[p, :]
y = y[0:N+L]
# plot signals
plt.figure(figsize = (10,2))
plt.subplot(121)
plt.stem(x)
for n in np.arange(L//P)[::2]:
    plt.axvspan(n*P, (n+1)*P-1, facecolor='g', alpha=0.5)
plt.title(r'Signal $x[k]$ and segments')
plt.xlabel(r'$k$')
plt.ylabel(r'$x[k]$')
plt.axis([0, L, 0, 1])
plt.subplot(122)
plt.stem(h)
plt.title(r'Impulse response $h[k]$')
plt.xlabel(r'$k$')
plt.ylabel(r'$h[k]$')
plt.axis([0, L, 0, 1])
for p in np.arange(L//P):
    plt.figure(figsize=(10, 2))
    plt.stem(np.concatenate((np.zeros(p*P), yp[p, :])))
    plt.title(r'Result of segment $p=%d$' % (p))
    plt.xlabel(r'$k$')
    plt.ylabel(r'$y_%d[k - %d P]$' % (p, p))
    plt.axis([0, L+P, 0, 4])
plt.figure(figsize = (10,2))
plt.stem(y)
plt.title(r'Result $y[k] = x[k] * h[k]$')
plt.xlabel(r'$k$')
plt.ylabel(r'$y[k]$')
plt.axis([0, L+P, 0, 4]);
# -
# **Exercises**
#
# * Change the length `N` of the impulse response and the length `P` of the segments. What changes?
# * What influence have these two lengths on the numerical complexity of the overlap-add algorithm?
#
# Solution: The parameters `N` and `P` influence the overlap in the output and the total number of segments. The number of overlapping samples of two consecutive output segments $y_p[k]$ and $y_{p+1}[k]$ is given as $N-1$, and the total number of segments as $\frac{L}{P}$. The segmented convolution requires $\frac{L}{P}$ linear convolutions of length $P+N-1$ each. The numerical complexity is mainly determined by the overall number of multiplications which is given as $\frac{L}{P} (P+N-1)^2$. For fixed $L$ and $N$, the optimum segment length is computed by finding the minimum in terms of multiplications. It is given as $P=N-1$.
# ### Overlap-Save Algorithm
#
# The [overlap-save](https://en.wikipedia.org/wiki/Overlap%E2%80%93save_method) algorithm, also known as *overlap-discard algorithm*, follows a different strategy as the overlap-add technique introduced above. It is based on an overlapping segmentation of the input $x_L[k]$ and application of the periodic convolution for the individual segments.
#
# Let's take a closer look at the result of the periodic convolution $x_p[k] \circledast h_N[k]$, where $x_p[k]$ denotes a segment of length $P$ of the input signal and $h_N[k]$ the impulse response of length $N$. The result of a linear convolution $x_p[k]* h_N[k]$ would be of length $P + N -1$. The result of the periodic convolution of period $P$ for $P > N$ would suffer from a circular shift (time aliasing) and superposition of the last $N-1$ samples to the beginning. Hence, the first $N-1$ samples are not equal to the result of the linear convolution. However, the remaining $P - N + 1$ samples are.
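The aliasing behaviour described here is easy to verify numerically (a quick numpy aside, not part of the original notebook): the last $P-N+1$ samples of the periodic convolution match the linear convolution.

```python
import numpy as np

P, N = 24, 8
rng = np.random.default_rng(0)
x = rng.normal(size=P)   # one input segment of length P
h = rng.normal(size=N)   # impulse response of length N

lin = np.convolve(x, h)  # linear convolution, length P + N - 1
# periodic convolution of period P via FFTs of length P
circ = np.fft.irfft(np.fft.rfft(x, P) * np.fft.rfft(h, P), P)

# samples N-1 ... P-1 of the periodic convolution equal the linear convolution
assert np.allclose(circ[N-1:], lin[N-1:P])
```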
#
# This motivates to split the input signal $x_L[k]$ into overlapping segments of length $P$ where the $p$-th segment overlaps its preceding $(p-1)$-th segment by $N-1$ samples
#
# \begin{equation}
# x_p[k] = \begin{cases}
# x_L[k + p \cdot (P-N+1) - (N-1)] & \text{ for } k=0,1, \dots, P-1 \\
# 0 & \text{ otherwise}
# \end{cases}
# \end{equation}
#
# The part of the circular convolution $x_p[k] \circledast h_N[k]$ of one segment $x_p[k]$ with the impulse response $h_N[k]$ that is equal to the linear convolution of both is given as
#
# \begin{equation}
# y_p[k] = \begin{cases}
# x_p[k] \circledast h_N[k] & \text{ for } k=N-1, N, \dots, P-1 \\
# 0 & \text{ otherwise}
# \end{cases}
# \end{equation}
#
# The output $y[k]$ is simply the concatenation of the $y_p[k]$
#
# \begin{equation}
# y[k] = \sum_{p=0}^{\frac{L}{P-N+1} - 1} y_p[k - p \cdot (P-N+1) + (N-1)]
# \end{equation}
#
# The overlap-save algorithm is illustrated in the following diagram
#
# 
#
# For the first segment $x_0[k]$, $N-1$ zeros have to be appended to the beginning of the input signal $x_L[k]$ for the overlapped segmentation. From the result of the periodic convolution $x_p[k] \circledast h_N[k]$ the first $N-1$ samples are discarded, the remaining $P - N + 1$ are copied to the output $y[k]$. This is indicated by the alternative notation *overlap-discard* used for the technique. The periodic convolution can be realized efficiently by a FFT/IFFT of length $P$.
# #### Example
#
# The following example illustrates the overlap-save algorithm by showing the results of the periodic convolutions of the segments. The discarded parts are indicated by the red background.
# +
L = 64 # length of input signal
N = 8 # length of impulse response
P = 24 # length of segments
# generate input signal
x = sig.triang(L)
# generate impulse response
h = sig.triang(N)
# overlap-save convolution
nseg = (L+N-1)//(P-N+1) + 1
x = np.concatenate((np.zeros(N-1), x, np.zeros(P)))
xp = np.zeros((nseg, P))
yp = np.zeros((nseg, P))
y = np.zeros(nseg*(P-N+1))
for p in range(nseg):
    xp[p, :] = x[p*(P-N+1):p*(P-N+1)+P]
    yp[p, :] = np.fft.irfft(np.fft.rfft(xp[p, :]) * np.fft.rfft(h, P))
    y[p*(P-N+1):p*(P-N+1)+P-N+1] = yp[p, N-1:]
y = y[0:N+L]
plt.figure(figsize = (10,2))
plt.subplot(121)
plt.stem(x[N-1:])
plt.title(r'Signal $x[k]$')
plt.xlabel(r'$k$')
plt.ylabel(r'$x[k]$')
plt.axis([0, L, 0, 1])
plt.subplot(122)
plt.stem(h)
plt.title(r'Impulse response $h[k]$')
plt.xlabel(r'$k$')
plt.ylabel(r'$h[k]$')
plt.axis([0, L, 0, 1])
for p in np.arange(nseg):
    plt.figure(figsize=(10, 2))
    plt.stem(yp[p, :])
    plt.axvspan(0, N-1+.5, facecolor='r', alpha=0.5)
    plt.title(r'Result of periodic convolution of $x_%d[k]$ and $h_N[k]$' % (p))
    plt.xlabel(r'$k$')
    plt.axis([0, L+P, 0, 4])
plt.figure(figsize = (10,2))
plt.stem(y)
plt.title(r'Result $y[k] = x[k] * h[k]$')
plt.xlabel(r'$k$')
plt.ylabel(r'$y[k]$')
plt.axis([0, L+P, 0, 4]);
# -
# **Exercise**
#
# * Change the length `N` of the impulse response and the length `P` of the segments. What changes?
# * How many samples of the output signal $y[k]$ are computed per segment for a particular choice of these two values?
# * What would be a good choice for the segment length `P` with respect to the length `N` of the impulse response?
#
# Solution: Decreasing the segment length $P$ or increasing the length of the impulse response $N$ decreases the number of valid output samples per segment which is given as $P-N+1$. The computation of $L$ output samples requires $\frac{L}{P-N+1}$ cyclic convolutions of length $P$ each. Regarding the total number of multiplications, an optimal choice for the segment length is $P = 2 N - 2$.
# ### Practical Aspects and Extensions
#
# * For both the overlap-add and overlap-save algorithm the length $P$ of the segments influences the lengths of the convolutions, FFTs and the number of output samples per segment. The segment length is often chosen as
#
# * $P=N$ for overlap-add and
# * $P = 2 N$ for overlap-save.
#
# For both algorithms this requires FFTs of length $2 N$ to compute $P$ output samples. The overlap-add algorithm requires $P$ additional additions per segment in comparison to overlap-save.
#
# * For real-valued signals $x_L[k]$ and impulse responses $h_N[k]$ real-valued FFTs lower the computational complexity significantly. As alternative, the $2 N$ samples in the FFT can be distributed into the real and complex part of a FFT of length $N$ [[Zölzer](../index.ipynb#Literature)].
#
# * The impulse response can be changed in each segment in order to simulate time-variant linear systems. This is often combined with an overlapping computation of the output in order to avoid artifacts due to instationarities.
#
# * For long impulse responses $h_N[k]$ or low-delay applications, algorithms have been developed which base on an additional segmentation of the impulse response. This is known as *partitioned convolution*.
| Lectures_Advanced-DSP/nonrecursive_filters/segmented_convolution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Video Game Recommendation Engine (Python)
#
# >A tutorial showing how to use sklearn's NLP functions to produce a content-based recommendation system.
#
# - toc: true
# - badges: true
# - comments: true
# - categories: [sklearn, NLP, recommendation engine, data cleaning, python]
# - image: images/games.png
#
#
# ### Overview
#
# Welcome to my project on creating a video game recommendation system. Many streaming services utilize recommendation systems to increase customer engagement with their platform. I wanted to create a similar system for video games to display new games for users to play. In this project, we will be using a content-based recommender system. Therefore, we will base our recommendations on titles, publishers, descriptions, genres, and tags that different items share. During this project, I will be utilizing the packages Pandas, Numpy, and Sklearn. These are all standard packages for data manipulation, mathematics, and machine learning applications.
#
# Link for Dataset: https://www.kaggle.com/trolukovich/steam-games-complete-dataset
import pandas as pd
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.feature_extraction.text import CountVectorizer
Games = pd.read_csv('~/Downloads/steam_games 2.csv')
# ### Background
# The dataset features 20 columns, many of which will not be of use to this type of recommendation system. It also contains 40,833 unique video games, each with its own characteristics. The recommendation system is designed to suit the needs of novice gamers, so we will exclude free games and focus on Triple-A titles. Triple-A games are video games produced or developed by a major publisher that allocated a large budget for both development and marketing; many novice gamers will be familiar with Triple-A games rather than small indie titles. Most Triple-A titles retail at $59.99, but some games arrive on the Steam platform months or years after their console release at a discount. Therefore we will limit our dataset to titles priced between $19.99 and $59.99.
Games.head(3)
# ### Step One: Filtering the price
# The original price column is the column we intend to filter on, but there is a problem to sort out first: we cannot filter it numerically because it is stored as text. We fix this by treating the column as a string, removing the dollar sign through string slicing, and then converting the column to a numeric type. Now we can apply the filter. The total number of unique games in the dataset is now 4,338.
Games.original_price
Games['original_price'] = Games['original_price'].str[1:]
Games['original_price'] = pd.to_numeric(Games['original_price'],errors='coerce')
Games = Games[(Games['original_price'] >= 19.99) & (Games['original_price'] <= 59.99)]
Games.shape
# ### Step Two: Choosing columns to use in the recommendation system
# When choosing which columns to put in the recommendation system, we should be mindful of the characteristics gamers value. The developer variable is important to include since developers often have the same team working on different games, so games produced by the same developer tend to share a similar style of gameplay. The genre variable provides a broad grouping of games with similarities in form, style, or subject matter. The popular tags variable is an in-depth description of different gaming characteristics. The game details variable lists a game's online offerings, such as whether a game is single-player or multiplayer. The last variable is the name of the game, which is valuable because sequels and prequels will be included in the recommendation.
Games.head(3)
Games = Games[['genre','game_details','popular_tags','developer','name']]
# ### Step Three: Drop all rows with null values
# Usually, one of the first steps in any project is to eliminate null values; here, however, it was important to wait. Since we have already narrowed the dataset to the useful columns, we now drop only the rows with null values in the columns we have chosen. After eliminating null values, the total number of unique games in the dataset is 3,999. We also add a new column labeled Game_ID, which assigns a unique numerical value to each game.
Games.head(3)
Games.dropna(inplace = True)
Games.shape
Games['Game_ID'] = range(len(Games))  # one sequential ID per remaining game (3,999 total)
Games.isnull().values.any()
Games = Games.reset_index()
# ### Step Four: Combine selected column's values into string
# Our next step is going to be creating a function that compiles all data in each column selected into one giant string. In order to do so, we are going to make an empty list called important features and then append the values of the desired columns. Then we create a column called important features, where we call the function on the dataset.
def get_important_features(data):
important_features = []
for i in range(0, data.shape[0]):
        important_features.append(data['name'][i] + ' ' + data['developer'][i] + ' ' + data['popular_tags'][i] + ' ' + data['genre'][i] + ' ' + data['game_details'][i])
return important_features
Games['important_features'] = get_important_features(Games)
Games.important_features.head(3)
# ### Step Five: Assemble similarity matrix
# First, we use the count vectorizer function to transform the given text into a vector. The resulting matrix holds the frequency of each word in a string: for example, for the string 'Action, Action, Adventure', the matrix will record the word 'Action' with a frequency of two. Then we use the cosine similarity function to measure the correlation among the different games. This function produces a matrix with the correlation between each pair of games, with values ranging from zero to one: a value closer to one indicates a good recommendation, and a value closer to zero a poor one. The diagonal of ones reflects a perfect correlation, because there each game is compared with itself.
cm = CountVectorizer().fit_transform(Games['important_features'])
cs = cosine_similarity(cm)
print(cs)
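# To make the mechanics concrete, here is a hypothetical three-document corpus (not from the games dataset) run through the same two functions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ['Action Action Adventure', 'Action Adventure', 'Racing Sports']
counts = CountVectorizer().fit_transform(docs)  # word-frequency matrix
sim = cosine_similarity(counts)                 # pairwise similarity in [0, 1]
print(sim.round(2))                             # diagonal is all ones
```

# The first two strings share their whole vocabulary, so their score is close to one; the third shares no words with them, so its scores against them are zero.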
# ### Step Six: Use the Recommendation System
# Our last step is to enter the name of the game we wish to get recommendations for; in this case, I have chosen the game Doom Eternal. We create a new object called title_id, where we obtain the Game_ID value for Doom Eternal that we assigned to each title in step three. We then build an enumerated list containing the similarity score between Doom Eternal and every other game, and sort the scores in descending order to retrieve the games with the highest similarity. I have chosen to display the top seven games recommended to us based on the characteristics of Doom Eternal.
title = 'DOOM Eternal'
title_id = Games[Games.name == title]['Game_ID'].values[0]
scores = list(enumerate(cs[title_id]))
sorted_scores = sorted(scores, key = lambda x:x[1], reverse = True)
sorted_scores = sorted_scores[1:]
j = 0
print('The 7 most recommended games to', title, 'are:\n')
for item in sorted_scores:
game_title = Games[Games.Game_ID == item[0]]['name'].values[0]
print(j+1, game_title)
j = j+1
if j > 6:
break
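# The same ranking step can be sketched with NumPy's `argsort` on a toy similarity matrix (the names and scores below are made up purely for illustration):

```python
import numpy as np

names = ['DOOM Eternal', 'DOOM', 'DUSK', 'FIFA 21']
cs_toy = np.array([[1.0, 0.9, 0.7, 0.1],
                   [0.9, 1.0, 0.6, 0.1],
                   [0.7, 0.6, 1.0, 0.2],
                   [0.1, 0.1, 0.2, 1.0]])
title_idx = 0                                   # recommend for 'DOOM Eternal'
top = np.argsort(cs_toy[title_idx])[::-1][1:3]  # drop the self-match, keep top 2
print([names[i] for i in top])                  # → ['DOOM', 'DUSK']
```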
# ### Conclusion
#
# When observing the top seven results, we can see the similarities between the games: the more column values a game shares, the higher its ranking. For instance, Doom 3: BFG Edition and DOOM have similarities in every column, while the bottom four recommendations share values only in the genre, game details, and popular tags columns. Having personally played five of the seven recommended games, I would be happy to have these games recommended to me based on my interest in DOOM Eternal.
Games = Games.set_index('name')
Games.loc[['DOOM Eternal','Doom 3: BFG Edition','DOOM','Dead Space™ 2','DUSK','Max Payne 3','Unreal Tournament 3 Black','Crysis 2 - Maximum Edition'],
['genre','game_details','popular_tags','developer']]
| _notebooks/2021-09-21-Video-Game-Recommendation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from Ultrasonic import *
import Car_Motor
car = Car_Motor.Car_Motor()
sonic = UltraSonic()
car.Car_Run(50,50)
time.sleep(1)
car.Car_Stop()
time.sleep(0.3)
# keep driving while the measured distance is at least 5; stop when an obstacle is closer
try:
while True:
distance = sonic.Distance_test()
if distance < 5:
car.Car_Stop()
break
else:
car.Car_Run(25,100)
time.sleep(0.03)
except KeyboardInterrupt:
pass
car.Car_Stop()
print("Parking Ending")
GPIO.cleanup()
# -
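# The stop-on-threshold loop above can be checked without hardware by feeding it a list of fake sensor readings (this helper is hypothetical and only mirrors the `Distance_test`/`Car_Stop` logic, it is not part of the robot's API):

```python
def readings_until_stop(distances, threshold=5):
    """Count how many distance readings are consumed before the car stops."""
    for i, d in enumerate(distances, start=1):
        if d < threshold:
            return i            # obstacle closer than the threshold: stop here
    return len(distances)       # never stopped within the given readings

print(readings_until_stop([20, 12, 7, 4, 3]))  # → 4
```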
car.Car_Run(50,50)
time.sleep(1)
car.Car_Stop()
time.sleep(0.3)
car.Car_Run(25,100)
time.sleep(1.7)
car.Car_Stop()
| tmp_project/ipynb_test/main_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="NpJd3dlOCStH"
# <a href="https://colab.research.google.com/github/Startup-Data/SatLunNeh/blob/master/AI%20Parts/Music%20Part/ddsp/ddsp/colab/demos/train_autoencoder.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="hMqWDc_m6rUC"
#
# ##### Copyright 2020 Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
#
#
#
#
# + id="VNhgka4UKNjf"
# Copyright 2020 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# + [markdown] id="SpXo6phTiOQM"
# # Train a DDSP Autoencoder on GPU
#
# This notebook demonstrates how to install the DDSP library and train it for synthesis based on your own data using our command-line scripts. If run inside of Colab, it will automatically use a free Google Cloud GPU.
#
# At the end, you'll have a custom-trained checkpoint that you can download to use with the [DDSP Timbre Transfer Colab](https://colab.research.google.com/github/magenta/ddsp/blob/master/ddsp/colab/demos/timbre_transfer.ipynb).
#
# <img src="https://storage.googleapis.com/ddsp/additive_diagram/ddsp_autoencoder.png" alt="DDSP Autoencoder figure" width="700">
#
# + [markdown] id="wXjcauVRB48S"
# **Note that we prefix bash commands with a `!` inside of Colab, but you would leave them out if running directly in a terminal.**
# + [markdown] id="Vn7CQ4GQizHy"
# ## Install Dependencies
#
# First we install the required dependencies with `pip`.
# + cellView="both" id="VxPuPR0j5Gs7"
# !pip install -qU ddsp[data_preparation]==1.4.0
# Initialize global path for using google drive.
DRIVE_DIR = ''
# + [markdown] id="w0fVn8yUJl_v"
# ## Setup Google Drive (Optional, Recommended)
#
# This notebook requires uploading audio and saving checkpoints. While you can do this with direct uploads / downloads, it is recommended to connect to your google drive account. This will enable faster file transfer, and regular saving of checkpoints so that you do not lose your work if the colab kernel restarts (common for training more than 12 hours).
# + [markdown] id="L6MXUbL6KeMn"
# #### Login and mount your drive
#
# This will require an authentication code. You should then be able to see your drive in the file browser on the left panel.
# + id="m33xuTjEKazJ"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="a4vmxpj1LC7m"
# #### Set your base directory
# * In drive, put all of the audio (.wav, .mp3) files with which you would like to train in a single folder.
# * Typically works well with 10-20 minutes of audio from a single monophonic source (also, one acoustic environment).
# * Use the file browser in the left panel to find a folder with your audio, right-click **"Copy Path", paste below**, and run the cell.
# + cellView="form" id="A0bK6P9DMBTb"
#@markdown (ex. `/content/drive/My Drive/...`) Leave blank to skip loading from Drive.
DRIVE_DIR = '' #@param {type: "string"}
import os
assert os.path.exists(DRIVE_DIR)
print('Drive Folder Exists:', DRIVE_DIR)
# + [markdown] id="FELlizMtIxCH"
# ## Make directories to save model and data
# + id="Qd22WxEQI3FV"
AUDIO_DIR = 'data/audio'
AUDIO_FILEPATTERN = AUDIO_DIR + '/*'
# !mkdir -p $AUDIO_DIR
if DRIVE_DIR:
SAVE_DIR = os.path.join(DRIVE_DIR, 'ddsp-solo-instrument')
else:
SAVE_DIR = '/content/models/ddsp-solo-instrument'
# !mkdir -p "$SAVE_DIR"
# + [markdown] id="fb4YD8woYD1H"
# ## Prepare Dataset
#
# + [markdown] id="uNhH7nEbX2db"
# #### Upload training audio
#
# Upload audio files to use for training your model. Uses `DRIVE_DIR` if connected to drive, otherwise prompts local upload.
# + id="itVKEzF6m3rY"
import glob
import os
from ddsp.colab import colab_utils
if DRIVE_DIR:
mp3_files = glob.glob(os.path.join(DRIVE_DIR, '*.mp3'))
wav_files = glob.glob(os.path.join(DRIVE_DIR, '*.wav'))
audio_files = mp3_files + wav_files
else:
audio_files, _ = colab_utils.upload()
for fname in audio_files:
target_name = os.path.join(AUDIO_DIR,
os.path.basename(fname).replace(' ', '_'))
print('Copying {} to {}'.format(fname, target_name))
# !cp "$fname" $target_name
# + [markdown] id="g_XVFoN2YOat"
# ### Preprocess raw audio into TFRecord dataset
#
# We need to do some preprocessing on the raw audio you uploaded to get it into the correct format for training. This involves turning the full audio into short (4-second) examples, inferring the fundamental frequency (or "pitch") with [CREPE](http://github.com/marl/crepe), and computing the loudness. These features will then be stored in a sharded [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) file for easier loading. Depending on the amount of input audio, this process usually takes a few minutes.
#
# * (Optional) Transfer dataset from drive. If you've already created a dataset from a previous run, this cell will skip the dataset creation step and copy the dataset from `$DRIVE_DIR/data`.
# + id="MsnkAHyHVrCW"
import glob
import os
TRAIN_TFRECORD = 'data/train.tfrecord'
TRAIN_TFRECORD_FILEPATTERN = TRAIN_TFRECORD + '*'
# Copy dataset from drive if dataset has already been created.
drive_data_dir = os.path.join(DRIVE_DIR, 'data')
drive_dataset_files = glob.glob(drive_data_dir + '/*')
if DRIVE_DIR and len(drive_dataset_files) > 0:
# !cp "$drive_data_dir"/* data/
else:
# Make a new dataset.
if not glob.glob(AUDIO_FILEPATTERN):
raise ValueError('No audio files found. Please use the previous cell to '
'upload.')
# !ddsp_prepare_tfrecord \
# --input_audio_filepatterns=$AUDIO_FILEPATTERN \
# --output_tfrecord_path=$TRAIN_TFRECORD \
# --num_shards=10 \
# --alsologtostderr
# Copy dataset to drive for safe-keeping.
if DRIVE_DIR:
# !mkdir "$drive_data_dir"/
print('Saving to {}'.format(drive_data_dir))
# !cp $TRAIN_TFRECORD_FILEPATTERN "$drive_data_dir"/
# + [markdown] id="d4toX-D-AYZL"
# ### Save dataset statistics for timbre transfer
#
# Quantile normalization helps match loudness of timbre transfer inputs to the
# loudness of the dataset, so let's calculate it here and save in a pickle file.
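# As a rough illustration of what quantile normalization achieves (the actual `colab_utils` implementation differs; this toy version only matches the median and interquartile range of a loudness curve to stored statistics):

```python
import numpy as np

def match_quantiles(source, target_stats):
    """Affinely map `source` so its 25/50/75th percentiles match the targets."""
    s_lo, s_med, s_hi = np.percentile(source, (25, 50, 75))
    t_lo, t_med, t_hi = target_stats
    scale = (t_hi - t_lo) / (s_hi - s_lo)     # match the interquartile range
    return (source - s_med) * scale + t_med   # and center on the target median

# Fake loudness values in dB, remapped onto hypothetical dataset statistics.
loudness = np.random.default_rng(0).normal(-30.0, 6.0, 1000)
matched = match_quantiles(loudness, (-60.0, -40.0, -20.0))
```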
# + id="Bp_c8P0xApY6"
from ddsp.colab import colab_utils
import ddsp.training
data_provider = ddsp.training.data.TFRecordProvider(TRAIN_TFRECORD_FILEPATTERN)
dataset = data_provider.get_dataset(shuffle=False)
PICKLE_FILE_PATH = os.path.join(SAVE_DIR, 'dataset_statistics.pkl')
_ = colab_utils.save_dataset_statistics(data_provider, PICKLE_FILE_PATH, batch_size=1)
# + [markdown] id="nIsq0HrzbOF7"
# Let's load the dataset in the `ddsp` library and have a look at one of the examples.
# + id="dA-FOmRgYdpZ"
from ddsp.colab import colab_utils
import ddsp.training
from matplotlib import pyplot as plt
import numpy as np
data_provider = ddsp.training.data.TFRecordProvider(TRAIN_TFRECORD_FILEPATTERN)
dataset = data_provider.get_dataset(shuffle=False)
try:
ex = next(iter(dataset))
except StopIteration:
raise ValueError(
'TFRecord contains no examples. Please try re-running the pipeline with '
'different audio file(s).')
colab_utils.specplot(ex['audio'])
colab_utils.play(ex['audio'])
f, ax = plt.subplots(3, 1, figsize=(14, 4))
x = np.linspace(0, 4.0, 1000)
ax[0].set_ylabel('loudness_db')
ax[0].plot(x, ex['loudness_db'])
ax[1].set_ylabel('F0_Hz')
ax[1].set_xlabel('seconds')
ax[1].plot(x, ex['f0_hz'])
ax[2].set_ylabel('F0_confidence')
ax[2].set_xlabel('seconds')
ax[2].plot(x, ex['f0_confidence'])
# + [markdown] id="9gvXBa7PbuyY"
# ## Train Model
#
# We will now train a "solo instrument" model. This means the model is conditioned only on the fundamental frequency (f0) and loudness, with no instrument ID or latent timbre feature. If you uploaded audio of multiple instruments, the neural network you train will attempt to model all timbres, but will likely associate certain timbres with different f0 and loudness conditions.
# + [markdown] id="YpwQkSIKjEMZ"
# First, let's start up a [TensorBoard](https://www.tensorflow.org/tensorboard) to monitor our loss as training proceeds.
#
# Initially, TensorBoard will report `No dashboards are active for the current data set.`, but once training begins, the dashboards should appear.
# + id="u2lx7yJneUXT"
# %reload_ext tensorboard
import tensorboard as tb
tb.notebook.start('--logdir "{}"'.format(SAVE_DIR))
# + [markdown] id="fT-8Koyvj46w"
# ### We will now begin training.
#
# Note that we specify [gin configuration](https://github.com/google/gin-config) files for both the model architecture ([solo_instrument.gin](TODO)) and the dataset ([tfrecord.gin](TODO)), which are both predefined in the library. You could also create your own. We then override some of the specific params for `batch_size` (which is defined in the model gin file) and the tfrecord path (which is defined in the dataset file).
#
# ### Training Notes:
# * Models typically perform well when the loss drops to the range of ~4.5-5.0.
# * Depending on the dataset this can take anywhere from 5k-30k training steps usually.
# * The default is set to 30k, but you can stop training at any time, and for timbre transfer, it's best to stop before the loss drops too far below ~5.0 to avoid overfitting.
# * On the colab GPU, this can take from around 3-20 hours.
# * We **highly recommend** saving checkpoints directly to your drive account as colab will restart naturally after about 12 hours and you may lose all of your checkpoints.
# * By default, checkpoints will be saved every 300 steps with a maximum of 10 checkpoints (at ~60MB/checkpoint this is ~600MB). Feel free to adjust these numbers depending on the frequency of saves you would like and space on your drive.
# * If you're restarting a session and `DRIVE_DIR` points to a directory that was previously used for training, training should resume at the last checkpoint.
# + id="poKO-mZEGYXZ"
# !ddsp_run \
# --mode=train \
# --alsologtostderr \
# --save_dir="$SAVE_DIR" \
# --gin_file=models/solo_instrument.gin \
# --gin_file=datasets/tfrecord.gin \
# --gin_param="TFRecordProvider.file_pattern='$TRAIN_TFRECORD_FILEPATTERN'" \
# --gin_param="batch_size=16" \
# --gin_param="train_util.train.num_steps=30000" \
# --gin_param="train_util.train.steps_per_save=300" \
# --gin_param="trainers.Trainer.checkpoints_to_keep=10"
# + [markdown] id="V95qxVjFzWR6"
# ## Resynthesis
#
# Check how well the model reconstructs the training data
# + id="OQ5PPDZVzgFR"
from ddsp.colab.colab_utils import play, specplot
import ddsp.training
import gin
from matplotlib import pyplot as plt
import numpy as np
data_provider = ddsp.training.data.TFRecordProvider(TRAIN_TFRECORD_FILEPATTERN)
dataset = data_provider.get_batch(batch_size=1, shuffle=False)
try:
batch = next(iter(dataset))
except StopIteration:
raise ValueError(
'TFRecord contains no examples. Please try re-running the pipeline with '
'different audio file(s).')
# Parse the gin config.
gin_file = os.path.join(SAVE_DIR, 'operative_config-0.gin')
gin.parse_config_file(gin_file)
# Load model
model = ddsp.training.models.Autoencoder()
model.restore(SAVE_DIR)
# Resynthesize audio.
outputs = model(batch, training=False)
audio_gen = model.get_audio_from_outputs(outputs)
audio = batch['audio']
print('Original Audio')
specplot(audio)
play(audio)
print('Resynthesis')
specplot(audio_gen)
play(audio_gen)
# + [markdown] id="ZXM2ynLQ2Wl3"
# ## Download Checkpoint
#
# Below you can download the final checkpoint. You are now ready to use it in the [DDSP Timbre Transfer Colab](https://colab.research.google.com/github/magenta/ddsp/blob/master/ddsp/colab/demos/timbre_transfer.ipynb).
# + id="2WDiCyXP0tNE"
from ddsp.colab import colab_utils
import tensorflow as tf
import os
CHECKPOINT_ZIP = 'my_solo_instrument.zip'
latest_checkpoint_fname = os.path.basename(tf.train.latest_checkpoint(SAVE_DIR))
# !cd "$SAVE_DIR" && zip $CHECKPOINT_ZIP $latest_checkpoint_fname* operative_config-0.gin dataset_statistics.pkl
# !cp "$SAVE_DIR/$CHECKPOINT_ZIP" ./
colab_utils.download(CHECKPOINT_ZIP)
| AI Parts/Music Part/ddsp/ddsp/colab/demos/train_autoencoder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''py37'': virtualenv)'
# language: python
# name: python37664bitpy37virtualenv641d9865201944d3b398536765d4c9b3
# ---
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_ctaea:
# -
# ## C-TAEA
#
#
# This algorithm is implemented based on <cite data-cite="ctaea"></cite> and the authors' [implementation](https://cola-laboratory.github.io/docs/publications/). The algorithm is based on [Reference Directions](../misc/reference_directions.ipynb) which need to be provided when initializing the algorithm object.
#
# C-TAEA follows a two-archive approach to balance convergence (the convergence archive, CA) and diversity (the diversity archive, DA).
# + code="algorithms/usage_moead.py"
from pymoo.algorithms.ctaea import CTAEA
from pymoo.factory import get_problem, get_visualization, get_reference_directions
from pymoo.optimize import minimize
problem = get_problem("c1dtlz1", None, 3, k=5)
ref_dirs = get_reference_directions("das-dennis", 3, n_partitions=12)
algorithm = CTAEA(
ref_dirs,
seed=1
)
res = minimize(problem, algorithm, termination=('n_gen', 400))
plot = get_visualization("scatter", legend=True)
plot.add(problem.pareto_front(ref_dirs), label="PF")
plot.add(res.F).show()
# -
# ### API
# + raw_mimetype="text/restructuredtext" active=""
# .. autoclass:: pymoo.algorithms.ctaea.CTAEA
# :noindex:
# -
# Python implementation by [cyrilpic](https://github.com/cyrilpic) based on the [original C code](https://cola-laboratory.github.io/docs/publications/).
| doc/source/algorithms/ctaea.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to Deep Learning with PyTorch
#
# In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
#
#
# ## Neural Networks
#
# Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.
#
# <img src="assets/simple_neuron.png" width=400px>
#
# Mathematically this looks like:
#
# $$
# \begin{align}
# y &= f(w_1 x_1 + w_2 x_2 + b) \\
# y &= f\left(\sum_i w_i x_i + b \right)
# \end{align}
# $$
#
# With vectors this is the dot/inner product of two vectors:
#
# $$
# h = \begin{bmatrix}
# x_1 \, x_2 \cdots x_n
# \end{bmatrix}
# \cdot
# \begin{bmatrix}
# w_1 \\
# w_2 \\
# \vdots \\
# w_n
# \end{bmatrix}
# $$
# ### Stack them up!
#
# We can assemble these unit neurons into layers and stacks, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
#
# <img src='assets/multilayer_diagram_weights.png' width=450px>
#
# We can express this mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated
#
# $$
# \vec{h} = [h_1 \, h_2] =
# \begin{bmatrix}
# x_1 \, x_2 \cdots \, x_n
# \end{bmatrix}
# \cdot
# \begin{bmatrix}
# w_{11} & w_{12} \\
# w_{21} &w_{22} \\
# \vdots &\vdots \\
# w_{n1} &w_{n2}
# \end{bmatrix}
# $$
#
# The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply
#
# $$
# y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
# $$
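# As a quick numeric sketch of this expression (using NumPy, since PyTorch is introduced just below; the weights are made up, and a sigmoid stands in for both $f_1$ and $f_2$):

```python
import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

x = np.array([[0.2, -0.4, 0.1]])   # one sample with n = 3 input features
W1 = np.full((3, 2), 0.5)          # input -> hidden weights
W2 = np.full((2, 1), 0.5)          # hidden -> output weights
h = sigmoid(x @ W1)                # hidden layer: f1(x W1)
y = sigmoid(h @ W2)                # network output: f2(f1(x W1) W2)
print(y.shape)                     # (1, 1)
```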
# ## Tensors
#
# It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.
#
# <img src="assets/tensor_examples.svg" width=600px>
#
# With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
# +
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
import numpy as np
import torch
#import helper
# -
# First, let's see how we work with PyTorch tensors. These are the fundamental data structures of neural networks and PyTorch, so it's important to understand how these work.
x = torch.rand(3, 2)
x
y = torch.ones(x.size())
y
z = x + y
z
# In general, PyTorch tensors behave similarly to Numpy arrays. They are zero-indexed and support slicing.
z[0]
z[:, 1:]
# Tensors typically have two forms of methods, one method that returns another tensor and another method that performs the operation in place. That is, the values in memory for that tensor are changed without creating a new tensor. In-place functions are always followed by an underscore, for example `z.add()` and `z.add_()`.
# Return a new tensor z + 1
z.add(1)
# z tensor is unchanged
z
# Add 1 and update z tensor in-place
z.add_(1)
# z has been updated
z
# ### Reshaping
#
# Reshaping tensors is a really common operation. First, to get the size and shape of a tensor, use `.size()`. Then, to reshape a tensor, use `.resize_()`. Notice the underscore: reshaping this way is an in-place operation.
z.size()
z.resize_(2, 3)
z
# ## Numpy to Torch and back
#
# Converting between Numpy arrays and Torch tensors is super simple and useful. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
# The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
# Numpy array matches new values from Tensor
a
| EX-S02-L03-04-Notebook-Pytorch_tensors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''maz'': conda)'
# language: python
# name: python3
# ---
# +
import sys, os; sys.path.insert(1, os.path.join(sys.path[0], '..'))
from hex.graph_hex_board import GraphHexBoard
import numpy as np
import torch
from matplotlib.cm import ScalarMappable
class GHB(GraphHexBoard):
@property
def cell_colours(self):
sm = ScalarMappable(cmap='viridis')
return sm.to_rgba(self.x)
class AverageMeter(object):
def __init__(self):
self.val = 0.
self.avg = 0.
self.sum = 0.
self.count = 0.
def __repr__(self):
return f'{self.avg:.2e}'
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
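# `AverageMeter` keeps a count-weighted running mean of a metric; a minimal self-contained replica shows the bookkeeping:

```python
class Meter:
    """Illustrative replica of the AverageMeter bookkeeping above."""
    def __init__(self):
        self.sum, self.count = 0.0, 0
    def update(self, val, n=1):
        self.sum += val * n     # accumulate the batch total
        self.count += n
    @property
    def avg(self):
        return self.sum / self.count

m = Meter()
m.update(2.0, n=3)   # a batch of 3 samples with mean value 2.0
m.update(4.0)        # one more sample with value 4.0
print(m.avg)         # (2.0 * 3 + 4.0) / 4 = 2.5
```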
# +
from itertools import combinations, product
def secure_2hop(n_nodes, A, edge_index, side_n):
# label the secure 2 hop faces
y = torch.zeros(n_nodes)
y[side_n] = 1.
m = edge_index[0] == side_n
one_hop = edge_index[1, m]
y = A @ (A @ y)
y[side_n] = 0.
y[one_hop] = 0.
return (y >= 2.).nonzero().squeeze().tolist()
def independent_s2h_pairs(s_2hop, edge_index, side_n):
# get the side connection sets for each 2-hop side connections
csets = {}
for n in s_2hop:
a = set(edge_index[1, edge_index[0] == n].tolist())
b = set(edge_index[1, edge_index[0] == side_n].tolist())
csets[n] = a & b
pset = combinations(s_2hop, 2)
    # find independent pairs of secure 2-hop side connections
i2hsc = []
for s in pset:
a, b = csets[s[0]], csets[s[1]]
ndups = len(a & b)
if (len(a) + len(b) - ndups) >= 4:
i2hsc.append(s)
return i2hsc
def find_board(ns, find_winner=True):
n_nodes = ns**2 + 4
l_side = n_nodes - 2
r_side = n_nodes - 1
# setup the board
board = GHB.new_vortex_board(ns)
A = torch.sparse_coo_tensor(board.edge_index, torch.ones_like(board.edge_index[0]), dtype=torch.float)
edge_index = board.edge_index
board.x = torch.zeros(n_nodes)
ls_2hop = secure_2hop(n_nodes, A, edge_index, l_side)
rs_2hop = secure_2hop(n_nodes, A, edge_index, r_side)
board.x[ls_2hop + rs_2hop] = 1.
lsp = independent_s2h_pairs(ls_2hop, edge_index, l_side)
rsp = independent_s2h_pairs(rs_2hop, edge_index, r_side)
opp_pairs = product(lsp, rsp)
def connectors(A, ndx):
return (A.to_dense()[ndx].sum(dim=0) == len(ndx)).nonzero().squeeze()
x = torch.zeros(n_nodes)
for scn in opp_pairs:
ndx = list(scn[0]) + list(scn[1])
cn = connectors(A, ndx)
if find_winner:
if cn.numel() > 0:
x[cn] = 1.
return board, x, True
else:
if cn.numel() == 0:
return board, x, False
return None
ns = 5
res = None
while res is None:
res = find_board(ns, find_winner=True)
board, x, winner = res
board.x = x
board.plot()
# +
from torch.utils.data import TensorDataset
from torch_geometric.utils import get_laplacian
def wi8_features(ns, edge_index, dim):
n_nodes = ns**2 + 4
l_side = n_nodes - 2
r_side = n_nodes - 1
L = get_laplacian(edge_index, normalization='sym')
L = torch.sparse_coo_tensor(L[0], L[1])
# features
n_steps = dim // 2
steps = []
x = torch.zeros_like(board.node_attr[:,0])
x[l_side] = 1.
for i in range(n_steps):
steps.append(x.clone().detach())
if i < n_steps-1:
x = L @ x
x = torch.zeros_like(board.node_attr[:,0])
x[r_side] = 1.
for i in range(n_steps):
steps.append(x.clone().detach())
if i < n_steps-1:
x = L @ x
x = torch.stack(steps, dim=1)
return x
def get_wi8(ns, find_winner):
res = None
while(res is None):
res = find_board(ns, find_winner)
return res
class WI8_DS(TensorDataset):
def __init__(self, sample_boards=1000, nsides=5, dim=20):
self.sample_boards = sample_boards
self.nsides = nsides
self.dim = dim
x, y = [], []
for i in range(sample_boards):
find_winner = i % 2 == 0
board, n, winner = get_wi8(nsides, find_winner)
xs = wi8_features(nsides, board.edge_index, dim)
x.append(xs)
y.append(n)
x = torch.cat(x, dim=0)
y = torch.cat(y, dim=0)
super().__init__(x, y)
# +
nsides=5
dim=20
train_ds = WI8_DS(sample_boards=2000, nsides=nsides, dim=dim)
val_ds = WI8_DS(sample_boards=200, nsides=nsides, dim=dim)
# +
from torch.nn import Linear, Module, Sigmoid, Identity
class VorNet_MLP(Module):
def __init__(self, in_features=20):
super().__init__()
self.lin1 = Linear(in_features=in_features, out_features=64)
self.lin2 = Linear(in_features=64, out_features=1)
self.train_readout = Identity()
self.readout = Sigmoid()
def forward(self, x):
x = self.lin1(x)
x = x.relu()
out = self.lin2(x)
readout = self.train_readout if self.training else self.readout
out = readout(out)
return out
# +
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = VorNet_MLP().to(device)
# +
from torch.utils.data import DataLoader
from torch.nn import BCEWithLogitsLoss
from sklearn.metrics import precision_score, recall_score, roc_auc_score, precision_recall_fscore_support
LR = 1e-5
EPOCHS = 1000
BATCH_SIZE = 2000
criterion = BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=LR)
train_dl = DataLoader(train_ds, batch_size=BATCH_SIZE, shuffle=True)  # shuffle training batches each epoch
val_dl = DataLoader(val_ds, batch_size=BATCH_SIZE)
for epoch in range(EPOCHS):
train_loss = AverageMeter()
val_loss = AverageMeter()
val_auc = AverageMeter()
val_precision = AverageMeter()
val_recall = AverageMeter()
print("Epoch {}".format(epoch))
model.train()
for x, y in train_dl:
x, y = x.to(device=device), y.to(device=device)
out = model(x).squeeze()
loss = criterion(out, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_loss.update(loss.item())
    print("\ttrain - loss: {}".format(train_loss.avg))
model.eval()
with torch.no_grad():
for x, y in val_dl:
x, y = x.to(device=device), y.to(device=device)
            out = model(x).squeeze()  # probabilities: the Sigmoid readout is active in eval mode
            # `out` is already a probability, so use plain BCE here;
            # BCEWithLogitsLoss would apply a second sigmoid and understate the loss
            loss = torch.nn.functional.binary_cross_entropy(out, y)
            val_loss.update(loss.item())
            pred = out.cpu().numpy()
            val_auc.update(roc_auc_score(y.cpu().numpy(), pred))
            pred = pred > 0.5
            precision, recall, _, _ = precision_recall_fscore_support(y.to(torch.int).cpu().numpy(), pred, average='binary', zero_division=0)
val_precision.update(precision)
val_recall.update(recall)
print("\tval - loss: {}, auc: {:.2f}%, precision: {:.2f}%, recall: {:.2f}%".format(val_loss.avg, val_auc.avg*100, val_precision.avg*100, val_recall.avg*100))
| notebooks/win_node_bc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: presidio-research
# language: python
# name: presidio-research
# ---
# Evaluate Flair models for person names, orgs and locations using the Presidio Evaluator framework
#
# Data = `generated_test_November 12 2019`
# + pycharm={"is_executing": false}
from presidio_evaluator.data_generator import read_synth_dataset
from presidio_evaluator.evaluation import ModelError, Evaluator
# %reload_ext autoreload
# %autoreload 2
# -
# Select data for evaluation:
# + pycharm={"is_executing": false}
synth_samples = read_synth_dataset("../../data/synth_dataset.txt")
print(len(synth_samples))
# + [markdown] pycharm={"is_executing": false}
# Map entity types
# +
presidio_entities_map = {
"PERSON": "PER",
"EMAIL_ADDRESS": "O",
"CREDIT_CARD": "O",
"FIRST_NAME": "PER",
"PHONE_NUMBER": "O",
"BIRTHDAY": "O",
"DATE_TIME": "O",
"DOMAIN": "O",
"CITY": "LOC",
"ADDRESS": "LOC",
"NATIONALITY": "LOC",
"LOCATION": "LOC",
"IBAN": "O",
"URL": "O",
"US_SSN": "O",
"IP_ADDRESS": "O",
"ORGANIZATION": "ORG",
"TITLE" : "O", # skipping evaluation of titles
"O": "O",
}
synth_samples = Evaluator.align_entity_types(synth_samples, presidio_entities_map)
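To see what the alignment does at the tag level, here is a toy remapping. The tag list is hypothetical, and `Evaluator.align_entity_types` does more than this (it operates on dataset objects, not raw tag lists), so this only illustrates the core idea:

```python
presidio_entities_map = {"PERSON": "PER", "CITY": "LOC", "DATE_TIME": "O", "O": "O"}

# hypothetical per-token tag sequence before alignment
tags = ["O", "PERSON", "PERSON", "O", "CITY", "DATE_TIME"]
# unknown entity types fall back to "O"
mapped = [presidio_entities_map.get(t, "O") for t in tags]
print(mapped)  # ['O', 'PER', 'PER', 'O', 'LOC', 'O']
```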
# + pycharm={"is_executing": false}
from collections import Counter
entity_counter = Counter()
for sample in synth_samples:
for tag in sample.tags:
entity_counter[tag]+=1
# + pycharm={"is_executing": false}
entity_counter
# -
# + pycharm={"is_executing": false}
#max length sentence
max([len(sample.tokens) for sample in synth_samples])
# -
# Select models for evaluation:
# + pycharm={"is_executing": false}
flair_ner = 'ner'
flair_ner_fast = 'ner-fast'
flair_ontonotes = 'ner-ontonotes-fast'
models = [flair_ner, flair_ner_fast]
# + pycharm={"is_executing": true}
from presidio_evaluator.models import FlairModel
for model in models:
print("-----------------------------------")
print("Evaluating model {}".format(model))
flair_model = FlairModel(model_path=model)
evaluator = Evaluator(model=flair_model)
evaluation_results = evaluator.evaluate_all(synth_samples)
scores = evaluator.calculate_score(evaluation_results)
print("Confusion matrix:")
print(scores.results)
print("Precision and recall")
scores.print()
errors = scores.model_errors
# -
# Custom evaluation
# #### False positives
# 1. Most false positive tokens:
# + pycharm={"is_executing": false}
errors = scores.model_errors
ModelError.most_common_fp_tokens(errors)
# + pycharm={"is_executing": false}
fps_df = ModelError.get_fps_dataframe(errors, entity=['PER'])  # entities were aligned to PER/LOC/ORG above
fps_df[['full_text','token','prediction']]
# -
# 2. False negative examples
# + pycharm={"is_executing": false}
ModelError.most_common_fn_tokens(errors,n=50, entity=['PER'])
# -
# More FN analysis
# + pycharm={"is_executing": false}
fns_df = ModelError.get_fns_dataframe(errors, entity=['PER'])  # entities were aligned to PER/LOC/ORG above
# + pycharm={"is_executing": false, "name": "#%%\n"}
fns_df[['full_text','token','annotation','prediction']]
| notebooks/models/Evaluate flair models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import gym
import numpy as np
from time import sleep
from matplotlib import pyplot as plt
# -
env = gym.envs.make("Breakout-v0")
# +
print("Action space size: {}".format(env.action_space.n))
env.unwrapped.get_action_meanings()
observation = env.reset()
print("Observation space shape: {}".format(observation.shape))
plt.figure()
plt.imshow(env.render(mode='rgb_array'))
[env.step(2) for x in range(1)]
plt.figure()
plt.imshow(env.render(mode='rgb_array'))
sleep(2)
env.close()
# -
# Check out what a cropped image looks like
plt.imshow(observation[34:-16,:,:])
| DQN/Breakout Playground.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Seven Sins of Regression
#
# This list of the seven sins of regression covers the main ways in which regression models can go wrong. All seven are discussed below, although some deserve more discussion than others. See also the handout that lists each problem, whether it causes bias, whether it affects standard errors, and possible remedies.
#
# ## Omitted Variable Bias
# As detailed above, omitting a variable from a regression model can bias the slope estimates of the variables that are included in the model. The bias occurs only when the omitted variable is correlated with both the dependent variable and one of the included independent variables.
#
# <img src="img/7sins OVB.png" width="450" />
#
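A quick simulation makes the mechanism concrete. This is a minimal sketch using only the standard library; the true coefficients (2.0 on x, 3.0 on the omitted z) and the correlation structure are invented for illustration:

```python
import random

random.seed(0)
n = 20_000
# z is the omitted variable; x is correlated with z; y depends on both
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [2.0 * xi + 3.0 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

def slope(xs, ys):
    """OLS slope of ys on xs (single regressor plus intercept)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

def slope_controlling(xs, zs, ys):
    """OLS slope on xs when zs is also in the model (2x2 normal equations)."""
    mx, mz, my = sum(xs) / len(xs), sum(zs) / len(zs), sum(ys) / len(ys)
    dx = [a - mx for a in xs]; dz = [a - mz for a in zs]; dy = [a - my for a in ys]
    sxx = sum(a * a for a in dx); szz = sum(a * a for a in dz)
    sxz = sum(a * b for a, b in zip(dx, dz))
    sxy = sum(a * b for a, b in zip(dx, dy)); szy = sum(a * b for a, b in zip(dz, dy))
    det = sxx * szz - sxz * sxz
    return (sxy * szz - szy * sxz) / det

b_short = slope(x, y)                   # omits z: biased
b_long = slope_controlling(x, z, y)     # includes z: close to the true 2.0
print(round(b_short, 2), round(b_long, 2))
```

Because cov(x, z)/var(x) = 0.5 in this setup, the short regression's slope lands near 2.0 + 3.0 * 0.5 = 3.5, while including z recovers roughly 2.0.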
#
# ## Heterogeneity Bias
# Heterogeneity bias (also called group-difference bias) occurs when the data contain natural group structures and there are innate differences between the groups that are correlated with the study variable. A group may be many people in one or more time periods. A "group" may also be a single individual measured over time.
#
# In this example (a medical study of blood pressure), a regression shows an upward trend in blood pressure as the dosage increases.
#
# <img src="img/hetero1.png" width="450" />
#
# But when the model accounts for group differences, we see that the relationship is actually reversed (that is, we now see a downward trend).
#
# <img src="img/hetero2.png" width="450" />
#
#
# Essentially, this is just a special case of omitted variable bias in which the omitted variable is group membership.
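The reversal is easy to reproduce. The sketch below mirrors the blood-pressure example with two invented cohorts whose baselines differ; the cohort with higher baseline pressure also receives higher doses, so pooling the groups flips the sign of the within-group slope:

```python
import random

random.seed(1)

def make_group(dose_center, baseline, n=5_000):
    """One cohort: within the group, blood pressure FALLS as dose rises (true slope -1)."""
    dose = [dose_center + random.gauss(0, 1) for _ in range(n)]
    bp = [baseline - 1.0 * d + random.gauss(0, 1) for d in dose]
    return dose, bp

d1, b1 = make_group(dose_center=2.0, baseline=10.0)   # healthier cohort, low doses
d2, b2 = make_group(dose_center=6.0, baseline=30.0)   # sicker cohort, high doses

def slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sum((a - mx) ** 2 for a in xs)

pooled = slope(d1 + d2, b1 + b2)   # positive: ignores the group structure
within = slope(d1, b1)             # negative: the true within-group effect
print(round(pooled, 2), round(within, 2))
```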
# ## Selection Bias
# Selection bias arises when individuals enter or leave groups in non-random ways. For example, suppose you are evaluating the effect of an abstinence-only sex education program on teenage pregnancy. If individuals choose whether to participate, the results will differ from a scenario in which individuals are randomly assigned to participating and non-participating groups.
#
# Essentially, this is just another special case of omitted variable bias in which the omitted variable is the propensity (to participate).
#
#
# Illustration:
#
# During World War II, the Navy tried to determine where it needed to armor its planes to ensure they made it home. It analyzed where returning planes had been shot and produced this.
#
# <img src="img/ww2.jpg" width="450" />
#
#
# Obviously, the places that needed armor were the wing tips, the central body, and the elevators. That is where the planes had all been shot.
#
# <NAME>, a statistician, disagreed. He thought they should better protect the nose area, the engines, and the middle of the body. Which seemed crazy, of course. That is not where the planes were being hit.
#
# Except Mr. Wald realized what the others did not. Planes were being hit in those places too; those planes simply were not making it home. What the Navy thought it had done was analyze where aircraft suffered the most damage. What it had actually done was analyze where aircraft could suffer the most damage without catastrophic failure. All the places that were not hit? Planes shot there went down. The Navy was not looking at the whole sample, only the survivors.
#
# ### Selection In versus Selection Out
#
# If the propensity to leave a study is correlated with the study variables, then attrition creates a bias.
#
# If attrition is random (uncorrelated), then it is not a problem.
#
# See the slides for the microfinance example, in which only the poor leave the program, producing an artificial program effect.
#
# ## Multicollinearity
# As discussed in Visual Regression, multicollinearity occurs when the independent variables in a regression model are strongly correlated with one another. This makes it hard to tease apart the independent effects of those variables and also inflates the standard error of each slope. Below, we see that the greater the multicollinearity, the smaller area B becomes and the larger the standard errors become. When standard errors are larger, confidence intervals are wider and the slope is less likely to be statistically significant.
#
# <img src="img/multicoll.png" width="450" />
#
#
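The inflation has a simple closed form worth keeping in mind: with two regressors, the variance of each slope estimate is multiplied by the variance inflation factor VIF = 1/(1 - r^2), where r is the correlation between the regressors. A small sketch:

```python
import math

def vif(r):
    """Variance inflation factor for one regressor.
    With two regressors, R^2 from regressing one on the other is just r^2."""
    return 1.0 / (1.0 - r ** 2)

for r in (0.0, 0.5, 0.9, 0.99):
    # the standard error of the slope is multiplied by sqrt(VIF)
    print(f"corr={r:4.2f}  VIF={vif(r):7.2f}  SE multiplier={math.sqrt(vif(r)):5.2f}")
```

At r = 0.9 the slope variance is already inflated by a factor of about 5.3, and at r = 0.99 by about 50.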
# ## Measurement Error
# As discussed in Visual Regression, measurement error can bias slope estimates and alter standard errors in a regression.
#
# Remember that measurement error here is always random error, not systematic error. Systematic error always pushes the results in the same direction. An example of systematic error would be measuring individuals' weight with a badly calibrated scale that adds five pounds to every reading. In that case the mean will differ but the variance will not; the regression slope will be identical, but the intercept will differ. Thus the slope is not biased by systematic measurement error.
#
# The effects of (random) measurement error depend on whether it is in the dependent or the independent variable.
# ### In the Dependent Variable
# Measurement error in the dependent variable causes additional variance in Y. Introducing measurement error into the dependent variable will inflate the standard error, but the slope estimate will be identical. The inflated standard error, in turn, makes it less likely that the slope will be statistically significant.
#
#
# <img src="img/dv measerror.png" width="450" />
#
#
# ### In the Independent Variable
# Measurement error in the independent variable causes additional variance in X. Introducing measurement error into the independent variable shrinks the slope estimate (this is called attenuation) and reduces the standard error. Attenuation always pushes the slope toward zero, whether the relationship is positive or negative.
#
#
# <img src="img/iv measerror.png" width="450" />
#
#
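Attenuation is easy to demonstrate. In this standard-library sketch the true slope is 1.0, and the measurement error added to X has the same variance as X itself, so classical errors-in-variables theory predicts the estimate shrinks by the factor var(x) / (var(x) + var(err)) = 0.5:

```python
import random

random.seed(2)
n = 20_000
x_true = [random.gauss(0, 1) for _ in range(n)]
y = [1.0 * xt + random.gauss(0, 0.5) for xt in x_true]   # true slope is 1.0
x_noisy = [xt + random.gauss(0, 1) for xt in x_true]     # measurement error in X

def slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sum((a - mx) ** 2 for a in xs)

b_clean = slope(x_true, y)    # close to 1.0
b_noisy = slope(x_noisy, y)   # attenuated toward 0.5 = var(x)/(var(x)+var(err))
print(round(b_clean, 2), round(b_noisy, 2))
```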
# ## Misspecification Bias
# Bias can be introduced if we use a functional form of the regression model that is inappropriate for the variables under analysis. This can be illustrated by Anscombe's quartet, a group of four very different datasets that share several identical statistical properties (mean, variance, correlation, and regression results).
# <img src="img/anscombe data.png" width="450" />
#
#
#
# The differences are even more striking in the scatter plots of the four datasets.
# <img src="img/anscombe graphs.png" width="450" />
#
#
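The quartet is small enough to verify directly. The values below are the commonly reproduced Anscombe (1973) datasets; all four share a mean x of 9.0, a mean y of about 7.50, and a correlation of about 0.816, despite looking nothing alike when plotted:

```python
from statistics import mean

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

def corr(xs, ys):
    """Pearson correlation, computed directly to avoid version-specific stdlib helpers."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / (sxx * syy) ** 0.5

stats = {}
for name, (xs, ys) in quartet.items():
    stats[name] = (round(mean(xs), 2), round(mean(ys), 2), round(corr(xs, ys), 3))
    print(name, stats[name])
```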
# ## Simultaneity
# Simultaneity occurs when the causal structure forms a feedback loop. When this is the case, it is very difficult to separate the independent effects. Such a causal model is depicted below.
# <img src="img/simult.png" width="450" />
#
#
| 02-stat-multivariada/08 Requisitos Regressao- Os sete pecados/7pecados_regrossoes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# Theano
# ===
# An optimizing compiler for symbolic math expressions
# + slideshow={"slide_type": "fragment"}
import theano
import theano.tensor as T
# + [markdown] slideshow={"slide_type": "slide"}
# Symbolic variables
# ==========
# + slideshow={"slide_type": "fragment"}
x = T.scalar()
# + slideshow={"slide_type": "fragment"}
x
# + [markdown] slideshow={"slide_type": "slide"}
# Variables can be used in expressions
# + slideshow={"slide_type": "-"}
y = 3*(x**2) + 1
# + [markdown] slideshow={"slide_type": "fragment"}
# Result is symbolic as well
# + slideshow={"slide_type": "-"}
type(y)
# + [markdown] slideshow={"slide_type": "slide"}
# Investigating expressions
# + slideshow={"slide_type": "fragment"}
print(y)
# + slideshow={"slide_type": "fragment"}
theano.pprint(y)
# + slideshow={"slide_type": "fragment"}
theano.printing.debugprint(y)
# + slideshow={"slide_type": "slide"}
from IPython.display import SVG
SVG(theano.printing.pydotprint(y, return_image=True, format='svg'))
# + [markdown] slideshow={"slide_type": "slide"}
# Evaluating expressions
# ============
#
# Supply a `dict` mapping variables to values
# -
y.eval({x: 100})
# + [markdown] slideshow={"slide_type": "slide"}
# Or compile a function
# -
f = theano.function([x], y)
# + slideshow={"slide_type": "fragment"}
f(20)
# + [markdown] slideshow={"slide_type": "slide"}
# Compiled function has been transformed
# + slideshow={"slide_type": "fragment"}
SVG(theano.printing.pydotprint(f, return_image=True, format='svg'))
# + [markdown] slideshow={"slide_type": "slide"}
# Other tensor types
# ==========
# + slideshow={"slide_type": "-"}
X = T.vector()
X = T.matrix()
X = T.tensor3()
X = T.tensor4()
# + [markdown] slideshow={"slide_type": "slide"}
# Numpy style indexing
# ===========
# + slideshow={"slide_type": "-"}
X = T.vector()
# -
X[1:-1:2]
# + slideshow={"slide_type": "fragment"}
X[[1,2,3]]
# + [markdown] slideshow={"slide_type": "slide"}
# Many functions/operations are available through `theano.tensor` or variable methods
# -
y = X.argmax()
y = T.cosh(X)
y = T.outer(X, X)
# But don't try to use numpy functions on Theano variables. Results may vary!
# + [markdown] slideshow={"slide_type": "slide"}
# Automatic differentiation
# ============
# - Gradients are free!
# -
x = T.scalar()
y = T.log(x)
# + slideshow={"slide_type": "fragment"}
gradient = T.grad(y, x)
gradient.eval({x: 2})
# + [markdown] slideshow={"slide_type": "slide"}
# # Shared Variables
#
# - Symbolic + Storage
# -
import numpy as np
x = theano.shared(np.zeros((2, 3), dtype=theano.config.floatX))
x
# + [markdown] slideshow={"slide_type": "slide"}
# We can get and set the variable's value
# + slideshow={"slide_type": "-"}
values = x.get_value()
print(values.shape)
print(values)
# -
x.set_value(values)
# + [markdown] slideshow={"slide_type": "slide"}
# Shared variables can be used in expressions as well
# + slideshow={"slide_type": "-"}
(x + 2) ** 2
# + [markdown] slideshow={"slide_type": "fragment"}
# Their value is used as input when evaluating
# -
((x + 2) ** 2).eval()
theano.function([], (x + 2) ** 2)()
# + [markdown] slideshow={"slide_type": "slide"}
# # Updates
#
# - Store results of function evaluation
# - `dict` mapping shared variables to new values
# + slideshow={"slide_type": "slide"}
count = theano.shared(0)
new_count = count + 1
updates = {count: new_count}
f = theano.function([], count, updates=updates)
# + slideshow={"slide_type": "fragment"}
f()
# + slideshow={"slide_type": "fragment"}
f()
# + slideshow={"slide_type": "fragment"}
f()
| code/Experiments/Tutorials/EbenOlsen_TheanoLasagne/1 - Theano Basics/.ipynb_checkpoints/Theano Basics-checkpoint.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.3.0
# language: julia
# name: julia-1.3
# ---
# # Solve the GPE in a 1D parabolic trap
# ### <NAME>
#
# # Introduction
# In this simple example we start by finding the ground state of the Gross-Pitaevskii equation
# in a harmonic trap.
#
# The mean field order parameter of a Bose-Einstein condensate far below the critical temperature for condensation evolves according to the GP-equation
# $$
# i\hbar\frac{\partial \psi(x,t)}{\partial t}=\left(-\frac{\hbar^2\partial_x^2}{2m}+V(x,t)+g|\psi(x,t)|^2\right)\psi(x,t)
# $$
# with potential $V(x,t)=m\omega_x^2 x^2/2$, and positive interaction strength $g$.
#
# We work in harmonic oscillator units, taking length in units of $a_x=\sqrt{\hbar/m\omega_x}$ and time in
# units of $1/\omega_x$.
#
# The equation of motion that we solve numerically is
# $$
# i\frac{\partial \psi(x,t)}{\partial t}=\left(-\frac{\partial_x^2}{2}+\frac{x^2}{2}+g|\psi(x,t)|^2\right)\psi(x,t)
# $$
# where all quantities are now dimensionless.
#
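The dimensionless equation above can be derived by substituting oscillator units into the dimensional GP equation; a brief sketch of the bookkeeping (the symbol $\tilde g$ for the rescaled interaction strength is introduced here for clarity and is not used elsewhere in the notebook):

```latex
% Substitute x = a_x\tilde{x}, \; t = \tilde{t}/\omega_x, \; \psi = \tilde{\psi}/\sqrt{a_x},
% with a_x = \sqrt{\hbar/m\omega_x}, then divide through by \hbar\omega_x:
i\frac{\partial \tilde{\psi}}{\partial \tilde{t}}
  = \left(-\frac{\partial_{\tilde{x}}^{2}}{2} + \frac{\tilde{x}^{2}}{2}
          + \tilde{g}\,\lvert\tilde{\psi}\rvert^{2}\right)\tilde{\psi},
\qquad \tilde{g} = \frac{g}{\hbar\omega_x a_x}.
```

Dropping the tildes recovers the equation of motion solved numerically below, with the code's `g` playing the role of $\tilde g$.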
# # Loading the package
# First, we load some useful packages, and set up defaults for `Plots`.
using Plots, LaTeXStrings
gr(fmt="png",legend=false,titlefontsize=12,size=(500,200),grid=false,transpose=true,colorbar=false);
# Now load `FourierGPE`
using FourierGPE
# Let's define a convenient plot function
function showpsi(x,ψ)
p1 = plot(x,abs2.(ψ))
xlabel!(L"x/a_x");ylabel!(L"|\psi|^2")
p2 = plot(x,angle.(ψ))
xlabel!(L"x/a_x");ylabel!(L"\textrm{phase}(\psi)")
p = plot(p1,p2,layout=(2,1),size=(600,400))
return p
end
# Let's set the system size, and number of spatial points and initialize default simulation
L = (40.0,)
N = (512,)
sim = Sim(L,N)
@unpack_Sim sim;
μ = 25.0
# Here we keep most of the default parameters but increase the chemical potential.
#
# ## Declaring the potential
# Let's define the trapping potential.
import FourierGPE.V
V(x,t) = 0.5*x^2
# We only require the definition as a scalar function
# because it will be evaluated on the grid using a broadcasted dot-call.
#
# # Initial condition
# Let's define a useful Thomas-Fermi wavefunction
ψ0(x,μ,g) = sqrt(μ/g)*sqrt(max(1.0-V(x,0.0)/μ,0.0)+im*0.0)
x = X[1];
# The initial state is now created on the grid and all modified variables are scooped up into `sim`:
ψi = ψ0.(x,μ,g)
ϕi = kspace(ψi,sim) #sim uses Fourier transforms that are norm-preserving
@pack_Sim! sim;
sim
# The important points to note here are that we have modified $\mu$ and the initial condition $\phi_i$, and we have left the default damping parameter
# $\gamma=0.5$ which means we are going to find a ground state of the GPE.
#
# ## Default simulation parameters
# The source code defining the simulation type `Sim` sets the default values and
# also has some further explanation of each variable:
@with_kw mutable struct Sim{D} <: Simulation{D} @deftype Float64
# Add more parameters as necessary, or add to params (see examples)
L::NTuple{D,Float64} # box length scales
N::NTuple{D,Int64} # grid points in each dimensions
μ = 15.0 # chemical potential
g = 0.1 # interaction parameter
γ = 0.5; @assert γ >= 0.0 # damping parameter
ti = 0.0 # initial time
tf = 2/γ # final time
Nt::Int64 = 200 # number of saves over (ti,tf)
params::UserParams = Params() # optional user parameters
V0::Array{Float64,D} = zeros(N)
t::LinRange{Float64} = LinRange(ti,tf,Nt) # time of saves
ϕi::Array{Complex{Float64},D} = zeros(N) |> complex # initial condition
alg::OrdinaryDiffEq.OrdinaryDiffEqAdaptiveAlgorithm = Tsit5() # default solver
reltol::Float64 = 1e-6 # default tolerance; may need to use 1e-7 for corner cases
flags::UInt32 = FFTW.MEASURE # choose a plan. PATIENT, NO_TIMELIMIT, EXHAUSTIVE
# === saving
nfiles::Bool = false
path::String = nfiles ? joinpath(@__DIR__,"data") : @__DIR__
filename::String = "save"
# === arrays, transforms, spectral operators
X::NTuple{D,Array{Float64,1}} = xvecs(L,N)
K::NTuple{D,Array{Float64,1}} = kvecs(L,N)
espec::Array{Complex{Float64},D} = 0.5*k2(K)
T::TransformLibrary = makeT(X,K,flags=flags)
end
# # Evolution in k-space
# The `FFTW` library is used to evolve the Gross-Pitaevskii equation in k-space
sol = runsim(sim);
# By default the solver returns all time slices specified by the `t` vector (`t=LinRange(ti,tf,Nt)`) and solution information in a single variable `sol`.
#
# Let's have a look at the final state and verify we have a ground state with the correct chemical potential:
ϕg = sol[end]
ψg = xspace(ϕg,sim)
p=plot(x,g*abs2.(ψg),fill=(0,0.2),size=(500,200),label=L"gn(x)")
plot!(x,one.(x)*μ,label=L"\mu")
plot!(x,V.(x,0.0),label=L"V(x)",legend=:topright)
xlims!(-10,10); ylims!(0,1.3*μ)
title!(L"\textrm{local}\; \mu(x)")
xlabel!(L"x/a_x"); ylabel!(L"\mu(x)/\hbar\omega_x")
plot(p)
# The initial Thomas-Fermi state has been evolved for a default time $t=2/\gamma$ which is
# a characteristic damping time for the dissipative system with dimensionless damping
# $\gamma$. The solution will approach the ground state satisfying $L\psi_0=\mu\psi_0$ on a timescale of order
# $1/\gamma$.
#
# # Dark soliton in harmonically trapped system
# We found a ground state by imaginary time propagation.
# Now we can impose a phase and density imprint consistent with a dark soliton.
# We will use the solution for the homogeneous system, which will be a reasonable
# approximation if we impose it on a state that varies slowly over the scale of the soliton (the healing length $\xi$).
#
# ## Imprinting a dark soliton
ψf = xspace(sol[end],sim)
c = sqrt(μ)
ξ = 1/c
v = 0.5*c
xs = 0.
f = sqrt(1-(v/c)^2)
# Soliton speed is determined by its depth and the local healing length; the soliton is initialized at `xs=0.0`.
ψs = @. ψf*(f*tanh(f*(x-xs)/ξ)+im*v/c)
showpsi(x,ψs)
xlims!(-10,10)
# ## Initialize Simulation
# We can use the previous parameters in `sim` to define a new simulation, while modifying parameters as required (in this case the damping and simulation timescale):
γ = 0.0
tf = 8*pi/sqrt(2); t = LinRange(ti,tf,Nt)
dt = 0.01π/μ
ϕi = kspace(ψs,sim)
simSoliton = Sim(sim;γ=γ,tf=tf,t=t,ϕi=ϕi) #define a new simulation, using keywords
# @pack_Sim! simSoliton; #we could instead pack everything into simSoliton, since we have made all changes
# ## Solve equation of motion
# As before, we specify the initial condition in momentum space, and evolve
@time sols = runsim(simSoliton);
# ## View the solution using Plots
# Plots allows easy creation of an animated gif, as in the runnable example code below.
# +
ϕf = sols[end-4]
ψf = xspace(ϕf,simSoliton)
showpsi(x,ψf)
anim = @animate for i in 1:length(t)-4 #make it periodic by ending early
ψ = xspace(sols[i],simSoliton)
y = g*abs2.(ψ)
p = plot(x,y,fill=(0,0.2),size=(500,200))
xlims!(-10,10); ylims!(0,1.3*μ)
title!(L"\textrm{local}\; \mu(x)")
xlabel!(L"x/a_x"); ylabel!(L"\mu(x)/\hbar\omega_x")
end
animpath = joinpath(@__DIR__,"media/soliton.gif")
gif(anim,animpath,fps=30)
# -
# The result is visible in the [media folder](../../media/soliton.gif) of this repository.
#
#
# Here we simply plot the final state:
ψ = xspace(sols[end],simSoliton)
y = g*abs2.(ψ)
p=plot(x,y,fill=(0,0.2),size=(500,200))
xlims!(-10,10); ylims!(0,1.3*μ)
title!(L"\textrm{local}\; \mu(x)")
xlabel!(L"x/a_x"); ylabel!(L"\mu(x)/\hbar\omega_x")
plot(p)
# The dark soliton executes simple harmonic motion with amplitude determined by its depth.
| docs/notebooks/1dharmonic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
print "hello!"
x=7
x.bit_length()
type(x)
x=0.25
type(x)
x.as_integer_ratio()
x="Hello world"
type(x)
x.split()
x= (1,2,3,"text")
x[3]
x = [3,4,5]
print(x)
x.append(23)
x.pop()
print x
x = {"key":"value",5:"tree"}
print x
x[5]
x["key"]
x["cat"] = "a small animal"
print x
for i in [3,5,2,57,8,3,2,5]:
print i
i=0
while(i <5):
print i
i+=1
def f(x):
y = x*x
return y
f(4)
# %%timeit -n1
with open("fibo_input.txt","r") as f:
x=f.read()
| python basic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/huyduong/GNNetworkingChallenge/blob/main/Reliable_ORAN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="3wMvUeFyb7So"
# # Environment Setup - Server and VNF Size Control
#
# + id="n_oN7owvakOR" colab={"base_uri": "https://localhost:8080/", "height": 234} outputId="61fcc7b8-1f6e-412c-d9c0-58337df7e8b7"
# importing pandas as pd
import pandas as pd
from itertools import zip_longest
import csv
import itertools
import copy
DEBUG = False
#class for coloring output text
class bcolors:
HEADER = '\033[95m'
OKBLUE = '\033[94m'
OKCYAN = '\033[96m'
OKGREEN = '\033[92m'
WARNING = '\033[93m'
FAIL = '\033[91m'
ENDC = '\033[0m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
#Random Seed
random_seed=5
#Server and VNF counts (This can be modified to change the size of the data)
server_counts=12
nearRT_counts=1
oCU_counts=1
oDU_counts=1
constant_delay_threshold=2
delay_Threshold_VnRT_VoCU=2
delay_Threshold_VoCU_VoDU=2
# Server set
S = ['server_'+str(x) for x in range(server_counts)]
p_VnRT=[] #list of all near-RT RIC VNFs
p_VoCU=[] #list of all O-CU VNFs
p_VoDU=[] #list of all O-DU VNFs
b_VnRT=[] #list of all backup near-RT RIC VNFs
b_VoCU=[] #list of all backup O-CU VNFs
b_VoDU=[] #list of all backup O-DU VNFs
Vp =[] #list of all primary VNFs
Vb =[] #list of all backup VNFs
Vall=[] #all VNFs
#Fill all sets with sequential IDs
# for s in range(server_counts):
# S.append([0] * len(Vp))
for vnrt in range(nearRT_counts):
p_VnRT.append("p_near-RT-"+str(vnrt))
b_VnRT.append("b_near-RT-"+str(vnrt))
for vocu in range(oCU_counts):
p_VoCU.append("p_O-CU-"+str(vocu))
b_VoCU.append("b_O-CU-"+str(vocu))
for vodu in range(oDU_counts):
p_VoDU.append("p_O-DU-"+str(vodu))
b_VoDU.append("b_O-DU-"+str(vodu))
Vp = p_VnRT + p_VoCU + p_VoDU
# Vredundant=VnRT+VoCU+VoDU
Vb = b_VnRT + b_VoCU + b_VoDU
Vall = Vp + Vb
# Vbackup=[]
# for i in range(len(Vp)):
# Vbackup.append(i+len(Vp))
Vr_sets = {}
for i in range(nearRT_counts):
Vr_sets[p_VnRT[i]] = {'p_Near_RT': p_VnRT[i], 'p_OCU': p_VoCU[i], 'p_ODU': p_VoDU[i],
'b_Near_RT': b_VnRT[i], 'b_OCU': b_VoCU[i], 'b_ODU': b_VoDU[i]}
Vdep = {p_VnRT[i]: [p_VoCU[i], b_VoCU[i]],
b_VnRT[i]: [p_VoCU[i], b_VoCU[i]],
p_VoCU[i]: [p_VoDU[i], b_VoDU[i]],
b_VoCU[i]: [p_VoDU[i], b_VoDU[i]]}
print(Vp)
print(Vb)
print("Vall:", Vall)
print(Vr_sets[p_VnRT[0]])
# + id="3zLCdccUset8" colab={"base_uri": "https://localhost:8080/"} outputId="68ae4363-eda8-46ae-d1c5-4de2bcc9788a"
print(Vp)
# + [markdown] id="Mz9BecuRcVcL"
# # Print all VNF Lists
# + colab={"base_uri": "https://localhost:8080/"} id="8JSmQe6OakOU" outputId="b2012603-fede-4e31-b270-a34c4c1d605e"
#Print all sets
print(f"{bcolors.BOLD}Server IDs:{bcolors.ENDC}",S)
print(f"{bcolors.BOLD}primary near-RT RIC VNFs: {bcolors.ENDC}",p_VnRT)
print(f"{bcolors.BOLD}primary O-CU VNFs: {bcolors.ENDC}",p_VoCU)
print(f"{bcolors.BOLD}primary O-DU VNFs: {bcolors.ENDC}",p_VoDU)
print(f"{bcolors.BOLD}Backup near-RT RIC VNFs:{bcolors.ENDC} ",b_VnRT)
print(f"{bcolors.BOLD}Backup O-CU VNFs: {bcolors.ENDC}",b_VoCU)
print(f"{bcolors.BOLD}Backup O-DU VNFs: {bcolors.ENDC}",b_VoDU)
print(f"{bcolors.OKBLUE}All Primary VNFs:{bcolors.ENDC} ",Vp)
print(f"{bcolors.OKBLUE}All Backup VNFs: {bcolors.ENDC}",Vb)
print(f"{bcolors.OKBLUE}All VNFs: {bcolors.ENDC}",Vall)
# + [markdown] id="j7GNLP85cgWd"
# # Export VNF Lists to CSV
# + id="oFYJmh12akOV"
#WRITE ALL DATA TO CSV
d = [S, p_VnRT, p_VoCU, p_VoDU, b_VnRT, b_VoCU, b_VoDU]
export_data = zip_longest(*d, fillvalue = '')
with open('numbers.csv', 'w', encoding="ISO-8859-1", newline='') as myfile:
wr = csv.writer(myfile)
wr.writerow(("S", "VnRT","VoCU","VoDU","b_VnRT","b_VoCU","b_VoDU"))
wr.writerows(export_data)
myfile.close()
# + [markdown] id="gtuJHqzicmdb"
# # Create the Servers' Links and Delays
# + colab={"base_uri": "https://localhost:8080/", "height": 773} id="nDWWhELOakOW" outputId="72ac7c87-7db5-488c-b574-7ce234d195bf"
#Server Infrastructure
import numpy as np
import networkx as nx
import random
import matplotlib.pyplot as plt
import math
random.seed(random_seed)
np.random.seed(random_seed)
Node_Count=server_counts
SFC_Count=5
Delay_Min=100
Delay_Max=1000
CPU_MAX=255
MEM_MAX=64000
nums = np.random.choice([0, 1], size=(Node_Count,Node_Count),p=([.8, .2]))
Arr = nx.from_numpy_array(nums)
nx.draw(Arr, with_labels=True)
ArrMat=nx.adjacency_matrix(Arr, nodelist=None, weight='weight')
Connections=ArrMat[:,:].toarray()
np.savetxt("Connections.csv", Connections, delimiter=",")
#print("Adjacency Matrix")
#print(Connections)
for (u, v) in Arr.edges():
Arr.edges[u,v]['weight'] = random.randint(100,1000)
ArrDelayMat=nx.adjacency_matrix(Arr, nodelist=None, weight='weight')
Delays=ArrDelayMat[:,:].toarray()
print("Delay Matrix")
print(Delays)
inter_server_delay = {}
for s in S:
inter_server_delay[s] = {}
for i in range(len(S)):
for j in range(len(S)):
inter_server_delay[S[i]][S[j]] = Delays[i][j]
print(inter_server_delay[S[i]])
# + [markdown] id="qBJEYqSeePKI"
# # CPU and Mem for VNFs and Servers
# + id="J8RqJ3ePakOX"
#CPU and Mem for each server
# cpu_mem_S = []
# for c in range(len(S)):
# cpu_mem_S.append([random.randint(2,20),random.randint(2,20)])
# #CPU and Mem for primary VNFs
# cpu_mem_Vp = []
# for c in range(len(Vp)):
# cpu_mem_Vp.append([random.randint(2,20),random.randint(2,20)])
random.seed(random_seed)
np.random.seed(random_seed)
#CPU and Mem for servers
cpu_S = {}
for s in S:
cpu_S[s] = random.randint(20,40)
mem_S = {}
for s in S:
mem_S[s] = random.randint(20,40)
#CPU and Mem for primary VNFs
cpu_vnf = {}
mem_vnf = {}
for p_vnf in Vp: # for each primary VNF
b_vnf = p_vnf.replace('p', 'b') # b_vnf is the backup of p_vnf
cpu_vnf[p_vnf] = cpu_vnf[b_vnf] = random.randint(2,20)
mem_vnf[p_vnf] = mem_vnf[b_vnf] = random.randint(2,20)
#Failover of VNF
fo_Vp = {}
for vnf in Vp:
fo_Vp[vnf] = random.randint(2,20)
#Regional VNFs (this list should be filled with the IDs of VNFs in the regional cloud (all NRT RIC))
regional_VNF_set = set()
for vnf in p_VnRT + b_VnRT:
regional_VNF_set.add(vnf)
#Edge VNFs (this list should be filled with the IDs of VNFs in the Edge cloud (all OCU + ODU))
edge_VNF_set = set()
for vnf in p_VoCU + b_VoCU:
edge_VNF_set.add(vnf)
for vnf in p_VoDU + b_VoDU:
edge_VNF_set.add(vnf)
#MUST CHANGE TO MORE REALISTIC APPROACH
# s_regional = []
# for c in range(int(len(S)*0.20)):
# temp= random.randint(0, len(S))
# while temp in s_regional:
# temp=random.randint(0, len(S))
# s_regional.append(temp)
regional_server_set = random.sample(S, k=int(len(S)*0.20)) # sample without replacement so the regional servers are distinct
#MUST CHANGE TO MORE REALISTIC APPROACH
# s_edge = []
# for c in range(int(len(S)*0.80)):
# temp1=random.randint(0,len(S))
# while temp1 in s_regional or temp1 in s_edge:
# temp1=random.randint(0,len(S))
# s_edge.append(temp1)
edge_server_set = [x for x in S if x not in regional_server_set]
# + colab={"base_uri": "https://localhost:8080/"} id="BKCdICuls4Ha" outputId="50abfa22-5dda-4661-f190-0bbe33807232"
print(regional_server_set)
print(edge_server_set)
print(cpu_S)
print(cpu_vnf)
print(mem_vnf)
print(mem_S)
print(fo_Vp)
# + [markdown] id="yck3T3DmeYGv"
# # MTTF and MTTR for VNFs and Servers
# + id="AKF8_KpXakOY"
random.seed(random_seed)
np.random.seed(random_seed)
#mttf and mttr for each server
mttf_S = {}
for s in S:
mttf_S[s] = random.randint(2,20)
mttr_S = {}
for s in S:
mttr_S[s] = random.randint(2,20)
#mttf and mttr for each primary VNF
mttf_Vp = {}
for v in Vp:
mttf_Vp[v] = random.randint(2,20)
mttr_Vp = {}
for v in Vp:
mttr_Vp[v] = random.randint(2,20)
#mttf and mttr for each backup VNF
mttf_Vb = {}
for v in Vb:
    mttf_Vb[v] = random.randint(2,20)
mttr_Vb = {}
for v in Vb:
    mttr_Vb[v] = random.randint(2,20)
# Availability of vnf v when it is installed on server s.
# These are the parameters 'c' in the model.
c = {}
for vp in Vp:
c[vp] = {}
for s in S:
c[vp][s] = (1/mttf_S[s]+mttf_Vp[vp])/((1/mttf_S[s]+mttf_Vp[vp])+mttr_S[s]+mttr_Vp[vp])
# + [markdown] id="Z4IjvbXReehg"
# # The infamous Gamma Set
# + id="hU-iqPZEakOY"
# Gamma_prime_set=[[[[1,0,0,0,0,0,0,0,0,0,0,0],[0,1,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,1,0,0,0]],[[1,0,0,0,0,0,0,0,0,0,0,0],[0,1,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,1,0,0,0]]],
# [[[1,0,0,0,0,0,0,0,0,0,0,0],[0,1,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,1,0,0,0]]],
# [[[1,0,0,0,0,0,0,0,0,0,0,0],[0,1,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,1,0,0,0]]],
# [[[1,0,0,0,0,0,0,0,0,0,0,0],[0,1,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,1,0,0,0]]],]
Gamma_prime_set = {}
class TwoTreeConfiguration:
p_Near_RT = None
p_OCU = None
p_ODU = None
b_Near_RT = None
b_OCU = None
b_ODU = None
# def __init__(self):
# pass
def __init__(self, p_Near_RT=None, p_OCU=None, p_ODU=None,
b_Near_RT=None, b_OCU=None, b_ODU=None):
self.p_Near_RT = p_Near_RT # a pair of (name of vnf, location)
self.p_OCU = p_OCU
self.p_ODU = p_ODU
self.b_Near_RT = b_Near_RT
self.b_OCU = b_OCU
self.b_ODU = b_ODU
def __str__(self):
return "p_Near_RT: {}, p_OCU: {}, p_ODU: {}, b_Near_RT: {}, b_OCU: {}, b_ODU:{}" \
.format(self.p_Near_RT , self.p_OCU, self.p_ODU,
self.b_Near_RT, self.b_OCU, self.b_ODU)
def get_VNF_and_location_pairs(self):
return [self.p_Near_RT, self.p_OCU, self.p_ODU,
self.b_Near_RT, self.b_OCU, self.b_ODU]
def get_primary_VNF_and_location_pairs(self):
return [self.p_Near_RT, self.p_OCU, self.p_ODU]
# + [markdown] id="uLfvVt8_emoT"
# # Master Problem
#
# + [markdown] id="tLhElbYR9uGp"
# Next tasks:
# - add NearRIC info into gamma
# - correct the right hand side of constraints. Currently, they are all 0
# - read chapter 13, cutting stock. If it's too difficult, refresh with chapter 2, 5, 7.
#
# Idea for extensions:
# - throughput
#
# + id="r9w5EXW5akOa" colab={"base_uri": "https://localhost:8080/"} outputId="dab6c36c-8750-4075-e19e-52dc133a3324"
# %pip install gurobipy # For Colab, Gurobi must be installed
import copy # used when storing configurations in z_var_set
import itertools # used in the pricing problem
import gurobipy as gp
from gurobipy import GRB
m = gp.Model("Master Problem")
z_var_set = []
const_28={}
const_29={}
const_30={}
# At most one configuration for a near RT
for v in Vp: #(28)
const_28[v]=m.addLConstr(0, GRB.LESS_EQUAL, 1, "const_28_" + str(v))
#CPU Constraint
for s in S: #(29)
const_29[s]=m.addLConstr(0, GRB.LESS_EQUAL, cpu_S[s], "const_29_"+str(s))
#Mem Constraint
for s in S: #(30)
const_30[s]=m.addLConstr(0, GRB.LESS_EQUAL, mem_S[s], "const_30_"+str(s))
exp = gp.LinExpr()
m.setObjective(exp, GRB.MAXIMIZE) # see https://www.gurobi.com/documentation/9.1/refman/py_model_setobjective.html
#########################################################
def add_configuration(config): # add a two-tree configuration (column) to the master
    col = gp.Column()
    Vr_vnf_location_pairs = config.get_VNF_and_location_pairs()
    Vr_primary_vnf_location_pairs = config.get_primary_VNF_and_location_pairs()
    # objective coefficient of z: summed availability of the primary placements
    Obj_coeff = 0
    for vp, s in Vr_primary_vnf_location_pairs:
        Obj_coeff += c[vp][s]
    #28 the configuration covers each of its primary VNFs once
    for p_vnf, _ in Vr_primary_vnf_location_pairs:
        col.addTerms(1, const_28[p_vnf])
    #29 total CPU this configuration uses on each server
    cpu_used = {}
    for v, s in Vr_vnf_location_pairs:
        cpu_used[s] = cpu_used.get(s, 0) + cpu_vnf[v]
    for s, used in cpu_used.items(): # dict keys are unique, so each constraint gets exactly one term
        col.addTerms(used, const_29[s])
    #30 total memory this configuration uses on each server
    mem_used = {}
    for v, s in Vr_vnf_location_pairs:
        mem_used[s] = mem_used.get(s, 0) + mem_vnf[v]
    for s, used in mem_used.items():
        col.addTerms(used, const_30[s])
    # add the variable for this configuration
    z = m.addVar(lb=0.0, ub=GRB.INFINITY, obj=Obj_coeff, vtype=GRB.CONTINUOUS,
                 name="z_"+str(len(z_var_set)), column=col)
    z_var_set.append((z, copy.deepcopy(config)))
#########################################################
# the function must be defined before the initial configurations are added
for near_RT_vnf in Gamma_prime_set.keys():
    for config in Gamma_prime_set[near_RT_vnf]: # for every two-tree configuration
        add_configuration(config)
# our model is not QCP, so we don't need to set the following parameter
# m.setParam(GRB.Param.QCPDual,1) # enable calculating the Dual Values
m.optimize()
m.write("master.lp")
m.write("masterMPS.mps")
# + [markdown] id="vRopnt61HMyX"
# # Dual Values
# + id="2-S0K4pu-JOj" colab={"base_uri": "https://localhost:8080/"} outputId="e2ee0d47-f27d-4df1-db82-aec41cb276e2"
duals = m.getAttr("Pi", m.getConstrs())
#To manually retrieve duals
# print(len(duals))
# print(duals)
# uvs_28=duals[0:len(Vp)]
# uvs_29=duals[len(Vp):len(Vp)+len(S)]
# uvs_30=duals[len(Vp)+len(S):len(Vp)+len(S)+len(S)]
uvs_28= m.getAttr("Pi", const_28)
uvs_29= m.getAttr("Pi",const_29)
uvs_30= m.getAttr("Pi",const_30)
print(uvs_28)
print(uvs_29)
print(uvs_30)
print(len(uvs_28))
print(len(uvs_29))
print(len(uvs_30))
# + [markdown] id="et74P38VeqnA"
# # Pricing Problem
# + id="CEg7AlU4HJel"
#Pricing Problem V1
# How to get dual value: https://www.gurobi.com/documentation/9.1/refman/pi.html
def pricing_problem(near_rt_ric):
uv_28= m.getAttr("Pi", const_28)
us_29= m.getAttr("Pi", const_29)
us_30= m.getAttr("Pi", const_30)
pp = gp.Model("Pricing Problem")
Vr=Vr_sets[near_rt_ric].values() #input to the pricing
const_36={}
const_37={}
const_38={}
const_39={}
const_40={}
const_41={}
    # C_R "hat": the reduced cost of a new configuration
sum_cvs_theta_vs = gp.LinExpr()
sum_u28_theta_vs = gp.LinExpr()
sum_u29_theta_vs = gp.LinExpr()
sum_u30_theta_vs = gp.LinExpr()
# add theta variables
theta = {}
for v in Vr:
theta[v] = {}
for s in S:
theta[v][s] = pp.addVar(vtype=GRB.BINARY, name="theta_"+v+'_'+s)
for v in Vr:
for s in S:
if v in Vp:
sum_cvs_theta_vs += theta[v][s] * c[v][s]
sum_u28_theta_vs += theta[v][s] * uv_28[v]
sum_u29_theta_vs += theta[v][s] * us_29[s] * cpu_vnf[v]
sum_u30_theta_vs += theta[v][s] * us_30[s] * mem_vnf[v]
c_r = sum_cvs_theta_vs - sum_u28_theta_vs - sum_u29_theta_vs - sum_u30_theta_vs
# print(c_r)
#####################
#36
for v in Vr:
lhs = gp.quicksum([theta[v][s] for s in S])
const_36[v]=pp.addLConstr(lhs, GRB.EQUAL, 1,"const_36_"+v)
#37 how to implement the for alls?
# for v in Vr:
# for v_prime in Vdep[v]:
for v in Vr:
const_37[v] = {}
if v not in Vdep.keys(): # v has no dependency
continue
for v_prime in Vdep[v]:
if v_prime not in const_37[v]:
const_37[v][v_prime] = {}
for s in S:
const_37[v][v_prime][s] = {}
for s_prime in S:
# print('v({}), v_prime({}), s({}), s_prime({})'.format(v, v_prime, s, s_prime))
# todo: MUST CHANGE constant_delay_threshold TO dynamic
exp = gp.LinExpr()
exp = inter_server_delay[s][s_prime] * (theta[v][s]+theta[v_prime][s_prime]-1)
name = "const_37_{}_{}_{}_{}".format(v, v_prime, s, s_prime)
const_37[v][v_prime][s][s_prime] = \
pp.addLConstr(exp, GRB.LESS_EQUAL, constant_delay_threshold, \
name=name)
# pp.addConstr(Delays[s][s_prime]*(theta[v,s]+theta[v_prime,s_prime]-1),GRB.LESS_EQUAL,constant_delay_threshold,)
#38 create a new list Vregional
# https://www.gurobi.com/documentation/9.1/refman/py_quicksum.html#pythonmethod:quicksum
for v in Vr:
if v in regional_VNF_set:
lhs = gp.quicksum([theta[v][s] for s in regional_server_set])
const_38[v] = pp.addLConstr(lhs, GRB.EQUAL, 1, name="const_38_"+str(v))
#39 create a new list Vedge
for v in Vr:
if v in edge_VNF_set:
lhs = gp.quicksum([theta[v][s] for s in edge_server_set])
const_39[v] = pp.addLConstr(lhs, GRB.EQUAL, 1 , name="const_39_"+str(v))
#Failovers #missing for all vRD in model #create Rd and primary
# for v in Vr[r]:
# if vpri in Vprimary and fo_Vp[v] > fo_Vp[???]:
# const_40[vpri]=m.addConstr(theta[][]+theta[][]<1,name="const_40_"+str(vpri))
    #40 a primary VNF and its backup must not share a server
    for p_vnf, s in itertools.product(Vr, S):
        if p_vnf in Vp: # p_vnf is a primary VNF
            const_40.setdefault(p_vnf, {}) # keep entries added for earlier servers
            b_vnf = p_vnf.replace('p', 'b') # b_vnf is the corresponding backup VNF
            const_40[p_vnf][s] = pp.addLConstr(theta[p_vnf][s]+theta[b_vnf][s],
                                               GRB.LESS_EQUAL, 1, name="const_40_"+p_vnf+'_'+s)
#41
# for v in Vr[r]:
# for v_hat in Vdep[v]:
# for s in range(len(S)):
# if fo_Vp[v] > fo_Vp[v_hat]:
# const_41[s]=pp.addConstr(theta[v,s]+theta[v_hat,s], GRB.LESS_EQUAL,1,name="const_41_"+str(s))
pp.setObjective(c_r, GRB.MAXIMIZE)
pp.Params.OutputFlag = 0
pp.optimize()
pp.write("pp.lp")
pp.write("pp.sol")
return pp, theta
# + id="W89TOgD1o0B3"
test_pp, test_theta = pricing_problem(p_VnRT[0])
# + colab={"base_uri": "https://localhost:8080/"} id="9TYg9zLGni0i" outputId="b550d214-8911-4c33-9a11-fef1d84f001c"
print(p_VnRT)
# + id="rRpUF_G3dg8n"
def get_configuration_from_pricing_problem(r, pp, theta):
    config = TwoTreeConfiguration()
    Vr = Vr_sets[r]
    # read off the chosen server for each of the six VNF roles
    for role in ('p_Near_RT', 'b_Near_RT', 'p_OCU', 'b_OCU', 'p_ODU', 'b_ODU'):
        vnf = Vr[role]
        assert vnf in theta
        for s in S:
            if theta[vnf][s].X > 0.9: # binary placement variable chosen in the pricing solution
                setattr(config, role, (vnf, s))
    return config
# + [markdown] id="irXS79A0fT6t"
# # Main Algorithm's flow
# + id="eMXcCr9d6YbJ" colab={"base_uri": "https://localhost:8080/"} outputId="a165ff9a-0d47-487b-8668-688154e2c349"
# import MasterModel
# master_LP = gp.Model("LP RMP Problem")
# initialize_model(master_LP, initial_configurations) # add variables and constraints
m.Params.OutputFlag = 0 # reduce log printing
m.write('master_init.lp')
new_configurations = [None]
iter_count = 0
while len(new_configurations) > 0:
new_configurations = []
m.optimize()
print(m.getAttr("Pi", const_28))
print(m.getAttr("Pi", const_29))
print(m.getAttr("Pi", const_30))
print('iter_count = {}, master LP = {}'.format(iter_count, m.objVal))
iter_count += 1
for r in p_VnRT: # TODO: check the correctness of the pricing model
pp, theta = pricing_problem(r)
if pp.objVal > 0.00001:
new_configurations.append(get_configuration_from_pricing_problem(r, pp, theta))
print('r = {}, rc = {}'.format(r, pp.objVal))
if len(new_configurations) == 0:
break
for configuration in new_configurations:
add_configuration(configuration) # TODO: check the correctness of the master after we add configuration (column)
if DEBUG:
if iter_count < 10 :
m.write('master_{}.lp'.format(iter_count))
z_star_LP = m.objVal
for z, _ in z_var_set:
z.vtype = GRB.BINARY
#master_ILP = turn_master_LP_into_master_ILP(master_LP)
m.optimize()
z_tilde_ILP = m.objVal
# + colab={"base_uri": "https://localhost:8080/"} id="vL26n6b-Zlni" outputId="0dfc9223-a0a3-4809-ef90-ed31c4f86dcb"
print("z_star_LP = ", z_star_LP)
print("z_tilde_ILP = ", z_tilde_ILP)
gap = (z_star_LP - z_tilde_ILP) / (0.00001+z_tilde_ILP) * 100.0
print("gap = {} (%)".format(gap))
# + colab={"base_uri": "https://localhost:8080/"} id="qKRbYS_yb6Yh" outputId="04714ce3-ee84-4d5b-821e-597914570df2"
if DEBUG:
for _, config in z_var_set:
print(config)
# + id="rF6SufY6wGJx"
#from google.colab import drive
#drive.mount('/content/drive')
# + id="5eBSuqm06jQF"
# for i in range(nearRT_counts):
# print(new_configurations[i])
| Reliable_ORAN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="http://www.codeheroku.com/static/blog/images/pid14_results.png">
#
# Ever wondered how Google comes up with movies that are similar to the ones you like? After reading this post, you will be able to build such a recommendation system yourself.
#
# It turns out that there are (mostly) three ways to build a recommendation engine:
#
# 1. Popularity based recommendation engine
# 2. Content based recommendation engine
# 3. Collaborative filtering based recommendation engine
#
# Now you might be thinking “That’s interesting. But, what are the differences between these recommendation engines?”. Let me help you out with that.
#
# ### Popularity based recommendation engine:
#
# Perhaps this is the simplest kind of recommendation engine that you will come across. The trending list you see on YouTube or Netflix is based on this algorithm. It keeps track of view counts for each movie/video and then lists movies in descending order of views (highest view count to lowest). Pretty simple but effective. Right?
#
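# As a tiny illustration of the idea (the titles and view counts below are made up), a popularity-based engine is essentially a sort on an engagement metric:

```python
# Hypothetical view counts -- a popularity-based engine simply sorts by them.
view_counts = {
    "Movie A": 1200,
    "Movie B": 5400,
    "Movie C": 870,
    "Movie D": 3100,
}

# List titles from highest view count to lowest.
trending = sorted(view_counts, key=view_counts.get, reverse=True)
print(trending)  # ['Movie B', 'Movie D', 'Movie A', 'Movie C']
```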
#
# ### Content based recommendation engine:
#
# This type of recommendation system takes in a movie that a user currently likes as input. Then it analyzes the contents (storyline, genre, cast, director etc.) of the movie to find other movies with similar content. It then ranks the similar movies according to their similarity scores and recommends the most relevant movies to the user.
#
# ### Collaborative filtering based recommendation engine:
#
# This algorithm first tries to find similar users based on their activities and preferences (for example, both users watch the same type of movies or movies directed by the same director). Now, between these users (say, A and B), if user A has seen a movie that user B has not seen yet, then that movie gets recommended to user B, and vice-versa. In other words, the recommendations get filtered based on the collaboration between similar users’ preferences (thus the name “Collaborative Filtering”). One typical application of this algorithm can be seen on the Amazon e-commerce platform, where you get to see the “Customers who viewed this item also viewed” and “Customers who bought this item also bought” lists.
#
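# To make the idea concrete, here is a toy sketch of user-based collaborative filtering (the ratings matrix below is made up): find the user most similar to user A by cosine similarity of their rating vectors, then recommend movies that the similar user rated highly and A has not seen yet.

```python
import numpy as np

# Made-up ratings matrix: rows = users, columns = movies (0 = not seen).
ratings = np.array([
    [5, 4, 0, 0],  # user A
    [4, 5, 5, 1],  # user B
    [1, 0, 0, 5],  # user C
])

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Find the user most similar to user A (row 0).
sims = [cosine(ratings[0], ratings[i]) for i in range(1, len(ratings))]
most_similar = 1 + int(np.argmax(sims))  # user B

# Recommend movies that the similar user rated highly but A has not seen.
unseen = np.where(ratings[0] == 0)[0]
recommended = [int(m) for m in unseen if ratings[most_similar][m] >= 4]
print(recommended)  # [2]
```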
# <img src="http://www.codeheroku.com/static/blog/images/pid14_img1.png">
#
# Look at the following picture to get a better intuition over content based and collaborative filtering based recommendation systems-
#
# <img src="http://www.codeheroku.com/static/blog/images/pid14_rs_diff.png">
#
# Another type of recommendation system can be created by mixing properties of two or more types of recommendation systems. This type is known as a hybrid recommendation system.
#
# In this article, we are going to implement a Content based recommendation system using the scikit-learn library.
#
# ### Finding the similarity
#
# We know that our recommendation engine will be content based. So, we need to find similar movies to a given movie and then recommend those similar movies to the user. The logic is pretty straightforward. Right?
#
# But, wait…. How can we find out which movies are similar to the given movie in the first place? How can we find out how similar (or dissimilar) two movies are?
#
# Let us start with something simple and easy to understand.
#
# Suppose, you are given the following two texts:
#
# Text A: London Paris London
#
# Text B: Paris Paris London
#
# How would you find the similarity between Text A and Text B?
#
# Let’s analyze these texts….
#
# 1. Text A: Contains the word “London” 2 times and the word “Paris” 1 time.
# 2. Text B: Contains the word “London” 1 time and the word “Paris” 2 times.
#
# Now, what will happen if we try to represent these two texts in a 2D plane (with “London” on the X axis and “Paris” on the Y axis)? Let’s try to do this.
#
# It will look like this-
#
# <img src="http://www.codeheroku.com/static/blog/images/pid14_text_2d_repr.png">
#
# Here, the red vector represents “Text A” and the blue vector represents “Text B”.
#
# Now we have graphically represented these two texts. So, now can we find out the similarity between these two texts?
#
# The answer is “Yes, we can”. But, exactly how?
#
# These two texts are represented as vectors. Right? So, we can say that two vectors are similar if the distance between them is small. By distance, we mean the angular distance between the two vectors, which is represented by θ (theta). From the machine learning perspective, the value of cos θ makes more sense to us than the value of θ itself, because the cosine (or “cos”) function maps first-quadrant angles to values between 0 and 1 (remember: cos 90° = 0 and cos 0° = 1).
#
# And from high school maths, we can remember that there is actually a formula for finding out cos θ between two vectors. See the picture below-
#
# <img src="http://www.codeheroku.com/static/blog/images/pid14_find_cos_theta.png">
#
# Don’t get scared, we don’t need to implement the formula from scratch for finding out cos θ. We have our friend Scikit Learn to calculate that for us :)
#
# Let’s see how we can do that.
#
# At first, we need to have text A and B in our program:
#
text = ["London Paris London","Paris Paris London"]
# Now, we need to find a way to represent these texts as vectors. The `CountVectorizer()` class from `sklearn.feature_extraction.text` library can do this for us. We need to import this library before we can create a new `CountVectorizer()` object.
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
count_matrix = cv.fit_transform(text)
# `count_matrix` gives us a sparse matrix. To make it human readable, we need to apply the `toarray()` method to it. And before printing out this `count_matrix`, let us first print out the feature list (or word list) that has been fed to our `CountVectorizer()` object.
print(cv.get_feature_names_out()) # on scikit-learn versions before 1.0, use cv.get_feature_names()
print(count_matrix.toarray())
# This indicates that the word ‘london’ occurs 2 times in A and 1 time in B. Similarly, the word ‘paris’ occurs 1 time in A and 2 times in B. Makes sense. Right?
#
# Now, we need to find the cosine (or “cos”) similarity between these vectors to find out how similar they are to each other. We can calculate this using the `cosine_similarity()` function from the `sklearn.metrics.pairwise` library.
from sklearn.metrics.pairwise import cosine_similarity
similarity_scores = cosine_similarity(count_matrix)
print(similarity_scores)
# What does this output indicate?
#
# We can interpret this output like this-
#
# 1. Each row of the similarity matrix indicates each sentence of our input. So, row 0 = Text A and row 1 = Text B.
# 2. The same thing applies for columns. To get a better understanding over this, we can say that the output given above is same as the following:
#
# <code>
# Text A: Text B:
# Text A: [[1. 0.8]
# Text B: [0.8 1.]]
# </code>
# <br>
# Interpreting this, we see that Text A is similar to Text A (itself) by 100% (position [0,0]) and Text A is similar to Text B by 80% (position [0,1]). And by looking at this kind of output, we can easily say that it is always going to be a symmetric matrix. Because, if Text A is similar to Text B by 80%, then Text B is also going to be similar to Text A by 80%.
#
#
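# As a quick sanity check on the 80% figure, we can plug the count vectors into the cos θ formula by hand:

```python
import numpy as np

# Count vectors from above: Text A -> [2, 1], Text B -> [1, 2] (london, paris).
a = np.array([2, 1])
b = np.array([1, 2])

cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(round(cos_theta, 4))  # 0.8 -- matches the off-diagonal entries above
```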
# Now we know how to find similarity between contents. So, let’s try to apply this knowledge to build a content based movie recommendation engine.
#
# ### Building the recommendation engine:
#
# >The movie dataset that we are going to use in our recommendation engine can be downloaded from [Course Github Repo](https://github.com/codeheroku/Introduction-to-Machine-Learning/blob/master/Building%20a%20Movie%20Recommendation%20Engine/movie_dataset.csv).
#
# After downloading the dataset, we need to import all the required libraries and then read the csv file using `read_csv()` method.
# +
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
df = pd.read_csv("movie_dataset.csv")
# -
# If you visualize the dataset, you will see that it has a lot of extra info about each movie. We don’t need all of it. So, we choose the keywords, cast, genres and director columns as our feature set (the so-called “content” of the movie).
features = ['keywords','cast','genres','director']
# Our next task is to create a function for combining the values of these columns into a single string.
def combine_features(row):
return row['keywords']+" "+row['cast']+" "+row['genres']+" "+row['director']
# Now, we need to call this function over each row of our dataframe. But, before doing that, we need to clean and preprocess the data for our use. We will fill all the NaN values with blank string in the dataframe.
# +
for feature in features:
df[feature] = df[feature].fillna('') #filling all NaNs with blank string
df["combined_features"] = df.apply(combine_features,axis=1) #applying combine_features() over each row and storing the combined string in the "combined_features" column
# -
# Now that we have obtained the combined strings, we can now feed these strings to a CountVectorizer() object for getting the count matrix.
cv = CountVectorizer() #creating new CountVectorizer() object
count_matrix = cv.fit_transform(df["combined_features"]) #feeding combined strings(movie contents) to CountVectorizer() object
# At this point, 60% work is done. Now, we need to obtain the cosine similarity matrix from the count matrix.
cosine_sim = cosine_similarity(count_matrix)
# Now, we will define two helper functions to get movie title from movie index and vice-versa.
def get_title_from_index(index):
return df[df.index == index]["title"].values[0]
def get_index_from_title(title):
return df[df.title == title]["index"].values[0]
# Our next step is to get the title of the movie that the user currently likes. Then we will find the index of that movie. After that, we will access the row corresponding to this movie in the similarity matrix. Thus, we will get the similarity scores of all other movies from the current movie. Then we will enumerate through all the similarity scores of that movie to make a tuple of movie index and similarity score. This will convert a row of similarity scores like this- `[1 0.5 0.2 0.9]` to this- `[(0, 1) (1, 0.5) (2, 0.2) (3, 0.9)]` . Here, each item is in this form- (movie index, similarity score).
movie_user_likes = "Avatar"
movie_index = get_index_from_title(movie_user_likes)
similar_movies = list(enumerate(cosine_sim[movie_index])) #accessing the row corresponding to given movie to find all the similarity scores for that movie and then enumerating over it
#
# Now comes the most vital point. We will sort the list `similar_movies` according to similarity scores in descending order. Since the most similar movie to a given movie will be itself, we will discard the first element after sorting the movies.
sorted_similar_movies = sorted(similar_movies,key=lambda x:x[1],reverse=True)[1:6]
# Now, we will run a loop to print first 5 entries from `sorted_similar_movies` list.
i=0
print("Top 5 similar movies to "+movie_user_likes+" are:\n")
for element in sorted_similar_movies:
    print(get_title_from_index(element[0]))
    i=i+1
    if i>=5:
        break
# # Home Work Solution
# Let's inspect the vote_average feature and check if there are any null values. Looks like it is clean.
df["vote_average"].unique()
# Now, we will again sort our sorted_similar_movies but this time with respect to vote_average. x[0] has the index of the movie in the data frame.
sort_by_average_vote = sorted(sorted_similar_movies,key=lambda x:df["vote_average"][x[0]],reverse=True)
print(sort_by_average_vote)
i=0
print("Suggesting top 5 movies in order of Average Votes:\n")
for element in sort_by_average_vote:
    print(get_title_from_index(element[0]))
    i=i+1
    if i>=5:
        break
# And we are done here!
#
# > You can download the Python script and associated datasets from [Course Github Repo](https://github.com/codeheroku/Introduction-to-Machine-Learning/tree/master/Building%20a%20Movie%20Recommendation%20Engine).
#
# After seeing the output, I went one step further to compare it to other recommendation engines.
#
# So, I searched Google for similar movies to “Avatar” and here is what I got-
#
# <img src="http://www.codeheroku.com/static/blog/images/pid14_results.png">
#
# See the output? Our simple movie recommendation engine works pretty well. Right? It’s good as a basic-level implementation, but it can be further improved with many other factors. Try to optimize this recommendation engine yourself and let us know your story at <EMAIL>.
#
# >If this article was helpful to you, check out our [Introduction to Machine Learning](http://www.codeheroku.com/course?course_id=1) Course at [Code Heroku](http://www.codeheroku.com/) for a complete guide to Machine Learning.
#
# <br><br>
# <p align="center"><a href="http://www.codeheroku.com/">
# <img src="http://www.codeheroku.com/static/images/logo5.png"></a>
# </p>
#
# <br>
| Building a Movie Recommendation Engine/Assignment Solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from scipy.stats import bernoulli
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
plt.style.use('ggplot')
# Let's say you invested $100 in a stock with a mean monthly return of 1%. But there is dispersion around the mean: the actual returns of the stock each month are 1% + 2% = 3% or 1% - 2% = -1%, with equal probability. By simulating many possible ways this scenario could play out over time, let's look at the distribution of ending values of the portfolio over several time horizons.
# We'll model these returns using a _Bernoulli_ random variable, which we can simulate in code using `scipy.stats.bernoulli`. A Bernoulli random variable takes the value 1 with probability `p` and the value 0 with probability `1 - p`.
def generate_returns(num_returns):
p = 0.5
return 0.01 + (bernoulli.rvs(p, size=num_returns)-0.5)*0.04
print(generate_returns(6))
# First, let's look at the distribution of ending values of the stock over 6 months.
final_values = [100*np.prod(generate_returns(6)+1) for i in range(1,1000)]
plt.hist(final_values, bins=20)
plt.ylabel('Frequency')
plt.xlabel('Value after 6 months')
plt.show()
# After 6 months, the distribution of possible values looks symmetric and bell-shaped. This is because there are more paths that lead to middle-valued ending prices. Now, let's look at the ending values of the stock over 20 months.
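# The "more paths lead to the middle" claim can be checked exactly: with equal up/down probabilities, the number of up months out of 6 follows a Binomial(6, 0.5) distribution, and the middle outcome (3 up months) collects the most paths:

```python
from math import comb

n, p = 6, 0.5
# P(k up-months out of n) = C(n, k) * p**k * (1-p)**(n-k); C(n, k) counts the paths.
probs = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
for k, prob in enumerate(probs):
    print(k, "up months:", prob)
print(probs[3])  # 0.3125 -- 20 of the 64 equally likely paths end in the middle
```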
final_values = [100*np.prod(generate_returns(20)+1) for i in range(1,1000)]
plt.hist(final_values, bins=20)
plt.ylabel('Frequency')
plt.xlabel('Value after 20 months')
plt.show()
# Finally, let's look at the ending values of the stock over 100 months.
final_values = [100*np.prod(generate_returns(100)+1) for i in range(1,1000)]
plt.hist(final_values, bins=20)
plt.ylabel('Frequency')
plt.xlabel('Value after 100 months')
plt.show()
# As you can see, the distribution gets less and less normal-looking over time. The upside potential is unlimited—there always exists the possibility that the stock will continue to appreciate over time. The downside potential, however, is limited to zero—you cannot lose more than 100% of your investment. The distribution we see emerging here is distinctly asymmetric—the values are always positive, but there is a long tail on the right-hand side: we say it is _positively skewed_. The distribution is approaching what's called a _lognormal distribution_. Let's talk more about how this distribution emerges in the next video.
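# One way to see why the lognormal emerges (a quick check, with simulation parameters chosen for illustration): the ending value is a product of monthly growth factors, so its logarithm is a sum of log growth factors, and by the central limit theorem that sum becomes approximately normal. Taking logs of the simulated ending values should therefore remove most of the skew:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
# 10,000 simulated 100-month paths with monthly returns of +3% or -1%, as above.
returns = rng.choice([0.03, -0.01], size=(10_000, 100))
final_values = 100 * np.prod(1 + returns, axis=1)

print(skew(final_values))          # clearly positive: right-skewed ending values
print(skew(np.log(final_values)))  # close to zero: log values are roughly symmetric
```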
| TradingAI/Quantitative Trading/Lesson 07 - Stock Returns/returns_distributions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="Images/boolean.jpg" alt="Drawing" style="width: 400px;"/>
# # Boolean
#
# A Boolean can have only one of two values: "True" or "False".<br>
# You can evaluate any expression in Python and get one of these two answers.
# Examples 1
print(10 > 9)
print(10 == 9)
print(10 < 9)
# +
# Examples 2
print(bool("word"))
print(bool(""))
print(bool(1))
print(bool(0))
print(bool(None))
# -
# ### Example
# +
# Python code to check whether a number
# is even or odd using bool()
# We will learn how to define a function in a later lesson
def check(num):
    return num % 2 == 0 # the comparison already yields a bool
# Driver Code
num = int(input("Enter a number: "))
if check(num):
    print("Even")
else:
    print("Odd")
# -
# ## More Info on Booleans
# https://www.w3schools.com/python/python_booleans.asp
| UE1/02_Boolean.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## Demonstration of Regularized Multivariate Linear Regression
#
# This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [<EMAIL>](mailto:<EMAIL>).
# This notebook demonstrates a regularized multivariate linear regression, the [ridge regression](https://en.wikipedia.org/wiki/Ridge_regression), which is used for [multicollinear](https://en.wikipedia.org/wiki/Multicollinearity) features.
import numpy as np
import matplotlib.pyplot as plt
# ### Generate Dataset
#
# In the following, a synthetic dataset with $N$ examples is generated by implementing a simple two-dimensional linear relationship and additive noise. The features are then lifted into a higher dimensional feature space by a linear mapping. This leads to linear correlations between features.
# +
N = 1000 # total number of examples
F = 6 # dimensionality of lifted feature space
alpha = 1.2 # true intercept
theta = [0.1, 0.25] # true slopes
np.random.seed(123)
X = np.random.uniform(low=-5, high=10, size=(N, 2))
Y = alpha + np.dot(X, theta) + .5 * np.random.normal(size=(N))
# lifting of feature space by linear mapping
A = np.random.uniform(low=-2, high=2, size=(2, F))
A = A * np.random.choice([0, 1], size=(2, F), p=[2./10, 8./10])
XF = np.dot(X, A)
# -
# The condition number of the (unscaled) empirical covariance matrix $\mathbf{X}^T \mathbf{X}$ is used as a measure for the ill-conditioning of the normal equation. The results show that the condition number for the lifted feature space is indeed very high due to the multicollinear features.
# +
kappa_x = np.linalg.cond(X.T @ X)
kappa_xf = np.linalg.cond(XF.T @ XF)
print('Condition number of covariance matrix of \n \t uncorrelated features: {}'.format(kappa_x))
print('\t correlated features: {}'.format(kappa_xf))
# -
# ### Estimate Parameters of Ridge Regression
#
# Let's estimate the parameters of the linear multivariate regression model using ridge regression. First, some helper functions are defined for estimating the regression coefficients, the prediction model, and evaluation of the results.
# +
def ridge_regression(Xt, Y, mu=0):
    return np.linalg.inv(Xt.T @ Xt + mu*np.eye(F+1)) @ Xt.T @ Y

def predict(Xt, theta_hat):
    return np.dot(Xt, theta_hat)

def evaluate(Y, Y_hat):
    e = Y - Y_hat
    std_e = np.std(e)
    TSS = np.sum((Y - np.mean(Y))**2)
    RSS = np.sum((Y - Y_hat)**2)
    Rs = 1 - RSS/TSS
    return std_e, Rs
# -
# First, the ordinary least-squares approach is applied by setting the regularization parameter to $\mu = 0$. The parameter estimates exhibit large errors, which is also reflected in the performance metrics.
# +
Xt = np.concatenate((np.ones((len(XF), 1)), XF), axis=1)
theta_hat = ridge_regression(Xt, Y)
Y_hat = predict(Xt, theta_hat)
std_e, Rs = evaluate(Y, Y_hat)
print('Estimated/true intercept: \t\t {0:.3f} / {1:.3f}'.format(theta_hat[0], alpha))
print('Standard deviation of residual error: \t {0:.3f}'.format(std_e))
print('Coefficient of determination: \t\t {0:.3f}'.format(Rs))
# -
# Now the ridge regression is used with $\mu = 0.001$. The results are much better; however, it is not clear whether the regularization parameter has been chosen appropriately.
# +
theta_hat = ridge_regression(Xt, Y, mu=1e-3)
Y_hat = predict(Xt, theta_hat)
std_e, Rs = evaluate(Y, Y_hat)
print('Estimated/true intercept: \t\t {0:.3f} / {1:.3f}'.format(theta_hat[0], alpha))
print('Standard deviation of residual error: \t {0:.3f}'.format(std_e))
print('Coefficient of determination: \t\t {0:.3f}'.format(Rs))
# -
# ### Hyperparameter Search
#
# In order to optimize the regularization parameter $\mu$ for the given dataset, the ridge regression is evaluated for a series of potential regularization parameters. After a first coarse search, the search is refined in a second step. The results of this refined search are computed and plotted in the following. It can be concluded that $\mu = 3 \cdot 10^{-11}$ seems to be a good choice.
# +
results = list()
for n in np.linspace(-12, -10, 100):
    mu = 10.0**n
    theta_hat = ridge_regression(Xt, Y, mu=mu)
    Y_hat = predict(Xt, theta_hat)
    std_e, Rs = evaluate(Y, Y_hat)
    results.append((mu, std_e, Rs))
results = np.array(results)
# +
fig, ax = plt.subplots()
plt.plot(results[:, 0], results[:, 1], label=r'$\sigma_e$')
plt.plot(results[:, 0], results[:, 2], label=r'$R^2$')
ax.set_xscale('log')
plt.xlabel(r'$\mu$')
plt.ylim([-2, 2])
plt.legend()
plt.grid(True, which="both")
# -
# **Copyright**
#
# This notebook is provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources).
# The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/)
# , the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: <NAME>, Data driven audio signal processing - Lecture supplementals.
| linear_regression/ridge_regression_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_amazonei_mxnet_p36
# language: python
# name: conda_amazonei_mxnet_p36
# ---
# import libraries
import pandas as pd
import numpy as np
import os
# +
csv_file = 'data/file_information.csv'
plagiarism_df = pd.read_csv(csv_file)
# print out the first few rows of data info
plagiarism_df.head()
# +
# Read in a csv file and return a transformed dataframe
def numerical_dataframe(csv_file='data/file_information.csv'):
    '''Reads in a csv file which is assumed to have `File`, `Category` and `Task` columns.
    This function does two things:
    1) converts `Category` column values to numerical values
    2) adds a new, numerical `Class` label column.
    The `Class` column will label plagiarized answers as 1 and non-plagiarized as 0.
    Source texts have a special label, -1.
    :param csv_file: The directory for the file_information.csv file
    :return: A dataframe with numerical categories and a new `Class` label column'''
    plagiarism_df = pd.read_csv(csv_file)
    cat_dct = {"non": 0, "heavy": 1, "light": 2, "cut": 3, "orig": -1}
    return plagiarism_df.assign(
        Category=lambda x: x["Category"].map(lambda y: cat_dct[y]),
        Class=lambda x: x["Category"].map(lambda y: y if y < 1 else 1),
    )
# +
# informal testing, print out the results of a called function
# create new `transformed_df`
transformed_df = numerical_dataframe(csv_file ='data/file_information.csv')
# check work
# check that all categories of plagiarism have a class label = 1
transformed_df.head(10)
# +
# test cell that creates `transformed_df`, if tests are passed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# importing tests
import problem_unittests as tests
# test numerical_dataframe function
tests.test_numerical_df(numerical_dataframe)
# if above test is passed, create NEW `transformed_df`
transformed_df = numerical_dataframe(csv_file ='data/file_information.csv')
# check work
print('\nExample data: ')
transformed_df.head()
# +
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import helpers
# create a text column
text_df = helpers.create_text_column(transformed_df)
text_df.head()
# +
random_seed = 1 # can change; set for reproducibility
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import helpers
# create new df with Datatype (train, test, orig) column
# pass in `text_df` from above to create a complete dataframe, with all the information you need
complete_df = helpers.train_test_dataframe(text_df, random_seed=random_seed)
# check results
complete_df.head(10)
# +
from sklearn.feature_extraction.text import CountVectorizer
import toolz as tz
import re
def get_text(df: pd.DataFrame, filename: str) -> str:
    return df.loc[lambda x: x["File"] == filename, "Text"].iat[0]

#def get_source_task(filename: str) -> str:
#    task = re.search(r'\w*_(\w*)\.txt', filename).group(1)
#    return f"orig_{task}.txt"

def get_source_task(filename: str) -> str:
    task = re.search(r'\w*_(\w*)\.txt', filename)
    if task:
        return f"orig_{task.group(1)}.txt"

def make_n_gram_array(a_text: str, s_text: str, n: int) -> np.ndarray:
    counts = CountVectorizer(analyzer='word',
                             ngram_range=(n, n),
                             #stop_words="english",
                             )
    return counts.fit_transform([a_text, s_text]).toarray()
# Calculate the ngram containment for one answer file/source file pair in a df
def calculate_containment(df, n, answer_filename):
'''Calculates the containment between a given answer text and its associated source text.
This function creates a count of ngrams (of a size, n) for each text file in our data.
Then calculates the containment by finding the ngram count for a given answer text,
and its associated source text, and calculating the normalized intersection of those counts.
:param df: A dataframe with columns,
'File', 'Task', 'Category', 'Class', 'Text', and 'Datatype'
:param n: An integer that defines the ngram size
:param answer_filename: A filename for an answer text in the df, ex. 'g0pB_taskd.txt'
:return: A single containment value that represents the similarity
between an answer text and its source text.
'''
category = df.loc[lambda x: x["File"]==filename, "Category"].iat[0]
print(f"Category is {category}\n")
a_text = get_text(df, answer_filename)
print(a_text)
s_text = get_text(df, get_source_task(answer_filename))
print(s_text)
ngram_array = make_n_gram_array(a_text, s_text, n)
print(n)
#intersection_counts = np.where(ngram_array[0] & ngram_array[1])[0].shape[0]
intersection_counts = np.where(ngram_array[0] & ngram_array[1])[0].shape[0]
print(f"Intersection count is {intersection_counts}\n")
total_n_grams_in_a = np.where(ngram_array[0] > 0)[0].shape[0]
print(f"Total n_grams in a {total_n_grams_in_a}\n")
print(f"Containment is {intersection_counts / total_n_grams_in_a}\n\n")
return intersection_counts / total_n_grams_in_a
def containment(ngram_array: np.ndarray) -> float:
    '''Containment is a measure of text similarity. It is the normalized
    intersection of ngram word counts in two texts.
    :param ngram_array: an array of ngram counts for an answer and source text.
    :return: a normalized containment value.'''
    intersection_counts = np.where(ngram_array[0] & ngram_array[1])[0].shape[0]
    total_n_grams_in_a = np.where(ngram_array[0] > 0)[0].shape[0]
    return intersection_counts / total_n_grams_in_a

def text_to_containment(a_text: str, s_text: str, n: int) -> float:
    ngram_array = make_n_gram_array(a_text, s_text, n)
    return containment(ngram_array)

def add_source_col(df: pd.DataFrame) -> pd.DataFrame:
    return df.assign(Source=lambda x: x["File"].map(get_source_task)).merge(
        df[["File", "Text"]], left_on="Source", right_on="File"
    )

def add_n_gram_col(df: pd.DataFrame, n: int) -> pd.DataFrame:
    return df.assign(
        **{
            f"score_{n}": lambda y: y.apply(
                lambda x: text_to_containment(x["Text_x"], x["Text_y"], n), axis=1
            )
        }
    )

def compare_n_gram_performance(df: pd.DataFrame, n: int, group: str) -> pd.DataFrame:
    df = add_n_gram_col(df, n)
    return df.groupby(group)[f"score_{n}"].describe()
# -
with_source_txt = add_source_col(complete_df)
with_1_gram_df = add_n_gram_col(with_source_txt, 1)
test_names = ['g0pA_taska.txt', 'g0pA_taskb.txt', 'g0pA_taskc.txt', 'g0pA_taskd.txt']
with_1_gram_df[lambda x: x["File_x"].isin(test_names)]
with_3_gram_df = add_n_gram_col(with_source_txt, 3)
with_3_gram_df[lambda x: x["File_x"].isin(test_names)]
compare_n_gram_performance(with_source_txt, 1, "Task")
compare_n_gram_performance(with_source_txt, 3, "Task")
compare_n_gram_performance(with_source_txt, 1, "Category")
compare_n_gram_performance(with_source_txt, 2, "Category")
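# The containment computation can also be sketched without sklearn: collect the set of word n-grams for the answer and for the source, and divide the size of their intersection by the number of distinct n-grams in the answer. Because the `np.where` logic above only tests presence (not counts), a set-based version behaves the same way. Toy texts only:

```python
def word_ngrams(text, n):
    """Set of word n-grams (as tuples) in a lowercased, whitespace-split text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def simple_containment(answer, source, n):
    """Share of the answer's distinct n-grams that also appear in the source."""
    a = word_ngrams(answer, n)
    return len(a & word_ngrams(source, n)) / len(a)

answer = "the monster wanders through the dark wilderness"
source = "the monster wanders slowly through the wilderness"
print(simple_containment(answer, source, 1))  # 5 of 6 distinct unigrams shared
print(simple_containment(answer, answer, 2))  # identical texts -> 1.0
```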
# +
from sklearn.feature_extraction.text import CountVectorizer
import toolz as tz
import re
def get_text(df: pd.DataFrame, filename: str) -> str:
    return df.loc[lambda x: x["File"] == filename, "Text"].iat[0]

def get_source_task(filename: str) -> str:
    task = re.search(r'\w*_(\w*)\.txt', filename)
    if task:
        return f"orig_{task.group(1)}.txt"

def make_n_gram_array(a_text: str, s_text: str, n: int) -> np.ndarray:
    counts = CountVectorizer(analyzer='word',
                             ngram_range=(n, n),
                             #stop_words="english",
                             )
    return counts.fit_transform([a_text, s_text]).toarray()

def containment(ngram_array: np.ndarray) -> float:
    '''Containment is a measure of text similarity. It is the normalized
    intersection of ngram word counts in two texts.
    :param ngram_array: an array of ngram counts for an answer and source text.
    :return: a normalized containment value.'''
    intersection_counts = np.where(ngram_array[0] & ngram_array[1])[0].shape[0]
    total_n_grams_in_a = np.where(ngram_array[0] > 0)[0].shape[0]
    return intersection_counts / total_n_grams_in_a
# Calculate the ngram containment for one answer file/source file pair in a df
def calculate_containment(df, n, answer_filename):
    '''Calculates the containment between a given answer text and its associated source text.
    This function creates a count of ngrams (of a size, n) for each text file in our data.
    Then calculates the containment by finding the ngram count for a given answer text,
    and its associated source text, and calculating the normalized intersection of those counts.
    :param df: A dataframe with columns,
        'File', 'Task', 'Category', 'Class', 'Text', and 'Datatype'
    :param n: An integer that defines the ngram size
    :param answer_filename: A filename for an answer text in the df, ex. 'g0pB_taskd.txt'
    :return: A single containment value that represents the similarity
        between an answer text and its source text.
    '''
    # look up by the answer_filename argument, not a global variable
    category = df.loc[lambda x: x["File"] == answer_filename, "Category"].iat[0]
    print(f"Category is {category}\n")
    a_text = get_text(df, answer_filename)
    print(a_text)
    s_text = get_text(df, get_source_task(answer_filename))
    print(s_text)
    ngram_array = make_n_gram_array(a_text, s_text, n)
    return containment(ngram_array)
def text_to_containment(a_text: str, s_text: str, n: int) -> float:
    ngram_array = make_n_gram_array(a_text, s_text, n)
    return containment(ngram_array)

def add_source_col(df: pd.DataFrame) -> pd.DataFrame:
    return df.assign(Source=lambda x: x["File"].map(get_source_task)).merge(
        df[["File", "Text"]], left_on="Source", right_on="File"
    )

def add_n_gram_col(df: pd.DataFrame, n: int) -> pd.DataFrame:
    return df.assign(
        **{
            f"score_{n}": lambda y: y.apply(
                lambda x: text_to_containment(x["Text_x"], x["Text_y"], n), axis=1
            )
        }
    )

def compare_n_gram_performance(df: pd.DataFrame, n: int, group: str) -> pd.DataFrame:
    df = add_n_gram_col(df, n)
    return df.groupby(group)[f"score_{n}"].describe()
# -
with_source_txt = complete_df.assign(
    Source=lambda x: x["File"].map(get_source_task)
).merge(complete_df[["File", "Text"]], left_on="Source", right_on="File")
compare_n_gram_performance(with_source_txt, 2, "Category")
compare_n_gram_performance(with_source_txt, 1, "Category")
with_score_3 = with_source_txt.assign(
    score_3=lambda y: y.apply(
        lambda x: text_to_containment(x["Text_x"], x["Text_y"], 3), axis=1
    )
)
with_score_3.groupby("Category")["score_3"].describe()
with_source_txt.head()
with_source_txt.apply(lambda x: text_to_containment(x["Text_x"], x["Text_y"], 3), axis=1)
# + jupyter={"outputs_hidden": true}
with_source_txt
# +
# select a value for n
n = 3
# indices for first few files
test_indices = range(5)
# iterate through files and calculate containment
category_vals = []
containment_vals = []
for i in test_indices:
    # get level of plagiarism for a given file index
    category_vals.append(complete_df.loc[i, 'Category'])
    # calculate containment for given file and n
    filename = complete_df.loc[i, 'File']
    c = calculate_containment(complete_df, n, filename)
    containment_vals.append(c)
# print out result, does it make sense?
print('Original category values: \n', category_vals)
print()
print(str(n)+'-gram containment values: \n', containment_vals)
# -
# run this test cell
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# test containment calculation
# params: complete_df from before, and containment function
tests.test_containment(complete_df, calculate_containment)
| Project_Plagiarism_Detection/2 Containment Scratch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tests using benchmark datasets
#
# Here we provide a walkthrough of running the experimental evaluation using real-world datasets, and visualizing the results.
#
# The contents of this notebook are as follows:
#
# - <a href="#guide">Guide to running the experiments</a>
# - <a href="#visual">Processing and visualizing results</a>
#
# First we give some details describing how to actually run the tests on your machine. The remainder of the demo aids the user in visualizing the test results after the experiments have actually been run.
#
# ___
# <a id="guide"></a>
# ## Guide to running the experiments
#
# A high-level description of the full procedure involved in running these experiments is given in the README file of this repository. We assume the user has already covered sections "<a href="https://github.com/feedbackward/spectral#setup_init">Setup: initial software preparation</a>" and "<a href="https://github.com/feedbackward/spectral#setup_data">Setup: preparing the benchmark data sets</a>" from the main README file. As such, all that remains to be done here is to fill in the details related to experiment parameter settings.
#
# Essentially, the experiments are run by calling `learn_driver.py` with the appropriate options (experimental parameters). Results are stored automatically with path and filename as follows:
#
# ```
# [results_dir]/[data]/[task]-[model]_[algo]-[trial].[descriptor]
# ```
#
# Here `results_dir` is the directory for storing results, specified in `setup_results.py`. The `descriptor` depends on the method of evaluation used, all specified in `setup_eval.py`. For example, the 0-1 training loss is `zero_one_train`, and the spectral risk test error under the logistic loss is `logistic_srisk_test`, and so forth. The rest is fairly self-explanatory, though we note that for our experiments, `task` is used to specify the nature of the feedback used by the learning algorithm (discussed further below).
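# For instance, under this convention the 0-1 test error for trial 3 of an SGD-trained linear model on the `adult` data would be stored as follows (all parameter values here are hypothetical, purely to illustrate the naming scheme):

```python
import os

# Hypothetical settings -- only the naming template itself comes from the repo.
results_dir = "results"
data, task, model = "adult", "default", "linreg_multi"
algo, trial, descriptor = "SGD", 3, "zero_one_test"

fname = "{}-{}_{}-{}.{}".format(task, model, algo, trial, descriptor)
path = os.path.join(results_dir, data, fname)
print(path)
```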
#
# ### Glossary of experimental parameters
#
# - `--algo-ancillary`: specifies the underlying learning algorithm to use (e.g., SGD); call these the *ancillary iterates*.
# - `--algo-main`: lets us specify an additional procedure to operate on the ancillary iterates (e.g., do nothing or average SGD iterates, etc.); call these the *main iterates*.
# - `--batch-size`: size of mini-batch to use.
# - `--cdf-size`: amount of ancillary data to be used for CDF estimation.
# - `--data`: the name of the data set to be used.
# - `--entropy`: used to ensure consistency across methods.
# - `--fast`: if this flag is used under the spectral risk, then following the naming in our paper, `fast` will be run instead of `default`.
# - `--loss`: the name of the loss function to be used.
# - `--model`: the name of the model to be used.
# - `--no-srisk`: if this flag is used, then it tells the learning algorithm to treat the losses as-is, rather than use our "srisk" (spectral risk) loss wrappers. This amounts to traditional ERM; if this flag is active, the flag `--fast` has no effect.
# - `--num-epochs`: the number of passes to make over the training data.
# - `--num-trials`: the number of randomized trials to be run.
# - `--step-size`: global step-size coefficient (multiplies the step-size computed internally by algorithms).
# - `--task-name`: mostly for clerical purposes, when we want to distinguish results for different setups (typically different settings of the flags `--fast` and `--no-srisk`).
#
#
# ### Detailed parameter settings
#
# The experiments given in our paper (2021/05 arXiv version) are computed using the following experimental parameter settings:
#
# - `--algo-ancillary`: `"SGD"`.
# - `--algo-main`: `"Ave"`.
# - `--batch-size`: `"8"`
# - `--cdf-size`: `"250"`
# - `--data`: (see dataset names listed in the paper).
# - `--loss`: `"logistic"`.
# - `--model`: `"linreg_multi"`.
# - `--num-epochs`: `"50"`.
# - `--num-trials`: `"10"`.
# - `--step-size`: `"1.0"`.
# - `--task-name`: `"off"` (uses `--no-srisk` flag only), `"default"` (no flags used), `"fast"` (uses `--fast` flag only).
#
#
# ### Execution
#
# If the shell script `learn_run.sh` has been appropriately modified to reflect any of the above settings, then running the experiments is a one-line operation:
#
# ```
# (spectral) $ bash learn_run.sh
# ```
#
# If you would rather not do these all manually, feel free to use our `remote_*.sh` scripts for convenience.
#
# Once the tests have been run, a collection of raw experimental results should be written to disk. The following section covers the tasks of processing and visualizing these results.
#
# ___
# <a id="visual"></a>
# ## Processing and visualizing results
# +
## External modules.
from contextlib import ExitStack
import json
from matplotlib import cm
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator, MultipleLocator
import numpy as np
import os
## Internal modules.
from mml.utils import makedir_safe
from setup_data import dataset_paras
from setup_results import img_dir, results_dir, my_fontsize, my_ext, export_legend
# -
## Parameters to be set by the user.
data = "adult" # specify dataset name
model = "linreg_multi" # specify model name
eval_type = "logistic_srisk" # specify evaluation metric (logistic_srisk, zero_one, etc.)
## Helper function definition.
def agg_fn_all(arr, agg_type):
    if agg_type == "mean":
        return np.mean(arr, axis=0)
    elif agg_type == "sd":
        return np.std(arr, axis=0)
    else:
        raise ValueError
# +
## Automated clerical setup.
eval_train = eval_type+"_train"
eval_test = eval_type+"_test"
## Directory setup.
toread_dir = os.path.join(results_dir, data)
towrite_dir = os.path.join(img_dir)
makedir_safe(towrite_dir)
## Colour setup.
mth_cmap = cm.get_cmap("Set1")
mth_colours = []
for i in range(9):
    mth_colours += [mth_cmap.colors[i]]
# -
## Set how we aggregate over trials.
agg_mean = lambda array: agg_fn_all(arr=array, agg_type="mean")
agg_sd = lambda array: agg_fn_all(arr=array, agg_type="sd")
## A few lines of code to extract all the method names.
all_files = os.listdir(toread_dir)
names_raw = []
for s in all_files:
    split_hyphen = s.split("-")
    split_dot = s.split(".")
    if split_dot[-1] != "json":
        names_raw += ["-".join(split_hyphen[0:-1])]
names_raw = np.array(names_raw)
names_unique = np.unique(names_raw)
print("Unique names found:", names_unique)
to_plot_names = { s: s.split("-")[0] for s in names_unique}
to_plot = [s for s in to_plot_names.keys()]
to_plot_colours = {a: mth_colours[j] for j, a in enumerate(to_plot)}
# +
## Gathering of results.
dict_train = {a: [] for a in to_plot}
dict_test = {a: [] for a in to_plot}
for mth_name in to_plot:
    trial = 0
    do_gathering = True
    while do_gathering:
        toread_train = os.path.join(
            toread_dir, ".".join([mth_name+"-"+str(trial), eval_train])
        )
        toread_test = os.path.join(
            toread_dir, ".".join([mth_name+"-"+str(trial), eval_test])
        )
        with ExitStack() as stack:
            try:
                f_train = stack.enter_context(open(toread_train, mode="r", encoding="ascii"))
                yvals = np.loadtxt(fname=f_train, dtype=float,
                                   delimiter=",", ndmin=2)
                dict_train[mth_name] += [yvals]
            except FileNotFoundError:
                do_gathering = False
                print("({}) Finished collecting training results.".format(mth_name))
            try:
                f_test = stack.enter_context(open(toread_test, mode="r", encoding="ascii"))
                yvals = np.loadtxt(fname=f_test, dtype=float,
                                   delimiter=",", ndmin=2)
                dict_test[mth_name] += [yvals]
            except FileNotFoundError:
                print("({}) Finished collecting test results.".format(mth_name))
        ## If the current trial went through, increment to try the next one.
        if do_gathering:
            trial += 1
dict_train = {a:np.hstack(dict_train[a]).T for a in dict_train.keys()}
dict_test = {a:np.hstack(dict_test[a]).T for a in dict_test.keys()}
# +
## Visualization of results.
fig, ax = plt.subplots(1, 1, figsize=(4,3.5)) # nice for putting in paper.
#fig, ax = plt.subplots(1, 1, figsize=(7,6)) # nice for viewing in notebook.
for j, mth_name in enumerate(dict_train.keys()):
    yval_array = dict_train[mth_name]
    num_trials, num_epochs = yval_array.shape
    xvals = np.arange(num_epochs)
    yvals = agg_mean(yval_array)
    yvals_err = agg_sd(yval_array)
    ax.plot(xvals, yvals,
            color=to_plot_colours[mth_name],
            label="{} tr".format(to_plot_names[mth_name]),
            ls="--")
    #ax.fill_between(x=xvals, y1=yvals-yvals_err, y2=yvals+yvals_err,
    #                alpha=0.2, color=to_plot_colours[mth_name], lw=0)

for j, mth_name in enumerate(dict_test.keys()):
    yval_array = dict_test[mth_name]
    num_trials, num_epochs = yval_array.shape
    xvals = np.arange(num_epochs)
    yvals = agg_mean(yval_array)
    yvals_err = agg_sd(yval_array)
    ax.plot(xvals, yvals,
            color=to_plot_colours[mth_name],
            label="{} te".format(to_plot_names[mth_name]),
            ls="-")
    ax.fill_between(x=xvals, y1=yvals-yvals_err, y2=yvals+yvals_err,
                    alpha=0.2, color=to_plot_colours[mth_name], lw=0)
ax.tick_params(labelsize=my_fontsize)
#ax.legend(loc=0, ncol=2, fontsize=my_fontsize)
ax.set_title("{}".format(data),
size=my_fontsize)
fname = os.path.join(towrite_dir, "{}_{}.{}".format(data, eval_type, my_ext))
ax.xaxis.set_major_locator(MultipleLocator(10))
#ax.yaxis.set_major_formatter("{x:.1f}")
#ax.yaxis.set_major_locator(MaxNLocator(5))
ax.yaxis.grid(linestyle="--")
plt.tight_layout()
plt.savefig(fname=fname)
plt.show()
# +
## Legends.
labels = [to_plot_names[a]+" (train)" for a in to_plot] + [to_plot_names[a]+" (test)" for a in to_plot]
colours = [to_plot_colours[a] for a in to_plot] + [to_plot_colours[a] for a in to_plot]
linestyles = ["--"]*len(to_plot_names) + ["-"]*len(to_plot_names)
f = lambda m,c,l: plt.plot([],[],marker=m, color=c, ls=l)[0]
handles = [f(None, colours[i], linestyles[i]) for i in range(len(labels))]
legend = plt.legend(handles, labels, loc=3, ncol=2, framealpha=1, frameon=True)
fname = os.path.join(towrite_dir, "legend.{}".format(my_ext))
export_legend(legend=legend, filename=fname)
plt.show()
# -
# ___
| spectral/demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + _cell_guid="a3cb0ee3-7bca-4b2b-8a27-be198d18818e" _uuid="075ab0f3fc310e293828b3681f1d80642f88c106" language="html"
# <style>
# .h1_cell, .just_text {
# box-sizing: border-box;
# padding-top:5px;
# padding-bottom:5px;
# font-family: "Times New Roman", Georgia, Serif;
# font-size: 125%;
# line-height: 22px; /* 5px +12px + 5px */
# text-indent: 25px;
# background-color: #fbfbea;
# padding: 10px;
# }
#
# hr {
# display: block;
# margin-top: 0.5em;
# margin-bottom: 0.5em;
# margin-left: auto;
# margin-right: auto;
# border-style: inset;
# border-width: 2px;
# }
# </style>
# -
# <h1>
# <center>
# Module 9
# </center>
# </h1>
# <div class=h1_cell>
# <p>
# Last week we explored how to pull relation triples out of a sentence. This week, let's see if we can do something with those triples.
# <p>
# Your goal is to store the relations you extract from sentences in a "knowledge base". My first thought was to use a pandas dataframe as the knowledge base. Store a relation one per row with 3 columns. But I don't think that is a good idea. Two of the three values are NP subtrees. The subtrees can have structure of their own, e.g., more than one leaf node. I don't see how to easily store the subtree in a dataframe.
# <p>
# Maybe the easiest is to implement the knowledge base as just a list of relations, where each relation is a triple of NP verb NP.
# <p>
# Before we get going, here is some material from last week as reference.
# </div>
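# A minimal sketch of that design, with plain strings standing in for the NP subtrees and made-up facts: the knowledge base is just a list of (subject, verb, object) triples, and a lookup is a linear scan.

```python
# Knowledge base as a plain list of (subject, verb, object) triples.
# Strings stand in for the NP subtrees; the facts are illustrative only.
kb = [
    ("Victor", "builds", "the creature"),
    ("the monster", "kills", "William"),
    ("the creature", "rescues", "a peasant girl"),
]

def facts_with_verb(kb, verb):
    """Return every triple whose verb matches exactly (lemmatizing comes later)."""
    return [rel for rel in kb if rel[1] == verb]

print(facts_with_verb(kb, "kills"))   # [('the monster', 'kills', 'William')]
print(facts_with_verb(kb, "killed"))  # [] -- exact match misses other verb forms
```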
# + active=""
# 1. CC Coordinating conjunction
# 2. CD Cardinal number
# 3. DT Determiner
# 4. EX Existential there
# 5. FW Foreign word
# 6. IN Preposition or subordinating conjunction
# 7. JJ Adjective
# 8. JJR Adjective, comparative
# 9. JJS Adjective, superlative
# 10. LS List item marker
# 11. MD Modal
# 12. NN Noun, singular or mass
# 13. NNS Noun, plural
# 14. NNP Proper noun, singular
# 15. NNPS Proper noun, plural
# 16. PDT Predeterminer
# 17. POS Possessive ending
# 18. PRP Personal pronoun
# 19. PRP$ Possessive pronoun
# 20. RB Adverb
# 21. RBR Adverb, comparative
# 22. RBS Adverb, superlative
# 23. RP Particle
# 24. SYM Symbol
# 25. TO to
# 26. UH Interjection
# 27. VB Verb, base form
# 28. VBD Verb, past tense
# 29. VBG Verb, gerund or present participle
# 30. VBN Verb, past participle
# 31. VBP Verb, non-3rd person singular present
# 32. VBZ Verb, 3rd person singular present
# 33. WDT Wh-determiner
# 34. WP Wh-pronoun
# 35. WP$ Possessive wh-pronoun
# 36. WRB Wh-adverb
# -
sentences = [
'<NAME> builds the creature in his laboratory',
'The creature is 8 feet tall', # tricky
'the monster wanders through the wilderness', # tricky
'He finds brief solace beside a remote cottage inhabited by a family of peasants',
'Eavesdropping, the creature familiarizes himself with their lives and learns to speak', # tricky
"The creature eventually introduces himself to the family's blind father",
'the creature rescues a peasant girl from a river.',
"He finds Frankenstein's journal in the pocket of the jacket he found in the laboratory",
"The monster kills Victor's younger brother William upon learning of the boy's relation to his hated creator.",
"Frankenstein builds a female creature.",
"the monster kills Frankenstein's best friend <NAME>.",
"the monster boards the ship.",
"The monster has also been analogized to an oppressed class",
"the monster is the tragic result of uncontrolled technology."
]
import nltk
from nltk.tree import Tree
def build_relation(text, chunker):
    # chunk the text with chunker
    chunks = chunker.parse(nltk.pos_tag(nltk.word_tokenize(text)))
    # Now re-chunk looking for our triples. Call the chunk REL for relation
    chunker2 = nltk.RegexpParser(r'''
    REL:
        {<NP><VBZ><NP>}
    ''')
    relation_chunk = chunker2.parse(chunks)
    for t in relation_chunk:
        if type(t) != Tree:
            continue
        if t.label() == 'REL':
            return (t[0], t[1], t[2])
    return tuple([])
rel_chunker2 = nltk.RegexpParser(r'''
NP:
{<DT>?<JJ>*<NN>} # chunk determiner (optional), adjectives (optional) and noun
{<NNP>+} # chunk sequences of proper nouns
{<NNP>*<NNP>}
{<NNPP><VBZ><NNP>}
{<VBZ><.*>*?<NN>}
{<PRP>|<NNP>*<PRP>|<NNP>}
{<NNP><CD>*<NN>}
{<PRP>*<VBZ><JJ>}
{<CD><NNS><JJ>}
{<RB>?<VBN>*<TO>}
{<RB>}
''')
build_relation(sentences[0], rel_chunker2)
# <h2>
# All the sentences
# </h2>
# <div class=h1_cell>
# <p>
# See how many sentences we can pull relations from. I am showing results prior to your assignment. I assume you are now seeing fewer empty tuples, i.e., you are matching more sentences.
# </div>
all_relations = [] # will be our knowledge base
for i, s in enumerate(sentences):
    relation = build_relation(s, rel_chunker2)
    all_relations.append(relation)
    print(relation)
    print('===============')
# <h2>
# Challenge 1
# </h2>
# <div class=h1_cell>
# <p>
# The goal will be to write a lookup query that looks like "Show me who built things" or "Who did the monster kill?"
# Your first thought might be that this is straightforward: just match "built" or "kill" to the verb in each relation using `==`. But the actual verbs are "builds" and "kills", so they won't literally match.
# <p>
# There is something we can use to help. It is called a lemmatizer, and nltk has one (actually several). The general idea is that we pass in any form of a verb, like "builds" or "built", and it always returns the base form "build". Let's check it out.
# <p>
# <p>
# BTW: WordNet is kind of interesting. It is an online lexical database of a huge number of English words. It is separate from nltk. However, nltk has a wrapper for it so we can use it as below.
# <p>
# BTW2: for the spelling police out there, see this:
# <pre>
# builded. Verb. (archaic or childish, nonstandard) simple past tense and past participle of build.
# </pre>
# </div>
from nltk.stem import WordNetLemmatizer # using the cool WordNet syllabus
lemmatizer = WordNetLemmatizer() # one of the varieties to choose from in nltk
print(lemmatizer.lemmatize("build", pos="v"))
print(lemmatizer.lemmatize("builds", pos="v"))
print(lemmatizer.lemmatize("built", pos="v"))
print(lemmatizer.lemmatize("builded", pos="v")) # archaic but ok
print(lemmatizer.lemmatize("builted", pos="v")) # bogus
# <div class=h1_cell>
# <p>
# Here are a few more, nouns if no pos parameter.
# </div>
# +
print(lemmatizer.lemmatize("cats")) # default to n or noun
print(lemmatizer.lemmatize("cacti"))
print(lemmatizer.lemmatize("geese"))
print(lemmatizer.lemmatize("rocks"))
print(lemmatizer.lemmatize("python"))
print(lemmatizer.lemmatize("better", pos="a")) # a is adjective
print(lemmatizer.lemmatize("best", pos="a"))
print(lemmatizer.lemmatize("ran"))
print(lemmatizer.lemmatize("ran",'v'))
# -
# <div class=h1_cell>
# <p>
# I think we are in business. If we are trying to match one form of the same verb against another, we can lemmatize both of them first and then use `==`. We are almost ready to define a function, verb_match, that takes a verb we are trying to match and a relation we are matching against. It returns True if we get a match after lemmatization. But before that, let's look at a relation in a bit more detail.
# </div>
s0 = all_relations[0] # first relation in our knowledge base
print(s0)
for item in s0:
print((item, type(item)))
# <div class=h1_cell>
# You can see that the relation is a triple of (Tree, tuple, Tree). Since we are only focusing on the verb, we don't have to deal with Tree objects yet. That will come when we want to match against noun-phrases (the 1st and 3rd components of the triple).
# <p>
# We can see that the verb is a tuple of the actual verb followed by its pos, as seen in the table above. With that info, you should be ready to define the function.
# </div>
def verb_match(verb_word, relation):
    try:
        verbLem = lemmatizer.lemmatize(verb_word, pos="v")
        relLem = lemmatizer.lemmatize(relation[1][0], pos="v")
    except Exception:  # malformed relation or non-string verb
        return False
    return verbLem == relLem
print(verb_match('built', s0))
print(verb_match('build', s0))
print(verb_match('builds', s0))
print(verb_match('builts', s0))
# <h2>
# Challenge 2
# </h2>
# <div class=h1_cell>
# <p>
# Cool. We have verb matching under control. Now for matching a noun-phrase. A noun-phrase, as we have defined it, is a Tree object with one or more leaves. A leaf is a tuple of a word followed by its pos. Before doing anything else, let's define a helper function that will return a list of the words on the leaves.
# </div>
def np_to_word_list(np_tree):
leaves = np_tree.leaves() # Tree method
return [tup[0] for tup in leaves]
# <div class=h1_cell>
# <p>
# You can see I am using a method leaves() that is defined by the Tree class. It will give us a list of tuples. I then use a list comprehension to pull out the words.
# </div>
np1 = s0[0] # first noun-phrase
print((np1, type(np1)))
np_to_word_list(np1)
np2 = s0[2] # second noun-phrase
np_to_word_list(np2)
# <h2>
# Matching strategy
# </h2>
# <div class=h1_cell>
# <p>
# We know we can get a list of words from a noun-phrase. We could easily check for a single-word match by using the `in` operator, e.g., 'a' in ['b', 'a', 'c'] returns True. I'd like something a bit more sophisticated: the match words should also be a list, so we are attempting to match a list of words against another list of words. How does this work? Let's call the two word lists target-words and np-words. I would like you to go through each word in target-words, one by one, and find a match in np-words. The tricky part is that I would like you to remember where the match occurred in np-words and start the next search from just past that point. Here are some examples; the first list is target-words and the second is np-words.
# <pre>
#
# ['a', 'b', 'c'] and ['d', 'a', 'b', 'r', 'c', 'f'] match.
#
# ['a', 'b', 'c'] and ['d', 'a', 'c', 'r', 'b', 'f'] no match.
#
# ['a', 'b', 'c'] and ['d', 'a', 'b', 'r', 'b', 'f'] no match.
#
# [] and ['d', 'a', 'b', 'r', 'b', 'f'] wildcard match.
#
# </pre>
#
# Also see the example calls below the function definition.
# <p>
# BTW: I broke out single word matching into a separate function. I did so to make it easier to do more sophisticated matching in the future.
# </div>
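Before wiring this into Tree objects, the ordered-match rule can be sketched on plain word lists. This is a standalone illustration of the strategy described above; the helper name `ordered_match` is mine, not part of the handout.

```python
def ordered_match(target_words, np_words):
    """True if every word in target_words appears in np_words, in order,
    with each match starting just past the position of the previous one."""
    start = 0  # resume each search past the previous match
    for word in target_words:
        for i in range(start, len(np_words)):
            if word == np_words[i]:
                start = i + 1
                break
        else:  # no break: word not found after the previous match point
            return False
    return True  # an empty target list acts as a wildcard

print(ordered_match(['a', 'b', 'c'], ['d', 'a', 'b', 'r', 'c', 'f']))  # True
print(ordered_match(['a', 'b', 'c'], ['d', 'a', 'c', 'r', 'b', 'f']))  # False
print(ordered_match(['a', 'b', 'c'], ['d', 'a', 'b', 'r', 'b', 'f']))  # False
print(ordered_match([], ['d', 'a', 'b', 'r', 'b', 'f']))               # True
```

The `for ... else` clause fires only when the inner loop runs out of candidates without a break, which is exactly the "no match found after the previous position" case.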
def np_word_match(word1, word2):
word1 = word1.lower()
word2 = word2.lower()
#literally equal
return word1 == word2
# assisted by <NAME>
def np_match(np_tree, target_word_list):
    np_word_list = np_to_word_list(np_tree)
    start = 0  # resume each search just past the previous match
    for word in target_word_list:
        for i in range(start, len(np_word_list)):
            if np_word_match(word, np_word_list[i]):
                start = i + 1
                break
        else:  # no break: word not found after the previous match point
            return False
    return True  # an empty target list is a wildcard
np_match(s0[0], ['victor'])
np_match(s0[0], ['victor', 'frankenstein'])
np_match(s0[0], [ 'frankenstein', 'victor'])
np_match(s0[0], ['victor', 'victor', 'frankenstein'])
np_match(s0[2], ['the', 'creature'])
np_match(s0[2], []) # empty list is wildcard
# <h2>
# Should we lemmatize matching?
# </h2>
# <div class=h1_cell>
# <p>
# At the moment, np_word_match matches words literally. But we saw that for verbs, lemmatization helped us be less strict and match different forms of the same verb. How would that work with words in noun phrases? Here is an example:
# <pre>
# (Tree('NP', [('Frankenstein', 'NNP')]), ('builds', 'VBZ'), Tree('NP', [('a', 'DT'), ('female', 'JJ'), ('creature', 'NN')]))
# </pre>
# <p>
# It sounds reasonable to me to match 'woman' with 'female'. Will the lemmatizer give us this?
# </div>
lemmatizer.lemmatize("woman",'n')
lemmatizer.lemmatize("female",'n')
# <div class=h1_cell>
# <p>
# Nope. I think we are going to have to try something else. Let's consider a thesaurus based approach. We can get the synonyms of a word and check against that. So if we are trying to match word1 against word2, we could also match word1 against the synonyms of word2 and vice versa. Does nltk give us a thesaurus to use? Yes. More accurately, it gives us access to that large online thesaurus called WordNet. Here is a function that will return the synonyms of a word using WordNet.
# </div>
from nltk.corpus import wordnet
def get_syns(word):
synonyms = []
for syn in wordnet.synsets(word):
for lem in syn.lemmas():
synonyms.append(lem.name())
return list(set(synonyms))
get_syns('female')
get_syns('woman')
# <div class=h1_cell>
# <p>
# Uh. A little on the sexist side if you ask me. And it does not give us what we want, a match between 'female' and 'woman': 'female' does not appear in the synonyms of 'woman', nor vice versa. Let's check some others.
# </div>
get_syns('monster')
get_syns('creature')
# <div class=h1_cell>
# <p>
# Still no luck. But looking at some of the synonyms, it does open the door to matching 'monster' with useful synonyms, and the same for 'creature'.
# </div>
'ogre' in get_syns('monster')
'brute' in get_syns('creature')
# <h2>
# Challenge 3
# </h2>
# <div class=h1_cell>
# <p>
# Go ahead and modify np_word_match to now include a match against synonym lists. As before return True if literal match. But also return True if word1 in synonyms of word2 or vice versa.
# <p>
# My guess is you only need to check against one synonym list because of the symmetry of synonyms. In particular, my hypothesis is that if you don't find word1 in the synonyms of word2, you won't find word2 in the synonyms of word1. But I have not had a chance to verify this, so check against both lists for now.
# </div>
# +
#improved version
def np_word_match(word1, word2):
word1 = word1.lower()
word2 = word2.lower()
if word1 == word2:
return True
word1Syns = get_syns(word1)
word2Syns = get_syns(word2)
if word1 in word2Syns or word2 in word1Syns:
return True
return False
# -
np_match(s0[2], ['brute'])
# <h2>
# Challenge 4
# </h2>
# <div class=h1_cell>
# <p>
# Ok, now we have some helper functions defined and we can get to the cool stuff. I want to treat our collection of relations as a kind of database (I'll also sometimes use the more highfalutin term *knowledge base*). What can you do with a database? You can query it. I'd like you to build the function `who` to get us started. The function will take 3 arguments: (1) the verb to match on, (2) a list of words to match against the 2nd noun-phrase, and (3) the relation to check.
# </div>
#One more helper function if you need it
def np_to_string(np_tree):
words = np_to_word_list(np_tree)
return ' '.join(words)
def who(verb, target_words, relation):
    if not relation:  # skip empty or missing relations
        return None
    if verb_match(verb, relation) and np_match(relation[2], target_words):
        return np_to_string(relation[0])
    return None
for rel in all_relations:
print(who('built', ['the', 'creature'], rel))
for rel in all_relations:
print(who('rescued', ['girl'], rel))
for rel in all_relations:
print(who('was', ['tragic'], rel))
for rel in all_relations:
print(who('killed', [], rel)) # use of wildcard: Who killed anything?
# <div class=h1_cell>
# <p>
# I'm going to package up the for loop into a function. I'll return a list of answers.
# </div>
def search_for_who(verb, target_words, kb):
who_dunit = []
for rel in kb:
if not rel: continue
p = who(verb, target_words, rel)
if p: who_dunit.append(p)
return who_dunit
search_for_who('built', ['the', 'creature'], all_relations)
search_for_who('built', ['the', 'brute'], all_relations)
search_for_who('rescued', ['girl'], all_relations)
search_for_who('killed', ['frankenstein'], all_relations)
search_for_who('was', ['tragic'], all_relations)
search_for_who('killed', [], all_relations)
search_for_who('built', ['the', 'monster'], all_relations) #seems like it should match but does not
# <h2>
# Challenge 5
# </h2>
# <div class=h1_cell>
# <p>
# Pretty dang cool if you ask me. Let's do another. Define `what_done_by` that only takes 2 arguments: (1) the list of target words to match against the first noun-phrase and (2) the relation. See my example results below.
# </div>
def what_done_by(target_words, relation):
if np_match(relation[0], target_words):
return relation[1][0] + ' ' + np_to_string(relation[2])
else: return None
def search_for_what_done_by(target_words, kb):
what_done = []
for rel in kb:
if not rel: continue
p = what_done_by(target_words, rel)
if p: what_done.append(p)
return what_done
search_for_what_done_by(['victor'], all_relations)
search_for_what_done_by(['monster'], all_relations)
search_for_what_done_by([], all_relations) # wildcard
# <h2>
# Challenge 6
# </h2>
# <div class=h1_cell>
# <p>
# Last one. Define a function `what_happened_to` that takes target words to match against the 2nd noun-phrase.
# </div>
def what_happened_to(target_words, relation):
if np_match(relation[2], target_words):
return np_to_string(relation[0]) + ' ' + relation[1][0] + ' ' + np_to_string(relation[2])
else: return None
def search_for_what_happened_to(target_words, kb):
what_done_to = []
for rel in kb:
if not rel: continue
p = what_happened_to(target_words, rel)
if p: what_done_to.append(p)
return what_done_to
search_for_what_happened_to(['creature'], all_relations)
search_for_what_happened_to(['brute'], all_relations)
search_for_what_happened_to(['tyke'], all_relations)
search_for_what_happened_to([], all_relations)
# <h2>
# Closing Notes
# </h2>
# <div class=h1_cell>
# <p>
# One next step would be to build something closer to an SQL language for querying. Then map that language to our functions.
# <p>
# Another step would be to look for contradictions, e.g., "X killed Y" and "Y killed X", or "X is 8 feet tall" and "X is 3 feet tall". One of our PhD students just finished a study like this for medical papers. He tried to find contradictions in different authors' findings. And he did! He wrote to the authors and pointed out the contradictions. You might even be able to use it to detect fake news. If (a big if) you had a set of relations that you knew were true, you could search the web for text (e.g., tweets, blogs) that contradicted what you knew was true.
# <p>
# A never-ending next step is to improve pattern-matching to extract relations. Deal with the convoluted way English sentences can be written.
# </div>
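As a taste of the contradiction idea, here is a toy sketch that flags reversed relations. It works on plain (subject, verb, object) string triples rather than the Tree-based relations used above, and the helper name `find_reversed` is mine; a real version would reuse np_match and verb_match.

```python
def find_reversed(triples):
    """Flag pairs like ('x', 'killed', 'y') vs ('y', 'killed', 'x')."""
    seen = set(triples)
    conflicts = []
    for subj, verb, obj in triples:
        if subj != obj and (obj, verb, subj) in seen:
            # sort the two triples so each conflicting pair is recorded once
            pair = tuple(sorted([(subj, verb, obj), (obj, verb, subj)]))
            if pair not in conflicts:
                conflicts.append(pair)
    return conflicts

kb = [('victor', 'killed', 'creature'),
      ('creature', 'killed', 'victor'),
      ('victor', 'built', 'creature')]
print(find_reversed(kb))  # only the two 'killed' relations conflict
```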
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="gAqV3EuGoFwC"
# # The easy job of squeezing integers into 5 bytes in Rust
# + [markdown] id="_gHh10FaoWtA"
# ## Introduction
# + [markdown] id="9HyiMsGxocK4"
# When processing big data, reading the data in is often the rate-limiting step. In that case, shrinking the data's on-storage size reduces the read time, and the total processing time can drop by half. So if 40 bits are enough for your integers, you want to store them as 40 bits. With that motivation, I looked into how to read and write integers to storage as 40-bit values in Rust. [A previous article](https://www.soliton-cyber.com/blog/go-uint-40) does the same thing in Go.
#
# > Note: the original version performed operations during index calculation that should not have been included in the timing, so this article has been replaced with one that excludes them.
# + [markdown] id="Sv9I9Y8HpCAA"
# First, install Rust.
# + colab={"base_uri": "https://localhost:8080/"} id="oT3ikwCLfUWl" outputId="909abc49-f056-426c-f893-778fbffa1977"
# !wget https://static.rust-lang.org/rustup/rustup-init.sh
# !sh rustup-init.sh -y
# !cp /root/.cargo/bin/* /usr/local/bin
# + [markdown] id="n1nXuWvQpHa0"
# Create a project for the experiments and change into its directory.
# + colab={"base_uri": "https://localhost:8080/"} id="jy0wHUnY79PN" outputId="e4862461-47eb-4997-ce17-54daa2b84c00"
# !cargo new measure
# %cd measure
# + [markdown] id="oyl9Tz_vqnos"
# Generate the configuration file.
# + colab={"base_uri": "https://localhost:8080/"} id="pk5YbV2Wa83N" outputId="d689072f-2561-47c2-8189-117b14752bcc"
# %%writefile Cargo.toml
[package]
name = "measure"
version = "0.1.0"
authors = []
edition = "2018"
[dependencies]
rand = "0.8.4"
# + [markdown] id="al2WtjbgpQTR"
# ## Converting a 64-bit integer to an 8-byte array
#
# > `to_le_bytes()` is recommended
# + [markdown] id="Vc_vxFPkp263"
# First, let's look at ways to turn a 64-bit integer into a byte array. (Throughout, integers are encoded little-endian. Note in particular that the unsafe variants will behave incorrectly on big-endian CPUs.)
#
# The naive version would look like this.
# ```
# for j in 0..8 {
# buf[j] = (v >> (8 * j)) as u8;
# }
# ```
# + colab={"base_uri": "https://localhost:8080/"} id="UIRKZZx27hmU" outputId="8477d2d2-7f58-4530-ec92-ac3a239b37dd"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [[u8; 8]; SIZE] = [[0; 8]; SIZE];
    // start of measurement
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let v = vs[idx];
let b = &mut buf[idx];
for j in 0..8 {
b[j] = (v >> (8 * j)) as u8;
}
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
    // work outside the timed section
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..8 {
total += buf[i][j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="_r0EbJpi8LMU" outputId="8b8d71f2-03e0-41b0-8558-0443bf59072c"
# !cargo run --release
# + [markdown] id="PyKt18vAqoiO"
# Fast. In Go the same work took more than two minutes. The optimizer seems to be excellent.
# + [markdown] id="xLzN1BbsukwF"
# For reference, let's measure again with the work outside the timed section removed.
# + colab={"base_uri": "https://localhost:8080/"} id="7bxXj2zhrGkV" outputId="20bcca1a-f174-4419-c9d4-dda197845db9"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [[u8; 8]; SIZE] = [[0; 8]; SIZE];
    // start of measurement
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let v = vs[idx];
let b = &mut buf[idx];
for j in 0..8 {
b[j] = (v >> (8 * j)) as u8;
}
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
}
# + colab={"base_uri": "https://localhost:8080/"} id="aO1MkSywrGb7" outputId="304158ba-d057-469f-ed6d-c03d344385d2"
# !cargo run --release
# + [markdown] id="y3AkR1h2sFD8"
# If the result is never printed, the optimizer appears to delete the computation entirely. It also seems that parts can be deleted unless every element of the result is touched, so to be safe the code touches all of them.
#
# + [markdown] id="mXrbgskt0yPl"
# Now let's unroll the inner for loop.
# + colab={"base_uri": "https://localhost:8080/"} id="Q-EckxVy8Q_x" outputId="6aa1a7ab-f6ff-41a8-b257-6835b3218628"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [[u8; 8]; SIZE] = [[0; 8]; SIZE];
    // start of measurement
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let v = vs[idx];
let b = &mut buf[idx];
b[0] = v as u8;
b[1] = (v >> 8) as u8;
b[2] = (v >> 16) as u8;
b[3] = (v >> 24) as u8;
b[4] = (v >> 32) as u8;
b[5] = (v >> 40) as u8;
b[6] = (v >> 48) as u8;
b[7] = (v >> 56) as u8;
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
    // work outside the timed section
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..8 {
total += buf[i][j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="q7mgMv9F9LfI" outputId="6b22b4f0-60f8-499b-f2a7-71069eb8009a"
# !cargo run --release
# + [markdown] id="C_9_BcrV5S85"
# Almost the same. The optimizer may already have unrolled the loop for us.
#
# + [markdown] id="CLYiac7jsFjt"
# Rust provides exactly the function we want for integers: `to_le_bytes()`. Let's use it.
# + colab={"base_uri": "https://localhost:8080/"} id="7q_cS4UG_6kc" outputId="bdc03e55-c2db-4d16-8bcd-e235595bae97"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [[u8; 8]; SIZE] = [[0; 8]; SIZE];
    // start of measurement
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
buf[idx] = vs[idx].to_le_bytes();
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
    // work outside the timed section
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..8 {
total += buf[i][j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="x_rg_tY5U8Xc" outputId="14965f2a-4730-4f61-d2f7-ace28cfccfb4"
# !cargo run --release
# + [markdown] id="daww7MSN55B5"
# Considerably faster.
# + [markdown] id="HvmqvJir6bwo"
# Next, [a certain book](https://www.amazon.co.jp/dp/B087BZQ48R) introduced another way to convert an integer to a byte array, so let's try it.
# + colab={"base_uri": "https://localhost:8080/"} id="ZMMiitSgV4rb" outputId="86cfa918-cd95-4bf2-e441-1697972473d8"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [[u8; 8]; SIZE] = [[0; 8]; SIZE];
    // start of measurement
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
unsafe {
buf[idx] = std::mem::transmute::<u64, [u8; 8]>(vs[idx]);
}
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
    // work outside the timed section
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..8 {
total += buf[i][j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="iPhVvcJ2cwVJ" outputId="45c597f6-7272-48c2-f056-d19bc38074b5"
# !cargo run --release
# + [markdown] id="yWwb-f696tOl"
# The run time is almost the same.
# + [markdown] id="B7K7qLv361DZ"
# Since we are using unsafe anyway, let's finish with the pointer trick. At first I had no idea how to declare the types, and the compiler scolded me quite a bit.
# + colab={"base_uri": "https://localhost:8080/"} id="WC91Cw8je2yC" outputId="b175b06d-0d60-43d5-98d7-4496d6d47097"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let buf: [[u8; 8]; SIZE] = [[0; 8]; SIZE];
    // start of measurement
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
unsafe {
let ptr: *mut u64 = buf[idx].as_ptr() as *mut u64;
*ptr = vs[idx];
}
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
    // work outside the timed section
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..8 {
total += buf[i][j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="p3pgrBDAe4fu" outputId="5f49253b-930e-467b-9469-4963ce9a4fb0"
# !cargo run --release
# + [markdown] id="oNNIVrWPoD8P"
# Identical within measurement noise. I recommend `to_le_bytes()`.
# + [markdown] id="TF307BUn9Lql"
# ## Converting an 8-byte array to a 64-bit integer
#
# > `from_le_bytes()` is recommended
#
#
# + [markdown] id="WimdVe6YswOl"
# First, the naive approach.
# + colab={"base_uri": "https://localhost:8080/"} id="RKp-knYuabZK" outputId="4d5cd464-7164-4450-a7f9-7114252f4f4b"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 8]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8])
}
    // start of measurement
let start = Instant::now();
let mut v: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let b = buf[idx];
for j in 0..8 {
v += (b[j] as u64) << (8 * j);
}
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", v);
}
# + colab={"base_uri": "https://localhost:8080/"} id="JmrjGUDifCRC" outputId="b5133b02-ab69-4b3a-d177-37ff50acd413"
# !cargo run --release
# + [markdown] id="xydf3tbUtUOf"
# Considerably slower than converting to an array. Perhaps assembling a value takes more computation than splitting one up.
# + [markdown] id="_ObBAO2ptwl6"
# Let's unroll the inner for loop.
# + colab={"base_uri": "https://localhost:8080/"} id="GniGT1Z_Jbl0" outputId="fa353fc6-833e-43db-e8c5-e093bf8e2cc6"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 8]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8])
}
    // start of measurement
let start = Instant::now();
let mut v: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let b = buf[idx];
v += b[0] as u64;
v += (b[1] as u64) << 8;
v += (b[2] as u64) << 16;
v += (b[3] as u64) << 24;
v += (b[4] as u64) << 32;
v += (b[5] as u64) << 40;
v += (b[6] as u64) << 48;
v += (b[7] as u64) << 56;
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", v);
}
# + colab={"base_uri": "https://localhost:8080/"} id="L1Lkwu4KJdQV" outputId="4c629afa-c663-4507-a1c0-4da7ee7d6597"
# !cargo run --release
# + [markdown] id="bHIJgbfjMBpk"
# Unrolling actually made it slower. The compiler may now apply an optimization more efficient than manual unrolling.
# + [markdown] id="FN8KreF9vXc7"
# Next, let's use the library function.
# + colab={"base_uri": "https://localhost:8080/"} id="sSpvgJK2P24I" outputId="ce990c11-ee91-42a3-9ac3-1f1b71770156"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 8]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8])
}
    // start of measurement
let start = Instant::now();
let mut v: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
v += u64::from_le_bytes(buf[idx]);
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", v);
}
# + colab={"base_uri": "https://localhost:8080/"} id="0JEW1aqTP7I7" outputId="7a1753ee-0346-42d3-b984-22969538573a"
# !cargo run --release
# + [markdown] id="b-eYLQ_kuH5T"
# Faster. The dedicated function really does make a difference.
# + [markdown] id="atuqKCmMvtcL"
# Now the approach using the other function.
# + colab={"base_uri": "https://localhost:8080/"} id="FDdfiRYkOjGi" outputId="48ec31df-7087-4db9-e62a-6a7e80e55a12"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 8]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8])
}
    // start of measurement
let start = Instant::now();
let mut v: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
unsafe {
v += std::mem::transmute::<[u8; 8], u64>(buf[idx]);
}
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", v);
}
# + colab={"base_uri": "https://localhost:8080/"} id="6fZ5hr1uOmFQ" outputId="bcf83f81-b4ee-43a1-89d3-e5116ca6a5ef"
# !cargo run --release
# + [markdown] id="76yvotv0un4Y"
# The same.
# + [markdown] id="4IZBLR0Uv2Of"
# Next, the unsafe approach.
# + colab={"base_uri": "https://localhost:8080/"} id="CoNVu8i8R-pj" outputId="d00eb800-c715-4295-ff01-d02e471bc441"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 8]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8])
}
    // start of measurement
let start = Instant::now();
let mut v: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
unsafe {
v += *(buf[idx].as_ptr() as *mut u64);
}
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", v);
}
# + colab={"base_uri": "https://localhost:8080/"} id="owDIHiGCR-gx" outputId="eb1aa0bb-6e63-455a-fc62-2142cc0baf65"
# !cargo run --release
# + [markdown] id="hsJYN4apvGvE"
# No particular speedup. I recommend `from_le_bytes()`.
# + [markdown] id="N63pszMshMLu"
# ## Converting a 40-bit integer to a 5-byte array
#
# > `*(buf[idx..].as_ptr() as *mut u64) = v` is fastest
# + [markdown] id="mNiC3m9Cwsqk"
# Now for the main topic: handling 40-bit integers.
# First, the naive approach.
# + colab={"base_uri": "https://localhost:8080/"} id="Lg-I4lNss8x4" outputId="94984326-74e0-43db-ae56-bdb2f118b0e6"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [[u8; 5]; SIZE] = [[0; 5]; SIZE];
    // start of measurement
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let v = vs[idx];
let b = &mut buf[idx];
for j in 0..5 {
b[j] = (v >> (8 * j)) as u8;
}
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
    // work outside the timed section
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..5 {
total += buf[i][j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="KN8H94DetIc8" outputId="f428c896-fa11-4d42-bbea-da68bd1d2c3f"
# !cargo run --release
# + [markdown] id="IIfFsm_OxIPi"
# Surprisingly, even though the inner loop shrank from 8 iterations to 5, this took longer than converting a 64-bit integer to 8 bytes. Apparently there is some extreme optimization that only kicks in for exactly 8 iterations.
# + [markdown] id="nVlAckzSuNMl"
# What happens if we unroll the inner loop?
# + colab={"base_uri": "https://localhost:8080/"} id="IEOCv-n1tyG_" outputId="972a945c-5044-43a6-eeba-0475fcd07ba3"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [[u8; 5]; SIZE] = [[0; 5]; SIZE];
    // start of measurement
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let v = vs[idx];
let b = &mut buf[idx];
b[0] = v as u8;
b[1] = (v >> 8) as u8;
b[2] = (v >> 16) as u8;
b[3] = (v >> 24) as u8;
b[4] = (v >> 32) as u8;
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
    // work outside the timed section
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..5 {
total += buf[i][j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="zjSI3NZ4t0SB" outputId="9e8b93ad-9eb7-4d27-c38f-9f23d675a708"
# !cargo run --release
# + [markdown] id="YXTSLsXruftI"
# No change this time.
# + [markdown] id="4NGRhgwu1mV2"
# Next, using the library function.
# + colab={"base_uri": "https://localhost:8080/"} id="6_yiRCs5wpaZ" outputId="1a10ac33-963e-4ae7-97a0-93b87e657a10"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [[u8;5]; SIZE] = [[0; 5]; SIZE];
    // start of measurement
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let dst = &mut buf[idx];
let src = vs[idx].to_le_bytes();
for j in 0..5 {
dst[j] = src[j];
}
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
    // work outside the timed section
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..5 {
total += buf[i][j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="ebLQVXpLwscc" outputId="f9b39522-7dfe-4750-f0c4-68d87c892d3d"
# !cargo run --release
# + [markdown] id="8aBlJWss119g"
# Considerably faster.
# + [markdown] id="YOqNmfXp16QO"
# What about the other function?
# + colab={"base_uri": "https://localhost:8080/"} id="Xc9uX8vHzBtm" outputId="e30ba09a-54d5-45ff-a4f8-0429db3d29a3"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [[u8;5]; SIZE] = [[0; 5]; SIZE];
    // start of measurement
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
unsafe {
let dst = &mut buf[idx];
let src = std::mem::transmute::<u64, [u8; 8]>(vs[idx]);
for j in 0..5 {
dst[j] = src[j];
}
}
}
}
let end = start.elapsed();
    // end of measurement
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
    // work outside the timed section
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..5 {
total += buf[i][j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="5E0bFuc8zDgB" outputId="26187199-6552-4576-d277-3dde543f6dab"
# !cargo run --release
# + [markdown] id="wNYu0Qrp2BvJ"
# The same.
# + [markdown] id="DPAlHYfP3LB1"
# Next, using pointers.
# + colab={"base_uri": "https://localhost:8080/"} id="xO9iE3-UXtPF" outputId="d9d2de55-5097-472f-bda1-fec21a553d5a"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [[u8;5]; SIZE] = [[0; 5]; SIZE];
    // start of measurement
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let v = vs[idx];
let b = &mut buf[idx];
unsafe {
// use as_mut_ptr and an unaligned store: a [u8; 5] has no 4-byte alignment guarantee
(b.as_mut_ptr() as *mut u32).write_unaligned(v as u32);
b[4] = (v >> 32) as u8;
}
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
// 時間測定外の処理
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..5 {
total += buf[i][j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="XvehP_8VXsx8" outputId="f5a85b08-7d66-4235-9bd5-773f4f48440f"
# !cargo run --release
# + [markdown] id="vehGfQvb3QV9"
# Same time again.
# + [markdown] id="eVNuSiUZ3sOO"
# Let's use a function for the type conversion.
# + colab={"base_uri": "https://localhost:8080/"} id="ict6TYO1prQ3" outputId="0203104a-30ba-4373-f8b4-32c7756a63f2"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [[u8; 5]; SIZE] = [[0; 5]; SIZE];
// start timing
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let v = vs[idx];
let b = &mut buf[idx];
unsafe {
// unaligned store, since [u8; 5] has no 4-byte alignment guarantee
(b.as_mut_ptr() as *mut u32).write_unaligned(v as u32);
b[4] = std::mem::transmute::<u64, [u8; 8]>(v)[4];
}
}
}
let end = start.elapsed();
// end timing
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
// processing outside the timed region
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..5 {
total += buf[i][j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="XvMclpoQprG_" outputId="f397e7ed-d9d3-46b0-ee0e-f4e058666ef4"
# !cargo run --release
# + [markdown] id="tEyY-FgP36Er"
# The time was the same.
# + [markdown] id="72uw1RL20nS1"
# ## Converting a 5-byte array to a 40-bit integer
#
# > `u64::from_le_bytes() & 0xFF_FFFF_FFFF` is the recommended approach
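A minimal sketch of this recommendation (the helper name `u40_from_5_bytes` is ours): pad the 5 bytes out to an 8-byte little-endian buffer, load it as a `u64`, and mask down to the low 40 bits.

```rust
// Sketch (ours) of the recommended conversion: padded little-endian load + mask.
fn u40_from_5_bytes(b: [u8; 5]) -> u64 {
    u64::from_le_bytes([b[0], b[1], b[2], b[3], b[4], 0, 0, 0]) & 0xFF_FFFF_FFFF
}

fn main() {
    assert_eq!(u40_from_5_bytes([0x01, 0x02, 0x03, 0x04, 0x05]), 0x05_0403_0201);
    assert_eq!(u40_from_5_bytes([0xFF; 5]), 0xFF_FFFF_FFFF);
}
```

As the benchmarks below suggest, the fast variant keeps the data in zero-padded `[u8; 8]` slots so the load needs no per-element rebuilding of the byte array.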
# + [markdown] id="-E5YOKWe5ZZi"
# First, the naive approach.
# + colab={"base_uri": "https://localhost:8080/"} id="z9XmkVT84e6A" outputId="517f5ec3-71dd-407f-ce46-a633340f97e1"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 5]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8])
}
let start = Instant::now();
let mut total: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let b = buf[idx];
let mut v = 0;
for j in 0..5 {
v += (b[j] as u64) << (8 * j);
}
total += v;
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="NU1JGdH74euD" outputId="38d19692-7b89-41d5-c0ae-190739ab8a1a"
# !cargo run --release
# + [markdown] id="WjmZlLsz5eMy"
# Unrolling the for loop.
# + colab={"base_uri": "https://localhost:8080/"} id="PYywPApH1c1U" outputId="06d5d0b2-9e9e-4b94-f51c-bc56e0cacf64"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 5]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8])
}
let start = Instant::now();
let mut total: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let b = buf[idx];
let mut v = b[0] as u64;
v += (b[1] as u64) << 8;
v += (b[2] as u64) << 16;
v += (b[3] as u64) << 24;
v += (b[4] as u64) << 32;
total += v;
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="qeQ6QDlY1fJU" outputId="2aaeba6c-904b-4ee4-f3b4-afe27f042796"
# !cargo run --release
# + [markdown] id="mg8WHhWK52Vu"
# In this case, unrolling the for loop did not change the execution time.
# + [markdown] id="jPMmBZ2j5_pw"
# Next, using a function.
# + colab={"base_uri": "https://localhost:8080/"} id="ygwYK9pK2DHh" outputId="dd5e5261-c4a5-4670-f4ec-dc69d0e13306"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 8]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
0, 0, 0])
}
let start = Instant::now();
let mut total: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
total += u64::from_le_bytes(buf[idx]) & 0xFF_FFFF_FFFF;
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="jmAqZ3JS2E91" outputId="ec644e70-492a-41dc-ec46-00778599765c"
# !cargo run --release
# + [markdown] id="-DT8WUGz6Exl"
# That's faster.
# + [markdown] id="DWlVEjIc6ZBE"
# Let's try a different way of truncating to 40 bits.
# + colab={"base_uri": "https://localhost:8080/"} id="jlsNNUxEtaYK" outputId="b0d1cb58-e3c3-48a1-d806-9b60eb23888b"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 5]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8])
}
let start = Instant::now();
let mut total: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let b = &buf[idx];
total += u64::from_le_bytes([b[0], b[1], b[2], b[3], b[4], 0, 0, 0]);
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="lOTbr4IStaR6" outputId="bf7962e2-b80a-46f4-ec03-601c61244473"
# !cargo run --release
# + [markdown] id="YOBltiO86gFV"
# That was slower.
# + [markdown] id="jewERp1G6lz1"
# Yet another approach.
# + colab={"base_uri": "https://localhost:8080/"} id="-h3tnX9A2TtO" outputId="49bc0721-0d71-4945-bc2c-07caed9b7d0f"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 5]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8])
}
let start = Instant::now();
let mut total: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let b = buf[idx];
total += u32::from_le_bytes([b[0], b[1], b[2], b[3]]) as u64 + ((b[4] as u64) << 32);
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="bkUahIlU2TY1" outputId="05686f19-851d-415c-e5c3-ed957ed1426c"
# !cargo run --release
# + [markdown] id="MEKnX9F96r97"
# Slow.
# + [markdown] id="Yc1TIf5Q6wkW"
# Yet another approach.
# + colab={"base_uri": "https://localhost:8080/"} id="HGM4GhnnnUkq" outputId="964e9d7b-fd36-496b-8a84-3951b04f3af8"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 5]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8])
}
let start = Instant::now();
let mut total: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let b = buf[idx];
unsafe {
// unaligned 4-byte load of the low 32 bits ([u8; 5] has alignment 1)
let lo = (b.as_ptr() as *const u32).read_unaligned();
total += (lo as u64) + ((b[4] as u64) << 32);
}
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="2Fw4_aQnnUe6" outputId="a3e7bac4-021d-4a76-81a8-0cddaf373033"
# !cargo run --release
# + [markdown] id="IFL1kJD769FM"
# Slow.
# + [markdown] id="eQJmGwgP7GlN"
# Let's try pointer-based approaches.
#
# First, the one using the 0xFF_FFFF_FFFF mask.
# + colab={"base_uri": "https://localhost:8080/"} id="mY74CvBtiS3K" outputId="a7c6e97e-73ab-456b-fcdc-8cf3b8cd0488"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 8]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
0, 0, 0]);
}
let start = Instant::now();
let mut total: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let b = buf[idx];
unsafe {
// unaligned 8-byte load, then mask down to the low 40 bits
let ptr = b.as_ptr() as *const u64;
total += ptr.read_unaligned() & 0xFF_FFFF_FFFF;
}
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="awFt-NCAiSuS" outputId="30efcfc2-3244-4ad8-ef24-2231cb314dd0"
# !cargo run --release
# + [markdown] id="XXcV_XVD7aqU"
# That's faster.
# + [markdown] id="_SSJtY427gGA"
# Below are other truncation approaches.
# + colab={"base_uri": "https://localhost:8080/"} id="V1qHUSJKuSyJ" outputId="51f6fbf7-ac09-476f-d666-9c6b834e3d62"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 5]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8]);
}
let start = Instant::now();
let mut total: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let b = buf[idx];
let b = [b[0], b[1], b[2], b[3], b[4], 0, 0, 0];
unsafe {
// unaligned 8-byte load from the rebuilt byte array
total += (b.as_ptr() as *const u64).read_unaligned();
}
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="_hnbvJWJuSsx" outputId="2e850059-2ba0-4f71-b28a-423915e33bf4"
# !cargo run --release
# + [markdown] id="sKvjXoC27t8G"
# Slow.
# + colab={"base_uri": "https://localhost:8080/"} id="YzWCIZwbsNW0" outputId="6eec8e49-e0f1-4c09-8689-068bf97f3437"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 5]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8])
}
let start = Instant::now();
let mut total: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let b = buf[idx];
total += u64::from_le_bytes([b[0], b[1], b[2], b[3], b[4], 0, 0, 0]);
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="2ydtJHxDsNNd" outputId="9734cb53-95b7-4071-ccb2-37430d00a5ed"
# !cargo run --release
# + [markdown] id="qtJP-31c73gr"
# Slow.
# + colab={"base_uri": "https://localhost:8080/"} id="zzIKQvFls-tM" outputId="fec977b1-31ea-45bf-9818-905941f106c1"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut buf: Vec<[u8; 5]> = Vec::new();
for _ in 0..SIZE {
buf.push([rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8,
rng.gen_range(0..256) as u8])
}
let start = Instant::now();
let mut total: u64 = 0;
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let b = buf[idx];
total += (b[0] as u64) + ((b[1] as u64) << 8) + ((b[2] as u64) << 16) + ((b[3] as u64) << 24) + ((b[4] as u64) << 32);
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="xJjKlvYps-k3" outputId="fe20f96f-c097-442d-b73a-15fbe90f2304"
# !cargo run --release
# + [markdown] id="jwjUohv677u2"
# Slow.
# + [markdown] id="UrnHAtF0CfPJ"
# ## Packing an array of 40-bit integers into a 5N-byte array
#
# > `*(buf[idx*5..].as_ptr() as *mut u64) = v` is the fastest
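A safe sketch equivalent to the quoted pointer write (the function name `pack_u40s` is ours): each value's low 5 little-endian bytes go into a flat `5 * N` buffer. Note that the quoted 8-byte store also writes 3 bytes past each 5-byte slot, so it relies on the next slot (or slack at the end of the buffer) being writable; the safe version below writes exactly 5 bytes per value.

```rust
// Sketch (ours): safe packing of 40-bit values into a flat 5*N byte buffer.
fn pack_u40s(values: &[u64], buf: &mut [u8]) {
    assert!(buf.len() >= values.len() * 5);
    for (i, &v) in values.iter().enumerate() {
        let bytes = v.to_le_bytes(); // [u8; 8], little-endian order
        buf[i * 5..i * 5 + 5].copy_from_slice(&bytes[..5]);
    }
}

fn main() {
    let mut buf = [0u8; 10];
    pack_u40s(&[0x05_0403_0201, 0xFF_FFFF_FFFF], &mut buf);
    assert_eq!(buf, [1, 2, 3, 4, 5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF]);
}
```

Whether this matches the raw pointer write's speed is not something we measured here; it only pins down the intended semantics.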
# + [markdown] id="tLalGcT98IOS"
# The strategy shown in the figure below can be used to convert an array of 40-bit integers into a 5N-byte array.
# + [markdown] id="i60NjXX7bTR_"
# (figure: byte-packing strategy diagram; the embedded image data is corrupted)
# + [markdown] id="QcXWKHLW_KET"
# First, a naive implementation.
# + colab={"base_uri": "https://localhost:8080/"} id="lC2kY7S29VjH" outputId="c14128dc-7a8d-40d1-d7e3-8708104e7733"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [u8; 5 * SIZE] = [0; 5 * SIZE];
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let v = vs[idx];
let b = &mut buf[(idx*5)..];
for i in 0..5 {
b[i] = (v >> (8 * i)) as u8;
}
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
// work outside the timed region
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..5 {
total += buf[i*5+j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="QsPDRdJQ9VZ0" outputId="3f3e7207-9fba-4e2f-df69-884e6c52396d"
# !cargo run --release
# + [markdown] id="fHtXVBDP_NVe"
# Slow.
# + [markdown] id="wMnwezYX_Vmj"
# Unrolling the inner for loop.
# + colab={"base_uri": "https://localhost:8080/"} id="Opv8hlPmQYR-" outputId="214380c0-46e7-4b3d-a841-4ef5f211ece8"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [u8; 5 * SIZE] = [0; 5 * SIZE];
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let v = vs[idx];
let b = &mut buf[(idx*5)..];
b[0] = v as u8;
b[1] = (v >> 8) as u8;
b[2] = (v >> 16) as u8;
b[3] = (v >> 24) as u8;
b[4] = (v >> 32) as u8;
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
// work outside the timed region
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..5 {
total += buf[i*5+j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="5y_z_xUyQa-3" outputId="63d4a5e9-160c-4218-90e2-a52d28315dac"
# !cargo run --release
# + [markdown] id="EXN7k502_bcF"
# Same as the non-unrolled version, and still slower than Go.
# + [markdown] id="_HDZSfqN_h1H"
# Using a library function (`to_le_bytes`).
# + colab={"base_uri": "https://localhost:8080/"} id="hEgxaV6lSi7p" outputId="dd506c94-8e78-4488-98a9-384a6cfbc7aa"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
let mut buf: [u8; 5 * SIZE] = [0; 5 * SIZE];
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
let src = vs[idx].to_le_bytes();
let b = &mut buf[(idx*5)..];
b[0] = src[0];
b[1] = src[1];
b[2] = src[2];
b[3] = src[3];
b[4] = src[4];
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
// work outside the timed region
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..5 {
total += buf[i*5+j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="qGyV9iwbSiwN" outputId="51aece0d-553a-42e1-ff72-eb2d8d708b07"
# !cargo run --release
# + [markdown] id="JTQQAjCw_kCF"
# Same as before.
# + [markdown] id="_2FeMOoe_nmE"
# Finally, using raw pointers.
# + colab={"base_uri": "https://localhost:8080/"} id="pnj3PHXhT2lY" outputId="7a8924d8-4308-4b3b-e81e-9912e13f07fa"
# %%writefile src/main.rs
use std::time::Instant;
use rand;
use rand::prelude::*;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut vs: [u64; SIZE] = [0; SIZE];
for i in 0..SIZE {
vs[i] = rng.gen_range(0..(1<<40));
}
// 3 bytes of padding so the 8-byte store for the last element stays in bounds
let mut buf: [u8; 5 * SIZE + 3] = [0; 5 * SIZE + 3];
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
unsafe {
// write_unaligned avoids the UB of a plain store through an unaligned pointer
(buf[idx*5..].as_mut_ptr() as *mut u64).write_unaligned(vs[idx]);
}
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
// work outside the timed region
println!("{:?}", buf[0]);
let mut total: usize = 0;
for i in 0..SIZE {
for j in 0..5 {
total += buf[i*5+j] as usize;
}
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="HJbuB9gUT2eF" outputId="7f5b0290-1432-4fad-9aa4-42a3a3802529"
# !cargo run --release
# + [markdown] id="DY4z67Xz_sG4"
# Fast. Rust finally lives up to its reputation: with raw pointers we finally beat Go.
# + [markdown] id="2pTZdT2SCfB9"
# ## Converting a 5N-byte array into an array of 40-bit integers
#
# > `v = *(ans[idx*5..].as_ptr() as *mut u64) & 0xFF_FFFF_FFFF` is the fastest
# + [markdown] id="cLBg88nbAwOv"
# First, a naive implementation.
# + colab={"base_uri": "https://localhost:8080/"} id="c62h9NfsAG01" outputId="dc19b245-827f-4b8a-ab90-4c45249154b2"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut ans:[u8; SIZE*5] = [0; SIZE*5];
for i in 0..SIZE {
for j in 0..5 {
ans[i*5+j] = rng.gen_range(0..256) as u8;
}
}
let mut buf:[u64; SIZE] = [0; SIZE];
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for i in 0..SIZE {
let i5 = i * 5;
let mut v = 0;
for j in 0..5 {
v += (ans[i5+j] as u64) << (8 * j);
}
buf[i] = v;
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
let mut total: u64 = 0;
for i in 0..SIZE {
total += buf[i];
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="g0Z59ThZAGqv" outputId="fa19b1bc-8a06-4738-c697-16c758256a3a"
# !cargo run --release
# + [markdown] id="jEfpVc_JAz6G"
# Unrolling the inner for loop.
# + colab={"base_uri": "https://localhost:8080/"} id="tyYoKbHKbL4I" outputId="aa68855a-c497-4799-ce41-5f14970c5ce0"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut ans:[u8; SIZE*5] = [0; SIZE*5];
for i in 0..SIZE {
for j in 0..5 {
ans[i*5+j] = rng.gen_range(0..256) as u8;
}
}
let mut buf:[u64; SIZE] = [0; SIZE];
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for i in 0..SIZE {
let i5 = i * 5;
let mut v = ans[i5] as u64;
v += (ans[i5+1] as u64) << 8;
v += (ans[i5+2] as u64) << 16;
v += (ans[i5+3] as u64) << 24;
v += (ans[i5+4] as u64) << 32;
buf[i] = v;
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
let mut total: u64 = 0;
for i in 0..SIZE {
total += buf[i];
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="QenSKVMXbNFN" outputId="3a805343-b46a-4f53-ea8a-c3b8eb470002"
# !cargo run --release
# + [markdown] id="IBjmCX3ZA6PG"
# For once, unrolling the for loop actually helped.
# + [markdown] id="ZLSAbnf0BLG6"
# Using a library function (`from_le_bytes`).
# + colab={"base_uri": "https://localhost:8080/"} id="uqSnxQ-Ne3oS" outputId="bed05071-3024-4804-b6bd-bef915133d93"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut ans:[u8; SIZE*5] = [0; SIZE*5];
for i in 0..SIZE {
for j in 0..5 {
ans[i*5+j] = rng.gen_range(0..256) as u8;
}
}
let mut buf:[u64; SIZE] = [0; SIZE];
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for i in 0..SIZE {
let i5 = i * 5;
buf[i] = u64::from_le_bytes([ans[i5], ans[i5+1], ans[i5+2], ans[i5+3], ans[i5+4], 0, 0, 0]);
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
let mut total: u64 = 0;
for i in 0..SIZE {
total += buf[i];
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="u0yNif6Re3fc" outputId="ad9d67ee-0ba8-43bd-ad01-2f2ff19d006f"
# !cargo run --release
# + [markdown] id="AvNLcUzEBN5J"
# No change.
# + [markdown] id="JGq1iJHLBcyZ"
# Another way to assemble the value.
# + colab={"base_uri": "https://localhost:8080/"} id="ywiWdivJg2Q7" outputId="90a31172-127f-423b-846a-459c5b250860"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut ans:[u8; SIZE*5] = [0; SIZE*5];
for i in 0..SIZE {
for j in 0..5 {
ans[i*5+j] = rng.gen_range(0..256) as u8;
}
}
let mut buf:[u64; SIZE] = [0; SIZE];
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for i in 0..SIZE {
let i5 = i * 5;
buf[i] = (u32::from_le_bytes([ans[i5], ans[i5+1], ans[i5+2], ans[i5+3]]) as u64) + ((ans[i5+4] as u64) << 32);
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
let mut total: u64 = 0;
for i in 0..SIZE {
total += buf[i];
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="AAiXHP3Gg3BW" outputId="c5f5bd03-f2f9-4725-810f-8b2caa909c85"
# !cargo run --release
# + [markdown] id="QhNQsKezBhpW"
# Almost the same.
# + [markdown] id="i4zqs2MBBmRi"
# Assembling the value with a 0xFF_FFFF_FFFF mask.
# + colab={"base_uri": "https://localhost:8080/"} id="_duxXUqahzsI" outputId="74b9a1de-a77e-4a6a-ef54-89d4c3babc3f"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
// 3 bytes of padding so the 8-byte load for the last element stays in bounds
let mut ans:[u8; SIZE*5 + 3] = [0; SIZE*5 + 3];
for i in 0..SIZE {
for j in 0..5 {
ans[i*5+j] = rng.gen_range(0..256) as u8;
}
}
let mut buf:[u64; SIZE] = [0; SIZE];
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
unsafe {
// read_unaligned: the pointer is generally not 8-byte aligned
buf[idx] = (ans[idx*5..].as_ptr() as *const u64).read_unaligned() & 0xFF_FFFF_FFFF;
}
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
let mut total: u64 = 0;
for i in 0..SIZE {
total += buf[i];
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="-Zi_vdM1hzl3" outputId="01790963-858e-4d51-95ba-53453231e446"
# !cargo run --release
# + [markdown] id="rhHBojFMBr_2"
# Fast. This is the fastest.
# + [markdown] id="TesGVn2FByrW"
# Just to be sure:
# + colab={"base_uri": "https://localhost:8080/"} id="QKKxiJmrjrHS" outputId="efd9057d-6c04-4dc6-89a5-1c2f7862bbef"
# %%writefile src/main.rs
use rand;
use rand::prelude::*;
use std::time::Instant;
fn main() {
const COUNT: usize = 10_000_000_000;
const SIZE: usize = 100_000;
let mut rng: rand::rngs::StdRng = rand::SeedableRng::seed_from_u64(0);
let mut ans:[u8; SIZE*5] = [0; SIZE*5];
for i in 0..SIZE {
for j in 0..5 {
ans[i*5+j] = rng.gen_range(0..256) as u8;
}
}
let mut buf:[u64; SIZE] = [0; SIZE];
let start = Instant::now();
for _ in 0..COUNT/SIZE {
for idx in 0..SIZE {
unsafe {
buf[idx] = ((ans[idx*5..].as_ptr() as *const u32).read_unaligned() as u64) + ((ans[idx*5+4] as u64) << 32);
}
}
}
let end = start.elapsed();
println!(
"{}.{:03} sec",
end.as_secs(),
end.subsec_nanos() / 1_000_000
);
let mut total: u64 = 0;
for i in 0..SIZE {
total += buf[i];
}
println!("{}", total);
}
# + colab={"base_uri": "https://localhost:8080/"} id="SNP2gMz4jrBB" outputId="69e6884f-f840-4157-8229-c10aaba96203"
# !cargo run --release
# + [markdown] id="TQD3fVkrCHWQ"
# This one was slow.
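# As a footnote: the same 40-bit packing/unpacking can also be written without `unsafe` using `copy_from_slice`. The helper names below are mine and this variant is not benchmarked in this notebook; on x86-64 it typically compiles to plain unaligned moves too, so it is worth trying before reaching for raw pointers.

```rust
// Safe counterparts to the raw-pointer tricks above (a sketch, not benchmarked here).
fn put_u40_le(buf: &mut [u8], v: u64) {
    // copy the low 5 bytes of the little-endian representation
    buf[..5].copy_from_slice(&v.to_le_bytes()[..5]);
}

fn get_u40_le(buf: &[u8]) -> u64 {
    // widen 5 bytes to 8, then decode as little-endian
    let mut bytes = [0u8; 8];
    bytes[..5].copy_from_slice(&buf[..5]);
    u64::from_le_bytes(bytes)
}

fn main() {
    let mut buf = [0u8; 5];
    put_u40_le(&mut buf, 0x12_3456_789A);
    assert_eq!(buf, [0x9A, 0x78, 0x56, 0x34, 0x12]);
    assert_eq!(get_u40_le(&buf), 0x12_3456_789A);
    println!("round trip ok");
}
```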
| RustUint40.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Download this page as a jupyter notebook at [Lab 11-TH](http://172.16.58.3/engr-1330-webroot/8-Labs/Lab11/Lab11-TH.ipynb)
# # <font color=darkred>ES-11: Databases </font>
#
# **LAST NAME, FIRST NAME**
#
# **R00000000**
#
# ENGR 1330 Laboratory 11 - Homework
# Preamble script block to identify host, user, and kernel
import sys
# ! hostname
# ! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
# ## <font color=purple>Pandas Cheat Sheet(s)</font>
# The Pandas library is a preferred tool for data scientists to perform data manipulation and analysis, next to matplotlib for data visualization and NumPy for scientific computing in Python.
#
# The fast, flexible, and expressive Pandas data structures are designed to make real-world data analysis significantly easier, but this might not be immediately the case for those who are just getting started with it. Exactly because there is so much functionality built into this package that the options are overwhelming.
#
# Hence summary sheets will be useful
#
# - A summary sheet: [https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf)
#
# - A different one: [http://datacamp-community-prod.s3.amazonaws.com/f04456d7-8e61-482f-9cc9-da6f7f25fc9b](http://datacamp-community-prod.s3.amazonaws.com/f04456d7-8e61-482f-9cc9-da6f7f25fc9b)
import pandas
import numpy
# # Exercise 1: Reading a File into a Dataframe
#
# Pandas has methods to read common file types, such as `csv`,`xlsx`, and `json`. Ordinary text files are also quite manageable. (We will study these more in Lesson 11)
#
# Here are the steps to follow:
#
# 1. Download the file [CSV_ReadingFile.csv](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab11/CSV_ReadingFile.csv) to your local computer
# 2. Run the cell below - it connects to the file, reads it into the object `readfilecsv`
# 3. Print the contents of the object `readfilecsv`
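# Before touching the lab's file, here is a tiny self-contained warm-up (the data below is made up purely for illustration): `pandas.read_csv` accepts any file-like object, so we can feed it an in-memory string and inspect the resulting dataframe.

```python
import io

import pandas

# a made-up two-row CSV, just to show the reader API
csv_text = "name,score\nAda,90\nGrace,95\n"
demo = pandas.read_csv(io.StringIO(csv_text))
print(demo.shape)          # (2, 2)
print(list(demo.columns))  # ['name', 'score']
```

The same pattern applies to `pandas.read_excel` and `pandas.read_json`, swapping in the matching file type.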
# download the file (do this before running the script)
readfilecsv = pandas.read_csv('CSV_ReadingFile.csv') # Reading a .csv file
# print the contents of readfilecsv
# +
# How many rows are in the data table? more code here
# How many columns?
# -
# ## Exercise 2
# Now that you have downloaded and read a file, let's do it again, but with feeling!
#
# Download the file named [concreteData.xls](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab11/concreteData.xls) to your local computer.
#
# > The file is an Excel 97-2004 Workbook; you probably cannot inspect it within Anaconda (though it may open). The file size is about 130K; we are going to rely on Pandas to do the work here!
#
# Read the file into a dataframe object named **'concreteData'**; the method call is
#
# > - object_name = pandas.read_excel(filename)
# > - It should work as above if you fill in the placeholders correctly
#
# Then perform the following activities.
#
# 1. Read the file into an object
# +
# code here looks like object_name = pandas.read_excel(filename)
# -
# 2. Examine the first few rows of the dataframe and describe the structure (using words) in a markdown cell just after you run the descriptor method
# +
# code here looks like object_name.head()
# -
# 3. Simplify the column names to "Cement", "BlastFurnaceSlag", "FlyAsh", "Water", "Superplasticizer", "CoarseAggregate", "FineAggregate", "Age", "CC_Strength"
# +
# code here
# -
# 4. Determine and report summary statistics for each of the columns.
# +
# code here
# -
# 5. Then run the script below in your notebook (after the summary statistics), and describe the output (using words) in a markdown cell.
#
# Once `concreteData` exists and is non-empty (how do you know?),
# run the code block below -- it takes a while to render output, give it a minute:
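# One generic way to verify that a dataframe exists and is non-empty (shown here on a throwaway frame, not the lab data) is to check its `.empty` flag and `.shape`:

```python
import pandas

# throwaway example frame, unrelated to concreteData
demo = pandas.DataFrame({"a": [1, 2, 3]})
print(demo.empty)  # False
print(demo.shape)  # (3, 1)
```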
import matplotlib.pyplot
import seaborn
# %matplotlib inline
seaborn.pairplot(concreteData)
matplotlib.pyplot.show()
# +
# specify/summarize the output here!
| 8-Labs/Lab11/Lab11-TH.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dvfx2
# language: python
# name: dvfx2
# ---
# +
import torch
import torch.nn as nn
import torchvision.datasets
import h5py
import zipfile
import imageio
import os
import numpy
from torch.utils.data import Dataset
import matplotlib.pyplot as plt
import random
import pandas
import time
import cv2
# +
# check if CUDA is available
# if yes, set default tensor type to cuda
if torch.cuda.is_available():
torch.set_default_tensor_type(torch.cuda.FloatTensor)
print("using cuda:", torch.cuda.get_device_name(0))
pass
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
# +
# functions to generate random data
def generate_random_image(size):
random_data = torch.rand(size)
return random_data
def generate_random_seed(size):
random_data = torch.randn(size)
return random_data
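# The two helpers differ only in distribution: `torch.rand` is uniform on [0, 1) (used here for fake "images"), while `torch.randn` is standard normal (used for generator seeds). The same distinction, illustrated torch-free in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
uniform = rng.random(10_000)          # like torch.rand: uniform on [0, 1)
normal = rng.standard_normal(10_000)  # like torch.randn: mean 0, std 1

print(uniform.min() >= 0.0 and uniform.max() < 1.0)
print(abs(normal.mean()) < 0.1)
```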
# +
# modified from https://github.com/pytorch/vision/issues/720
class View(nn.Module):
def __init__(self, shape):
super().__init__()
self.shape = shape,  # trailing comma: store as a 1-tuple so view(*self.shape) accepts an int or a tuple
def forward(self, x):
return x.view(*self.shape)
# +
# crop (numpy array) image to given width and height
def crop_centre(img, new_width, new_height):
height, width, _ = img.shape
startx = width//2 - new_width//2
starty = height//2 - new_height//2
return img[ starty:starty + new_height, startx:startx + new_width, :]
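# A quick self-contained shape check for `crop_centre` (CelebA aligned images are 218×178, so a 128×128 centre crop fits; the function body is repeated here so this cell runs on its own):

```python
import numpy as np

def crop_centre(img, new_width, new_height):
    # same logic as the cell above, repeated for a standalone check
    height, width, _ = img.shape
    startx = width // 2 - new_width // 2
    starty = height // 2 - new_height // 2
    return img[starty:starty + new_height, startx:startx + new_width, :]

img = np.zeros((218, 178, 3), dtype=np.uint8)
print(crop_centre(img, 128, 128).shape)  # (128, 128, 3)
```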
# +
# dataset class
class CelebADataset(Dataset):
def __init__(self, file):
self.file_object = h5py.File(file, 'r')
self.dataset = self.file_object['img_align_celeba']
pass
def __len__(self):
return len(self.dataset)
def __getitem__(self, index):
if (index >= len(self.dataset)):
raise IndexError()
img = numpy.array(self.dataset[str(index)+'.jpg'])
# crop to 128x128 square
img = crop_centre(img, 128, 128)
return torch.cuda.FloatTensor(img).permute(2,0,1).view(1,3,128,128) / 255.0
def plot_image(self, index):
img = numpy.array(self.dataset[str(index)+'.jpg'])
# crop to 128x128 square
img = crop_centre(img, 128, 128)
plt.imshow(img, interpolation='nearest')
pass
pass
# +
# create Dataset object
celeba_dataset = CelebADataset('./celeba_dataset/celeba_aligned_small.h5py')
# +
# check data contains images
celeba_dataset.plot_image(1000)
# -
class Discriminator(nn.Module):
def __init__(self):
# initialise parent pytorch class
super().__init__()
# define neural network layers
self.model = nn.Sequential(
# expect input of shape (1,3,128,128)
nn.Conv2d(3, 256, kernel_size=8, stride=2),
nn.BatchNorm2d(256),
#nn.LeakyReLU(0.2),
nn.GELU(),
nn.Conv2d(256, 256, kernel_size=8, stride=2),
nn.BatchNorm2d(256),
#nn.LeakyReLU(0.2),
nn.GELU(),
nn.Conv2d(256, 3, kernel_size=8, stride=2),
#nn.LeakyReLU(0.2),
nn.GELU(),
View(3*10*10),
nn.Linear(3*10*10, 1),
nn.Sigmoid()
)
# create loss function
self.loss_function = nn.BCELoss()
# create optimiser, simple stochastic gradient descent
self.optimiser = torch.optim.Adam(self.parameters(), lr=0.0001)
# counter and accumulator for progress
self.counter = 0
self.progress = []
pass
def forward(self, inputs):
# simply run model
return self.model(inputs)
def train(self, inputs, targets):
# calculate the output of the network
outputs = self.forward(inputs)
# calculate loss
loss = self.loss_function(outputs, targets)
# increase counter and accumulate error every 10
self.counter += 1
if (self.counter % 10 == 0):
self.progress.append(loss.item())
pass
if (self.counter % 30000 == 0):
#print("counter = ", self.counter)
pass
# zero gradients, perform a backward pass, update weights
self.optimiser.zero_grad()
loss.backward()
self.optimiser.step()
pass
def plot_progress(self):
df = pandas.DataFrame(self.progress, columns=['loss'])
df.plot(ylim=(0), figsize=(16,8), alpha=0.1, marker='.', grid=True, yticks=(0, 0.25, 0.5, 1.0, 5.0))
pass
pass
# +
# %%time
# test discriminator can separate real data from random noise
D = Discriminator()
# move model to cuda device
D.to(device)
for image_data_tensor in celeba_dataset:
# real data
D.train(image_data_tensor, torch.cuda.FloatTensor([1.0]))
# fake data
D.train(generate_random_image((1,3,128,128)), torch.cuda.FloatTensor([0.0]))
pass
# +
# plot discriminator loss
D.plot_progress()
# +
# manually run discriminator to check it can tell real data from fake
for i in range(4):
image_data_tensor = celeba_dataset[random.randint(0,20000)]
print( D.forward( image_data_tensor ).item() )
pass
for i in range(4):
print( D.forward( generate_random_image((1,3,128,128))).item() )
pass
# +
# generator class
class Generator(nn.Module):
def __init__(self):
# initialise parent pytorch class
super().__init__()
# define neural network layers
self.model = nn.Sequential(
# input is a 1d array
nn.Linear(100, 3*11*11),
#nn.LeakyReLU(0.2),
nn.Softsign(),
# reshape to 4d
View((1, 3, 11, 11)),
nn.ConvTranspose2d(3, 256, kernel_size=8, stride=2),
nn.BatchNorm2d(256),
#nn.LeakyReLU(0.2),
nn.Softsign(),
nn.ConvTranspose2d(256, 512, kernel_size=18, stride=1),
nn.BatchNorm2d(512),
#nn.LeakyReLU(0.2),
nn.Softsign(),
nn.ConvTranspose2d(512, 256, kernel_size=18, stride=1),
nn.BatchNorm2d(256),
#nn.LeakyReLU(0.2),
nn.Softsign(),
nn.ConvTranspose2d(256, 3, kernel_size=8, stride=2, padding=1),
nn.BatchNorm2d(3),
# output should be (1,3,128,128)
nn.Sigmoid()
)
# create optimiser, simple stochastic gradient descent
self.optimiser = torch.optim.Adam(self.parameters(), lr=0.0001)
# counter and accumulator for progress
self.counter = 0
self.progress = []
pass
def forward(self, inputs):
# simply run model
return self.model(inputs)
def train(self, D, inputs, targets):
# calculate the output of the network
g_output = self.forward(inputs)
# pass onto Discriminator
d_output = D.forward(g_output)
# calculate error
loss = D.loss_function(d_output, targets)
# increase counter and accumulate error every 10
self.counter += 1
if (self.counter % 10 == 0):
self.progress.append(loss.item())
pass
# zero gradients, perform a backward pass, update weights
self.optimiser.zero_grad()
loss.backward()
self.optimiser.step()
pass
def plot_progress(self):
df = pandas.DataFrame(self.progress, columns=['loss'])
df.plot(ylim=(0), figsize=(16,8), alpha=0.1, marker='.', grid=True, yticks=(0, 0.25, 0.5, 1.0, 5.0))
pass
pass
# +
# check the generator output is of the right type and shape
G = Generator()
# move model to cuda device
G.to(device)
output = G.forward(generate_random_seed(100))
img = output.detach().permute(0,2,3,1).view(128,128,3).cpu().numpy()
plt.imshow(img, interpolation='none', cmap='Blues')
# -
D = Discriminator()
G = Generator()
D.to(device)
G.to(device)
# +
# train Discriminator and Generator
epochs = 20
a = time.time()
for epoch in range(epochs):
print ("epoch = ", epoch + 1)
# train Discriminator and Generator
for image_data_tensor in celeba_dataset:
# train discriminator on true
D.train(image_data_tensor, torch.cuda.FloatTensor([1.0]))
# train discriminator on false
# use detach() so gradients in G are not calculated
D.train(G.forward(generate_random_seed(100)).detach(), torch.cuda.FloatTensor([0.0]))
# train generator
G.train(D, generate_random_seed(100), torch.cuda.FloatTensor([1.0]))
# save a sample generated face each epoch for visualization
output = G.forward(generate_random_seed(100))
img = output.detach().permute(0,2,3,1).view(128,128,3).cpu().numpy()
# convert the RGB float image in [0, 1] to BGR uint8 before writing with OpenCV
cv2.imwrite("v8_vis/face_" + str(epoch+1) + ".jpg", cv2.cvtColor((img * 255).astype(numpy.uint8), cv2.COLOR_RGB2BGR))
b = time.time()
print(b-a)
# +
# plot discriminator error
D.plot_progress()
# +
# plot generator error
G.plot_progress()
# +
# plot several outputs from the trained generator
# plot a 3 column, 2 row array of generated images
f, axarr = plt.subplots(2,3, figsize=(16,8))
for i in range(2):
for j in range(3):
output = G.forward(generate_random_seed(100))
img = output.detach().permute(0,2,3,1).view(128,128,3).cpu().numpy()
axarr[i,j].imshow(img, interpolation='none', cmap='Blues')
pass
pass
# +
# current memory allocated to tensors (in Gb)
torch.cuda.memory_allocated(device) / (1024*1024*1024)
# +
# total memory allocated to tensors during program (in Gb)
torch.cuda.max_memory_allocated(device) / (1024*1024*1024)
| face_generate_model_v8_failure.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:geoviews]
# language: python
# name: conda-env-geoviews-py
# ---
# %matplotlib inline
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
# +
# import glob, os
# caseid = "HeldSuarez_dt120"
# paths = glob.glob(f"../OUT_3D/{caseid}*.bin3D")
# path = paths[-1][3:]
# path = os.path.splitext(path)[0]
# ncpath = "../" + path + ".nc"
# # !docker run -v $(pwd)/../:/tmp -w /tmp nbren12/sam bin3D2nc {path}.bin3D
# ds = xr.open_dataset(ncpath)
# -
caseid = "NGAqua_ngaqua.dt15.QOBS"
out_3d_path = "/Users/noah/workspace/models/SAMUWgh/OUT_3D"
out_2d_path = "/Users/noah/workspace/models/SAMUWgh/OUT_2D"
true_path = "/Users/noah/workspace/research/uw-machine-learning/data/raw/2/NG_5120x2560x34_4km_10s_QOBS_EQX/coarse/"
ds = xr.open_mfdataset(f"{out_3d_path}/{caseid}_*.nc", autoclose=True)
# # Climate
mean = ds.mean(['time', 'x'])
# +
p = mean.p
tm = mean.TABS
tht = tm * (1000/p)**(2/7)
levels = np.r_[270:350:5]
im = plt.contourf(tht.y, p, tht, levels=levels, extend='both')
plt.colorbar()
plt.clabel(im, colors='black', inline=False, fmt="%.0f")
plt.ylim([1000, 10])
plt.title("Potential Temperature")
plt.xlabel("y")
plt.ylabel("p")
# tht.plot.contourf(levels=levels)
# -
plt.contourf(tm.y, p, tm)
plt.ylim([1000,10])
plt.colorbar()
plt.title("Absolute Temperature")
plt.xlabel("y")
plt.ylabel("p");
# +
im = plt.contourf(mean.y, mean.p, mean.U,
levels=np.r_[-40:45:5], cmap='RdBu_r',
extend='both')
plt.contour(mean.y, mean.p, mean.U, levels=[-10, -8,-6,-4,-2,0], colors='k')
plt.ylim([1000,10])
plt.colorbar(im)
plt.title("Zonal Velocity")
plt.xlabel("y")
plt.ylabel("p");
# -
# # Spin-up
#
# Now I plot the temporal evolution of some different fields at a lower atmosphere height as the simulation spins up
# Here is the zonal velocity
ds.U[::20, 5].plot(col='time', col_wrap=4)
# and the meridional velocity
ds.V[::20, 5].plot(col='time', col_wrap=4)
# and the absolute temperature
ds.TABS[::20, 5].plot(col='time', col_wrap=4)
# We can see that the equilibrium of the coarse-resolution NG-Aqua is very different from the true simulation. The N-S asymmetry is especially strange. Why is that happening? Something with the radiation? We can look at the 2D fields to find out
d2d = xr.open_mfdataset(f"../OUT_2D/{caseid}*.nc")
d2d.SOLIN[5].plot()
# Hmm, there is no meridional variation in SOLIN. I must have configured the run incorrectly. Is the diurnal cycle there?
d2d.SOLIN[:,32, 0].plot()
# No. I will have to make sure the radiation matches. How is the SST?
d2_true = xr.open_dataset(true_path + "2d/all.nc").sortby('time')
d2_true.SST[0,:,0].plot(label='NG-Aqua')
d2d.SST[0,:,0].plot(label='QOBS')
plt.legend()
# That doesn't match either...damn. Maybe I should just initialize it from the netCDF as well? Weirdly enough, NG-Aqua doesn't look like it's using the QOBS profile. Let's do some reverse engineering.
# +
y = d2_true.y.values
lat = (y-y.mean()) *2.5e-8 * 2 * np.pi
c = np.cos(lat*1.5)
s = np.sin(lat*1.5)
qobs = 273.15 + 27/2 *( 3 * c**2 - c**4)
# qobs = 273.15 + 27/2 * (2 - s**2 - s**4)
# -
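# A quick sanity check on the QOBS expression, independent of the NG-Aqua files (the helper name `qobs_sst` is mine): at the equator lat = 0, so c = cos(0) = 1 and SST = 273.15 + 27/2 * (3 - 1) = 300.15 K, while c -> 0 toward the "poles" gives 273.15 K.

```python
import numpy as np

def qobs_sst(lat):
    # the QOBS-style SST profile reverse-engineered above
    c = np.cos(1.5 * lat)
    return 273.15 + 27 / 2 * (3 * c**2 - c**4)

print(round(qobs_sst(0.0), 2))        # 300.15 at the equator
print(round(qobs_sst(np.pi / 3), 2))  # 273.15 where 1.5 * lat = pi / 2
```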
plt.plot(y, qobs, label='QOBS')
plt.plot(y, d2_true.SST[0,:,0], label='NG-Aqua')
plt.legend()
# I have tried to fix these discrepancies.
caseid = "NGAqua_ngaqua.dt30.fixRad.fixQOBS"
ds = xr.open_mfdataset(f"{out_2d_path}/{caseid}_*.nc", autoclose=True)
ds.SST[0,:,0].plot(label='Fixed')
d2_true.SST[0,:,0].plot(label='NG-Aqua')
plt.legend()
# Nice...the SST patterns line up nicely. What about the insolation?
# +
y = ds.SOLIN.y
sol_coarse = ds.SOLIN[0,:,0]
sol_true = d2_true.SOLIN[23,:,0]
plt.plot(y, sol_coarse, label='Coarse')
plt.plot(y, sol_true, label='NGAqua')
plt.legend()
# -
float(ds.SOLIN.max())
float(d2_true.SOLIN.max())
# It looks like the solar constant is not the same between these simulations for some reason. Maybe because I did some averaging in the y-direction?
| ext/sam/visualization/1.1-init-ng-aqua.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Protodash: NHANES (CDC) data example
# - This notebook shows an example of how to use the ProtodashExplainer defined in [AIX360](https://github.com/IBM/AIX360/) to generate prototypes from (training/test) data. The notebook uses one of the [NHANES CDC questionnaire datasets](https://wwwn.cdc.gov/nchs/nhanes/search/datapage.aspx?Component=Questionnaire&CycleBeginYear=2013), related to incomes of individuals.
# - ProtodashExplainer is an implementation of the [Protodash algorithm](https://arxiv.org/abs/1707.01212)
# ### Protodash Explainer examples
# #### Import statements
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import OneHotEncoder
from aix360.algorithms.protodash import ProtodashExplainer, get_Gaussian_Data
from aix360.datasets import CDCDataset
# -
# #### Load NHANES dataset from CDC
nhanes = CDCDataset()
nhanes_files = nhanes.get_csv_file_names()
(nhanesinfo, _, _) = nhanes._cdc_files_info()
# <a name="explore"></a>
# #### Explore NHANES Income questionnaire dataset
#
# Now let us explore the income questionnaire dataset and find out the types of responses received in the survey. Each column in this dataset corresponds to a question and each row denotes the answers given by a respondent to those questions. Both column names and answers by respondents are encoded. For example, 'SEQN' denotes the sequence number assigned to a respondent and 'IND235' corresponds to a question about monthly family income. As seen below, in most cases a value of 1 implies "Yes" to the question, while a value of 2 implies "No". More details about the income questionnaire and how questions and answers are encoded can be seen [here](https://wwwn.cdc.gov/Nchs/Nhanes/2013-2014/INQ_H.htm)
#
# |Column |Description | Values and Meaning|
# |-------|----------------------------|---------|
# |SEQN | Respondent sequence number |
# |INQ020 | Income from wages/salaries |1->Yes, 2->No, 7->Refused, 9->Don't know|
# |INQ012 | Income from self employment|1->Yes, 2->No, 7->Refused, 9->Don't know|
# |INQ030 | Income from Social Security or RR |1->Yes, 2->No, 7->Refused, 9->Don't know|
# |INQ060 | Income from other disability pension |1->Yes, 2->No, 7->Refused, 9->Don't know|
# |INQ080 | Income from retirement/survivor pension |1->Yes, 2->No, 7->Refused, 9->Don't know|
# |INQ090 | Income from Supplemental Security Income |1->Yes, 2->No, 7->Refused, 9->Don't know|
# |INQ132 | Income from state/county cash assistance |1->Yes, 2->No, 7->Refused, 9->Don't know|
# |INQ140 | Income from interest/dividends or rental |1->Yes, 2->No, 7->Refused, 9->Don't know|
# |INQ150 | Income from other sources |1->Yes, 2->No, 7->Refused, 9->Don't know|
# |IND235 | Monthly family income |1-12->Increasing income brackets, 77->Refused, 99->Don't know|
# |INDFMMPI | Family monthly poverty level index |0-5->Higher value more affluent|
# |INDFMMPC | Family monthly poverty level category |1-3->Increasing INDFMMPI brackets, 7->Refused, 9->Don't know|
# |INQ244 | Family has savings more than $5000 |1->Yes, 2->No, 7->Refused, 9->Don't know|
# |IND247 | Total savings/cash assets for the family |1-6->Increasing savings brackets, 77->Refused, 99->Don't know|
# replace encoded column names by the associated question text.
df_inc = nhanes.get_csv_file('INQ_H.csv')
df_inc.columns[0]
dict_inc = {
'SEQN': 'Respondent sequence number',
'INQ020': 'Income from wages/salaries',
'INQ012': 'Income from self employment',
'INQ030':'Income from Social Security or RR',
'INQ060': 'Income from other disability pension',
'INQ080': 'Income from retirement/survivor pension',
'INQ090': 'Income from Supplemental Security Income',
'INQ132': 'Income from state/county cash assistance',
'INQ140': 'Income from interest/dividends or rental',
'INQ150': 'Income from other sources',
'IND235': 'Monthly family income',
'INDFMMPI': 'Family monthly poverty level index',
'INDFMMPC': 'Family monthly poverty level category',
'INQ244': 'Family has savings more than $5000',
'IND247': 'Total savings/cash assets for the family'
}
qlist = []
for i in range(len(df_inc.columns)):
qlist.append(dict_inc[df_inc.columns[i]])
df_inc.columns = qlist
print("Answers given by some respondents to the income questionnaire:")
df_inc.head(5)
# +
print("Number of respondents to Income questionnaire:", df_inc.shape[0])
print("Distribution of answers to \'monthly family income\' and \'Family savings\' questions:")
fig, axes = plt.subplots(1, 2, figsize=(10,5))
fig.subplots_adjust(wspace=0.5)
hist1 = df_inc['Monthly family income'].value_counts().plot(kind='bar', ax=axes[0])
hist2 = df_inc['Family has savings more than $5000'].value_counts().plot(kind='bar', ax=axes[1])
plt.show()
# -
# <a name="study1"></a>
# #### Summarize NHANES Income Questionnaire dataset using Prototypes
#
# Consider a social scientist who would like to quickly obtain a summary report of this dataset in terms of the types of people it contains. Is it possible to summarize this dataset by looking at answers given by a few representative/prototypical respondents?
#
# We now show how the ProtodashExplainer can be used to obtain a few prototypical respondents (about 10 in this example) that span the diverse set of individuals answering the income questionnaire, making it easy for the social scientist to summarize the dataset.
# +
# convert pandas dataframe to numpy
data = df_inc.to_numpy()
# sort the rows by sequence numbers in 1st column
idx = np.argsort(data[:, 0])
data = data[idx, :]
# replace nan's (missing values) with 0's
original = data
original[np.isnan(original)] = 0
# delete 1st column (sequence numbers)
original = original[:, 1:]
# one hot encode all features as they are categorical
onehot_encoder = OneHotEncoder(sparse=False)
onehot_encoded = onehot_encoder.fit_transform(original)
explainer = ProtodashExplainer()
# call protodash explainer
# S contains indices of the selected prototypes
# W contains importance weights associated with the selected prototypes
(W, S, _) = explainer.explain(onehot_encoded, onehot_encoded, m=10)
# -
# Display the prototypes along with their computed weights
inc_prototypes = df_inc.iloc[S, :].copy()
# Compute normalized importance weights for prototypes
inc_prototypes["Weights of Prototypes"] = np.around(W/np.sum(W), 2)
inc_prototypes
# #### Explanation:
# The 10 people shown above are the prototypes that Protodash selected as representative of the income questionnaire. Firstly, in the distribution plot for the family finance related questions we saw that there were roughly 5 times as many people without savings in excess of $5000 as with them. Our prototypes have a similar spread, which is reassuring. Also, for monthly family income we get a fairly even spread over the more commonly occurring categories. This is a spot check to see if our prototypes actually match the distribution of values in the dataset.
#
# Looking at the other questions in the questionnaire and the corresponding answers given by the prototypical people above, the social scientist realizes that most people are employed (3rd question) and work for an organization, earning through salary/wages (first two questions). Most of them are also young (5th question) and fit to work (4th question). However, they don't seem to have much in savings (last question). These insights acquired from studying the prototypes could also be conveyed to the appropriate government authorities to inform future public policy decisions.
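# The spot check described above can also be scripted. Below is a minimal sketch (the `weighted_share` helper and the toy frame are hypothetical, not part of AIX360) that computes the weight-adjusted share of prototypes giving a particular answer:

```python
import numpy as np
import pandas as pd

def weighted_share(prototypes, weights, column, value):
    # Fraction of total prototype weight assigned to rows answering `value` on `column`.
    mask = (prototypes[column] == value).to_numpy(dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((w * mask).sum() / w.sum())

# Toy stand-in for inc_prototypes: 2 -> "No savings", 1 -> "Yes"
protos = pd.DataFrame({"Family has savings more than $5000": [2, 2, 2, 2, 1]})
weights = [0.3, 0.2, 0.2, 0.2, 0.1]
print(weighted_share(protos, weights, "Family has savings more than $5000", 2))
```

# This weighted share can then be compared against the raw answer share in `df_inc` to quantify how well the prototypes track the dataset.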
# #### Summarize Gaussian (simulated) data using prototypes
# generate normalized gaussian data X, Y with 100 features and 300 & 4000 observations respectively
(X, Y) = get_Gaussian_Data(100, 300, 4000)
print(X.shape, Y.shape)
(W, S, setValues) = explainer.explain(X, Y, m=5, kernelType='Gaussian', sigma=2)
print(S, W)
Y[S, :]
| examples/protodash/Protodash-CDC.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
#
# # R Primitives
#
# ## Vectors
#
# Use the `typeof` function to explore the data structure's low-level type, and the `class` function to see the object's higher-level structure.
#
#
# + Rmd_chunk_options="vectors"
random_norms <- rnorm(100)
typeof(random_norms)
class(random_norms)
some_letters <- letters[1:10]
typeof(some_letters)
class(some_letters)
int_vector <- c(1L, 2L, 3L)
typeof(int_vector)
class(int_vector)
booleans <- int_vector == 1
typeof(booleans)
class(booleans)
# -
#
# Can you mix types in a vector? R coerces mixed values to a single common type:
#
#
# + Rmd_chunk_options="coercion"
combine_char_num <- c(random_norms, some_letters)
typeof(combine_char_num)
# -
#
# ## Lists
#
# To combine different types, make a list:
#
#
# + Rmd_chunk_options="list"
list_char_num <- list(nums = random_norms, chars = some_letters)
typeof(list_char_num)
lapply(list_char_num, typeof)
# -
#
# ## Matrices
#
# Matrices require all elements to be of the same type; values are recycled to match the dimensions:
#
#
# + Rmd_chunk_options="matrices"
matrix_num <- matrix(rnorm(10), nrow = 5, ncol = 2)
# values are recycled (R warns when the length doesn't divide the matrix size evenly)
matrix_num_reuse <- matrix(rnorm(11), nrow = 6, ncol = 4)
matrix_mix <- matrix(c(rnorm(10), letters[1:10]), nrow = 10, ncol = 2)
typeof(matrix_mix)
# -
#
#
# ## data.frames
#
# To mix elements in a rectangular object/table, use `data.frames`:
#
#
# + Rmd_chunk_options="dfs"
df_char_num <- data.frame(chars = letters[1:10], nums = rnorm(10))
lapply(df_char_num, typeof)
## data.frames must have same length in each column
df_char_num <- data.frame(chars = letters[1:8], nums = rnorm(10))
# -
#
# ## Helpful Functions
#
#
#
# +
# see your workspace
ls()
# check working directory, change directory
getwd()
setwd(getwd())
# create a sequence of numbers
1:10
seq(1, 10)
seq(1, 10, 2)
# get help
?seq
help(seq)
# type tests
is.character(some_letters)
is.numeric(some_letters)
is.atomic(some_letters)
is.atomic(df_char_num)
# remove object from workspace
rm(some_letters)
# remove all visible objects
rm(list = ls())
# -
#
#
| Student-Resources/Labs/Lab0-data-structures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <font color='blue'>Data Science Academy</font>
# # <font color='blue'>Big Data Real-Time Analytics com Python e Spark</font>
#
# # <font color='blue'>Chapter 3</font>
# ## Vectorization
# Although we can use list comprehensions and the map function on NumPy arrays, this may not be the best method; the most efficient way to obtain the same result is through vectorization.
#
# Vectorization lets us apply a function to an entire array at once, instead of applying it element by element (similar to what we do with the map() and filter() functions).
# When working with NumPy and Pandas objects, there are more efficient ways to apply a function to a set of elements, and these will be faster than for loops.
import numpy as np
array1 = np.random.randint(0, 50, 20)
array1
# Creating a function
def calc_func(num):
if num < 10:
return num ** 3
else:
return num ** 2
# For the function to work on a NumPy array object, it needs to be vectorized (calling it directly on the array raises an error)
calc_func(array1)
# ?np.vectorize
# Vectorizing the function
v_calc_func = np.vectorize(calc_func)
type(v_calc_func)
# Applying the vectorized function to the NumPy array1
v_calc_func(array1)
# Applying the map() function without vectorizing calc_func
list(map(calc_func, array1))
# We can use a list comprehension to obtain the same result, without vectorizing the function
[calc_func(x) for x in array1]
# In Python 3, list comprehensions were updated and became much faster and more efficient, since they are widely used in Python programming. Always remember to check the documentation before deciding how to manipulate your data structures.
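# Note: np.vectorize is a convenience wrapper around a Python-level loop, not true vectorization. A genuinely vectorized version of calc_func can be sketched with np.where, which evaluates the condition at the array level:

```python
import numpy as np

def calc_func_where(arr):
    # Array-level equivalent of calc_func: cube values below 10, square the rest.
    return np.where(arr < 10, arr ** 3, arr ** 2)

print(calc_func_where(np.array([2, 9, 10, 25])))
```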
# +
# Vectorized function
# %timeit v_calc_func(array1)
# List comprehension
# %timeit [calc_func(x) for x in array1]
# map() function
# %timeit list(map(calc_func, array1))
# -
# Creating a much larger array
array2 = np.random.randint(0, 100, 20 * 10000)
# +
# Vectorized function
# %timeit v_calc_func(array2)
# List comprehension
# %timeit [calc_func(x) for x in array2]
# map() function
# %timeit list(map(calc_func, array2))
# -
# Using the latest versions of a piece of software can bring compatibility problems with existing applications, but it is also very likely to bring performance improvements and new features.
# # End
# ### Thank you - Data Science Academy - <a href="http://facebook.com/dsacademybr">facebook.com/dsacademybr</a>
| code/dsa/Big Data Real-Time Analytics com Python e Spark/8-Arquivos-Cap03/06-Cap03-Vetorizacao-Funcoes-Operacoes-Arrays-NumPy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Word clouds
# ## Imports and stopwords
from collections import Counter
from wordcloud import WordCloud
import os
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from IPython.display import Image
# Stopwords (same as in s1)
sw = stopwords.words("french")
sw += ["les", "plus", "cette", "fait", "faire", "être", "deux", "comme", "dont", "tout",
"ils", "bien", "sans", "peut", "tous", "après", "ainsi", "donc", "cet", "sous",
"celle", "entre", "encore", "toutes", "pendant", "moins", "dire", "cela", "non",
"faut", "trois", "aussi", "dit", "avoir", "doit", "contre", "depuis", "autres",
"van", "het", "autre", "jusqu", "ville"]
sw = set(sw)
# ## Create a file containing the text of all bulletins from a given year
# Choose a year
year = 1914
# List the files for that year
data_path = '../data'
txt_path = '../data/txt'
txts = [f for f in os.listdir(txt_path) if os.path.isfile(os.path.join(txt_path, f)) and str(year) in f]
txts
# + tags=[]
# Store the content of these files in a list
content_list = []
for txt in txts:
with open(os.path.join(txt_path, txt), 'r') as f:
content_list.append(f.read())
# -
# Count the number of elements (= files) in the list
len(content_list)
# Print the first 200 characters of the first file's content
content_list[0][:200]
# Write all the content to a temporary file
temp_path = '../data/tmp'
if not os.path.exists(temp_path):
os.mkdir(temp_path)
with open(os.path.join(temp_path, f'{year}.txt'), 'w') as f:
f.write(' '.join(content_list))
# +
# Print the file's content and notice the "junk" in it
with open(os.path.join(temp_path, f'{year}.txt'), 'r') as f:
before = f.read()
before[:500]
# -
# ## Clean the file with a cleaning function
# ### Create the cleaning function (adapt as needed)
def clean_text(year, folder=None):
if folder is None:
input_path = f"{year}.txt"
output_path = f"{year}_clean.txt"
else:
input_path = f"{folder}/{year}.txt"
output_path = f"{folder}/{year}_clean.txt"
output = open(output_path, "w", encoding='utf-8')
with open(input_path, encoding='utf-8') as f:
text = f.read()
words = nltk.wordpunct_tokenize(text)
kept = [w.lower() for w in words if len(w) > 2 and w.isalpha() and w.lower() not in sw]
kept_string = " ".join(kept)
output.write(kept_string)
return f'Output has been written in {output_path}!'
# ### Apply the function to the year's complete file
# + tags=[]
clean_text(year, folder=temp_path)
# +
# Check the result
with open(os.path.join(temp_path, f'{year}_clean.txt'), 'r') as f:
after = f.read()
after[:500]
# -
# ## Word cloud
# ### Display the most frequent terms
#
frequencies = Counter(after.split())
print(frequencies.most_common(10))
# ### Create, save and display the word cloud
cloud = WordCloud(width=2000, height=1000, background_color='white').generate_from_frequencies(frequencies)
cloud.to_file(os.path.join(temp_path, f"{year}.png"))
Image(filename=os.path.join(temp_path, f"{year}.png"))
| module3/s2_wordcloud.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:ddu_dirty_mnist]
# language: python
# name: conda-env-ddu_dirty_mnist-py
# ---
# # DDU's Dirty-MNIST
#
# > You'll never want to use MNIST again for OOD or AL.
# [](https://arxiv.org/abs/2102.11582)
# [](https://pypi.org/project/ddu-dirty-mnist/)
# [](https://pytorch.org/)
# [](https://github.com/BlackHC/ddu_dirty_mnist/blob/master/LICENSE)
#
# This repository contains the Dirty-MNIST dataset described in [*Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty*](https://arxiv.org/abs/2102.11582).
#
# The official repository for the paper is at https://github.com/omegafragger/DDU.
#
# If the code or the paper has been useful in your research, please add a citation to our work:
#
# ```
# @article{mukhoti2021deterministic,
# title={Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty},
# author={<NAME> and Kirsch, <NAME>, <NAME>, <NAME> and Gal, Yarin},
# journal={arXiv preprint arXiv:2102.11582},
# year={2021}
# }
# ```
#
# DirtyMNIST is a concatenation of MNIST and AmbiguousMNIST, with 60k sample-label pairs each in the training set.
# AmbiguousMNIST contains generated ambiguous MNIST samples with varying entropies: 6k unique samples with 10 labels each.
#
# 
#
# ---
# ## Install
# `pip install ddu_dirty_mnist`
# ## How to use
# After installing, you get a Dirty-MNIST train or test set just like you would for MNIST in PyTorch.
# +
# gpu
import ddu_dirty_mnist
dirty_mnist_train = ddu_dirty_mnist.DirtyMNIST(".", train=True, download=True, device="cuda")
dirty_mnist_test = ddu_dirty_mnist.DirtyMNIST(".", train=False, download=True, device="cuda")
len(dirty_mnist_train), len(dirty_mnist_test)
# -
# Create `torch.utils.data.DataLoader`s with `num_workers=0, pin_memory=False` for maximum throughput, see [the documentation](01_dataloader.ipynb) for details.
# +
# gpu
import torch
dirty_mnist_train_dataloader = torch.utils.data.DataLoader(
dirty_mnist_train,
batch_size=128,
shuffle=True,
num_workers=0,
pin_memory=False,
)
dirty_mnist_test_dataloader = torch.utils.data.DataLoader(
dirty_mnist_test,
batch_size=128,
shuffle=False,
num_workers=0,
pin_memory=False,
)
# -
# ### Ambiguous-MNIST
#
# If you only care about Ambiguous-MNIST, you can use:
# +
# gpu
import ddu_dirty_mnist
ambiguous_mnist_train = ddu_dirty_mnist.AmbiguousMNIST(".", train=True, download=True, device="cuda")
ambiguous_mnist_test = ddu_dirty_mnist.AmbiguousMNIST(".", train=False, download=True, device="cuda")
ambiguous_mnist_train, ambiguous_mnist_test
# -
# Again, create `torch.utils.data.DataLoader`s with `num_workers=0, pin_memory=False` for maximum throughput, see [the documentation](./dataloader.html) for details.
# +
# gpu
import torch
ambiguous_mnist_train_dataloader = torch.utils.data.DataLoader(
ambiguous_mnist_train,
batch_size=128,
shuffle=True,
num_workers=0,
pin_memory=False,
)
ambiguous_mnist_test_dataloader = torch.utils.data.DataLoader(
ambiguous_mnist_test,
batch_size=128,
shuffle=False,
num_workers=0,
pin_memory=False,
)
# -
# ## Additional Guidance
#
# 1. The current AmbiguousMNIST contains 6k unique samples with 10 labels each. This multi-label dataset gets flattened to 60k samples. The assumption is that ambiguous samples have multiple "valid" labels as they are ambiguous. MNIST samples are intentionally undersampled (in comparison), which benefits AL acquisition functions that can select unambiguous samples.
# 1. Pick your initial training samples (for warm starting Active Learning) from the MNIST half of DirtyMNIST to avoid starting training with potentially very ambiguous samples, which might add a lot of variance to your experiments.
# 1. Make sure to pick your validation set from the MNIST half as well, for the same reason as above.
# 1. Make sure that your batch acquisition size is >= 10 (probably) given that there are 10 multi-labels per sample in Ambiguous-MNIST.
# 1. By default, Gaussian noise with stddev 0.05 is added to each sample to prevent acquisition functions from cheating by discarding "duplicates".
# 1. If you want to split Ambiguous-MNIST into subsets (or Dirty-MNIST within the second ambiguous half), make sure to split by multiples of 10 to avoid splits within a flattened multi-label sample.
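# The splitting advice in the last point can be sketched as follows (`split_index` is a hypothetical helper, not part of the `ddu_dirty_mnist` package):

```python
def split_index(n_samples, train_fraction):
    # Round the split point down to a multiple of 10 so that no flattened
    # multi-label group (10 labels per unique sample) is cut in half.
    cut = int(round(n_samples * train_fraction)) // 10 * 10
    return list(range(cut)), list(range(cut, n_samples))

train_idx, val_idx = split_index(60_000, 0.85)
print(len(train_idx), len(val_idx))  # 51000 9000
```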
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from bokeh.plotting import figure
from bokeh.io import output_notebook,show
import numpy as np
output_notebook()
# # Bayes Theorem Mini-Lab
#
# This lab is a chance to work with Bayes Theorem. The underlying dataset is a collection of SMS (text) messages
# that were labelled as either 'junk' or 'real' as part of an attempt to build a classifier that could filter out
# junk text messages.
#
# The full dataset is in the 'complete.tsv' file -- this is a "tab-separated" file, rather than a "comma-separated"
# file. But we won't be using this file directly. Instead, we will just work with the simpler file 'data.csv'
# which is a comma separated file with two columns. The first column is 0 or 1 depending on whether the corresponding
# text is real (0) or junk (1). The second column is the length of the associated text message.
data = np.genfromtxt('data.csv',delimiter=',',skip_header=1)
# There are 5572 messages in the data.
data.shape
# Let's separate out the junk and real messages to compare them. One way to do that is to create
# an index array. This command creates an array of True/False values based on whether that condition
# is true row-by-row.
data[:,0]==0
# Notice that the first two entries in data are real and the third is junk:
data[:3,:]
# Now we use our index array to extract the real rows.
real = data[data[:,0]==0,:]
real
# There are 4825 real messages in the dataset.
real.shape
# The average length of the real messages is computed like this:
real[:,1].mean(axis=0)
# Now use a similar strategy to extract the junk rows and compute the mean length of the junk emails.
# What do you notice?
junk = data[data[:,0]==1,:]
print(junk.shape)
junk[:,1].mean(axis=0)
# One way to use this information about the lengths is to set a threshold value, of say 100 characters, and
# divide the messages into "long" and "short" messages using this threshold. It seems that long messages
# are more likely to be junk. We can use Bayes theorem to try to quantify this.
#
# Think of checking the length of a message like administering a test. Getting a positive result -- finding a long
# message -- should increase the odds that our message is junk.
#
# From the point of view of Bayes Theorem, we are interested in
#
# $$P(junk|long)$$
#
# which we can compute as
#
# $$
# P(junk|long) = \frac{P(long|junk)P(junk)}{P(long)}
# $$
#
# And while we don't really know these probabilities, we can estimate them by looking at the frequency counts in our data. (This approach is called "Naive" Bayes because we are naively assuming that the frequencies of data in our experiment are the real frequencies).
#
# To get started, we need a $2 \times 2$ table of counts like this:
#
# | | long | short | total |
# |---|---|---|---|
# | junk | | | |
# | real | | | |
# | total | | | |
#
# from which we can compute the conditional probabilities.
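# To make the formula concrete, here is a worked toy example with made-up counts (the actual counts are computed from the data in the cells that follow):

```python
# Hypothetical 2x2 counts (not the actual dataset):
#            long   short
#   junk      500     247
#   real      800    4025
junk_long, junk_short = 500, 247
real_long, real_short = 800, 4025
total = junk_long + junk_short + real_long + real_short

p_junk = (junk_long + junk_short) / total                 # P(junk)
p_long = (junk_long + real_long) / total                  # P(long)
p_long_given_junk = junk_long / (junk_long + junk_short)  # P(long|junk)

# Bayes theorem: P(junk|long) = P(long|junk) * P(junk) / P(long)
p_junk_given_long = p_long_given_junk * p_junk / p_long
print(round(p_junk_given_long, 3))  # 0.385
```

# Note the Bayes result equals junk_long / (junk_long + real_long), the share of long messages that are junk, as expected.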
# These equations compute the number of elements in the (junk, long) and (junk, short) cells, with a threshold of 100 characters defining "Long".
junk_long = junk[junk[:,1]>=100].shape[0]
junk_short=junk[junk[:,1]<100].shape[0]
real_long = real[real[:,1]>=100].shape[0]
real_short = real[real[:,1]<100].shape[0]
# The conditional probability P(junk|long) is the percentage of long texts that are junk.
junk_long/(junk_long+real_long)
# The probability of being junk unconditionally is about 13%.
(junk_long+junk_short)/5572
# This function computes the conditional probability as a function of a threshold, which can vary.
def cp(threshold):
junk_long = junk[junk[:,1]>=threshold].shape[0]
real_long = real[real[:,1]>=threshold].shape[0]
return junk_long/(junk_long+real_long)
# Setting the threshold to 130 makes P(junk|long) maximal.
x=np.arange(200)
y=np.array([cp(i) for i in x])
f=figure()
f.line(x=x,y=y)
show(f)
# Of course we are actually interested in detecting *real* messages. About 85% of our messages are real, so if we just say everything is real, we are right 85% of the time. Suppose we get a short message.
#
# What is the probability that it is real?
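# As a hint for this last question: P(real|short) is simply the share of short messages that are real. A self-contained sketch with made-up counts (substitute the counts computed earlier in the lab):

```python
# Hypothetical counts of short messages (not the actual dataset):
junk_short, real_short = 247, 4025
p_real_given_short = real_short / (junk_short + real_short)
print(round(p_real_given_short, 3))  # 0.942
```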
| Probability/lab/BayesTheoremLabSolutions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="2muvLzlqdcva"
# 
# + [markdown] id="A2A9se0Bdcvb"
# [](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/6.Clinical_Context_Spell_Checker.ipynb)
# + [markdown] id="orznscn3dcvc"
# <H1> 6. Context Spell Checker - Medical </H1>
#
# + id="U7bQfnRUdcvd"
import json, os
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
# Defining license key-value pairs as local variables
locals().update(license_keys)
# Adding license key-value pairs to environment variables
os.environ.update(license_keys)
# + id="ZXEqkSp9RFkN"
# Installing pyspark and spark-nlp
# ! pip install --upgrade -q pyspark==3.1.2 spark-nlp==$PUBLIC_VERSION
# Installing Spark NLP Healthcare
# ! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET
# + id="H2EWnyIOQZPI" colab={"base_uri": "https://localhost:8080/"} outputId="9cd23166-b609-450a-db73-15f281083538"
import json
import os
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
import sparknlp
params = {"spark.driver.memory":"16G",
"spark.kryoserializer.buffer.max":"2000M",
"spark.driver.maxResultSize":"2000M"}
spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
print ("Spark NLP Version :", sparknlp.version())
print ("Spark NLP_JSL Version :", sparknlp_jsl.version())
# + colab={"base_uri": "https://localhost:8080/"} id="l70_9DOgdcvz" outputId="ddb3b4b3-e6a8-4701-c8cc-d4684cacbdc8"
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = RecursiveTokenizer()\
.setInputCols(["document"])\
.setOutputCol("token")\
.setPrefixes(["\"", "(", "[", "\n"])\
.setSuffixes([".", ",", "?", ")","!", "'s"])
spellModel = ContextSpellCheckerModel\
.pretrained('spellcheck_clinical', 'en', 'clinical/models')\
.setInputCols("token")\
.setOutputCol("checked")
# + id="XyqbEdoPdcv-"
pipeline = Pipeline(
stages = [
documentAssembler,
tokenizer,
spellModel
])
empty_ds = spark.createDataFrame([[""]]).toDF("text")
lp = LightPipeline(pipeline.fit(empty_ds))
# + [markdown] id="49DMo2sQdcwC"
# Ok!, at this point we have our spell checking pipeline as expected. Let's see what we can do with it, see these errors,
#
# _
# __Witth__ the __hell__ of __phisical__ __terapy__ the patient was __imbulated__ and on posoperative, the __impatient__ tolerating a post __curgical__ soft diet._
#
# _With __paint__ __wel__ controlled on __orall__ pain medications, she was discharged __too__ __reihabilitation__ __facilitay__._
#
# _She is to also call the __ofice__ if she has any __ever__ greater than 101, or __leeding__ __form__ the surgical wounds._
#
# _Abdomen is __sort__, nontender, and __nonintended__._
#
# _No __cute__ distress_
#
# Note that some of the errors are valid English words; only by considering the context can the right choice be made.
# + colab={"base_uri": "https://localhost:8080/"} id="K2BuhiZNHGhH" outputId="0bb60a92-e58a-4347-ba60-6502210827f1"
example = ["Witth the hell of phisical terapy the patient was imbulated and on posoperative, the impatient tolerating a post curgical soft diet.",
"With paint wel controlled on orall pain medications, she was discharged too reihabilitation facilitay.",
"She is to also call the ofice if she has any ever greater than 101, or leeding form the surgical wounds.",
"Abdomen is sort, nontender, and nonintended.",
"Patient not showing pain or any wealth problems.",
"No cute distress"
]
for pairs in lp.annotate(example):
print (list(zip(pairs['token'],pairs['checked'])))
| tutorials/Certification_Trainings/Healthcare/6.Clinical_Context_Spell_Checker.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests # to get data from link
from bs4 import BeautifulSoup # to scrap data
url="https://www.goodreads.com/quotes/tag/{}?page={}"
emotions = ['love','romance','happiness','inspirational','faith','motivational', 'religion','spirituality','god','death', 'science', 'books','knowledge']
complete = url.format(emotions[0],1)
complete
def get_quotes(complete):
'''
    Scrape the quotes from a page and return them as a list of strings.
'''
data = requests.get(complete)
soup = BeautifulSoup(data.text)
divs = soup.find_all('div', attrs={'class' : 'quoteText'})
quotes = [div.text.strip().split('\n')[0][1:-1] for div in divs]
return quotes
quotes = get_quotes(complete)
quotes[12]
X,y = [], []
# storing scraped data in the form of arrays
for emotion in emotions:
for i in range(1,12):
complete =url.format(emotion, i)
quotes = get_quotes(complete)
X.extend(quotes)
y.extend([emotion] * len(quotes))
print(f'Processed page {i} for {emotion}')
y[400]
len(X)
len(y)
# preparing dataframe out of scraped data
import pandas as pd
df = pd.DataFrame(list(zip(y, X)), columns=['emotion', 'quotes'])
df.to_csv('emotion.csv')
# making dictionary
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer(max_features=800)
from nltk.tokenize import RegexpTokenizer
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords
tokenizer = RegexpTokenizer(r'\w+')
sw = set(stopwords.words('english'))
ps = PorterStemmer()
# +
def getStemmedQuote(quote):
quote = quote.lower()
# tokenize
tokens = tokenizer.tokenize(quote)
# remove stopwords
new_tokens = [token for token in tokens if token not in sw]
stemmed_token = [ps.stem(token) for token in new_tokens]
clean_quote = ' '.join(stemmed_token)
return clean_quote
def getStemmedQuotes(quotes):
d = []
for quote in quotes:
d.append(getStemmedQuote(quote))
return d
# -
x = getStemmedQuotes(X)
vect.fit(x)
len(vect.vocabulary_)
x_mod = vect.transform(x).todense() # returns matrix form
x_mod[2]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
    x_mod, y, test_size=0.33, random_state=42)
from sklearn.naive_bayes import BernoulliNB
# naive bayes model
model = BernoulliNB()
model.fit(x_train, y_train)
model.score(x_test, y_test)
import pickle
pickle.dumps(model)
# +
Pkl_Filename = "model.pkl"
with open(Pkl_Filename, 'wb') as file:
pickle.dump(model, file)
# -
# To predict
line = "You're just too good to be true can't take my eyes off you you'd be like heaven to touch I wanna hold you so much I love you baby"
text= "When someone shows you who they are, believe them the first time"
X_vec = vect.transform([text]).todense()
model.predict(X_vec)
| emotion_detector_part/Factor_extraction_model_preparation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from utils import *
import numpy as np
import sklearn.datasets
from sklearn import metrics
import tensorflow as tf
from sklearn.model_selection import train_test_split
import time
trainset = sklearn.datasets.load_files(container_path = 'data', encoding = 'UTF-8')
trainset.data, trainset.target = separate_dataset(trainset,1.0)
print (trainset.target_names)
print (len(trainset.data))
print (len(trainset.target))
ONEHOT = np.zeros((len(trainset.data),len(trainset.target_names)))
ONEHOT[np.arange(len(trainset.data)),trainset.target] = 1.0
train_X, test_X, train_Y, test_Y, train_onehot, test_onehot = train_test_split(trainset.data,
trainset.target,
ONEHOT, test_size = 0.2)
concat = ' '.join(trainset.data).split()
vocabulary_size = len(list(set(concat)))
data, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size)
print('vocab from size: %d'%(vocabulary_size))
print('Most common words', count[4:10])
print('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]])
GO = dictionary['GO']
PAD = dictionary['PAD']
EOS = dictionary['EOS']
UNK = dictionary['UNK']
class Model:
def __init__(self, size_layer, num_layers, embedded_size,
dict_size, dimension_output, maxlen,
grad_clip=5.0, kernel_sizes=[3,3,3]):
n_filters = [25 * k for k in kernel_sizes]
def cells(reuse=False):
return tf.nn.rnn_cell.LSTMCell(size_layer,initializer=tf.orthogonal_initializer(),reuse=reuse)
def add_highway(x, i):
size = sum(n_filters)
reshaped = tf.reshape(x, [-1, size])
H = tf.layers.dense(reshaped, size, tf.nn.relu, name='activation'+str(i))
T = tf.layers.dense(reshaped, size, tf.sigmoid, name='transform_gate'+str(i))
C = tf.subtract(1.0, T)
highway_out = tf.add(tf.multiply(H, T), tf.multiply(reshaped, C))
return tf.reshape(highway_out, [-1, 1, size])
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.float32, [None, dimension_output])
encoder_embeddings = tf.Variable(tf.random_uniform([dict_size, embedded_size], -1, 1))
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
encoder_embedded = tf.reshape(encoder_embedded,[-1, maxlen, embedded_size])
parallels = []
for i, (n_filter, kernel_size) in enumerate(zip(n_filters, kernel_sizes)):
conv_out = tf.layers.conv1d(inputs = encoder_embedded,
filters = n_filter,
kernel_size = kernel_size,
activation = tf.tanh,
name = 'conv1d'+str(i))
pool_out = tf.layers.max_pooling1d(inputs = conv_out,
pool_size = conv_out.get_shape().as_list()[1],
strides = 1)
parallels.append(tf.reshape(pool_out, [-1, n_filter]))
pointer = tf.concat(parallels,1)
for i in range(2):
pointer = add_highway(pointer, i)
rnn_cells = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)])
outputs, _ = tf.nn.dynamic_rnn(rnn_cells, pointer, dtype = tf.float32)
W = tf.get_variable('w',shape=(size_layer, dimension_output),initializer=tf.orthogonal_initializer())
b = tf.get_variable('b',shape=(dimension_output),initializer=tf.zeros_initializer())
self.logits = tf.matmul(outputs[:, -1], W) + b
self.cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = self.logits, labels = self.Y))
params = tf.trainable_variables()
gradients = tf.gradients(self.cost, params)
clipped_gradients, _ = tf.clip_by_global_norm(gradients, grad_clip)
self.optimizer = tf.train.AdamOptimizer().apply_gradients(zip(clipped_gradients, params))
correct_pred = tf.equal(tf.argmax(self.logits, 1), tf.argmax(self.Y, 1))
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
size_layer = 128
num_layers = 2
embedded_size = 128
dimension_output = len(trainset.target_names)
learning_rate = 1e-3
maxlen = 50
batch_size = 128
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(size_layer,num_layers,embedded_size,vocabulary_size+4,dimension_output,maxlen)
sess.run(tf.global_variables_initializer())
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 5, 0, 0, 0
while True:
lasttime = time.time()
if CURRENT_CHECKPOINT == EARLY_STOPPING:
print('break epoch:%d\n'%(EPOCH))
break
train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
for i in range(0, (len(train_X) // batch_size) * batch_size, batch_size):
batch_x = str_idx(train_X[i:i+batch_size],dictionary,maxlen)
acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer],
feed_dict = {model.X : batch_x, model.Y : train_onehot[i:i+batch_size]})
train_loss += loss
train_acc += acc
for i in range(0, (len(test_X) // batch_size) * batch_size, batch_size):
batch_x = str_idx(test_X[i:i+batch_size],dictionary,maxlen)
acc, loss = sess.run([model.accuracy, model.cost],
feed_dict = {model.X : batch_x, model.Y : test_onehot[i:i+batch_size]})
test_loss += loss
test_acc += acc
train_loss /= (len(train_X) // batch_size)
train_acc /= (len(train_X) // batch_size)
test_loss /= (len(test_X) // batch_size)
test_acc /= (len(test_X) // batch_size)
if test_acc > CURRENT_ACC:
print('epoch: %d, pass acc: %f, current acc: %f'%(EPOCH,CURRENT_ACC, test_acc))
CURRENT_ACC = test_acc
CURRENT_CHECKPOINT = 0
else:
CURRENT_CHECKPOINT += 1
print('time taken:', time.time()-lasttime)
print('epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'%(EPOCH,train_loss,
train_acc,test_loss,
test_acc))
EPOCH += 1
logits = sess.run(model.logits, feed_dict={model.X:str_idx(test_X,dictionary,maxlen)})
print(metrics.classification_report(test_Y, np.argmax(logits,1), target_names = trainset.target_names))
| text-classification/21.lstm-cnn-rnn-highway.ipynb |