code stringlengths 38 801k | repo_path stringlengths 6 263 |
|---|---|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: julia-1.1
# kernelspec:
# display_name: Julia 1.2.0
# language: julia
# name: julia-1.2
# ---
# # Introduction
# ## Who are we?
#
# Lecturer: MAJ IMM <NAME> / D30.20 / [<EMAIL>]
#
# Assistant: CPN <NAME> / D30.17 / [<EMAIL>]
# ## Why Modelling and Simulation
#
# - What is modelling?
#
# - What is simulation?
#
# Reality is often too complex to calculate ...
# ## Documentations
#
# All slides can be found on github: [https://github.com/BenLauwens/ES313.jl.git].
# ## Schedule
#
# ### Theory
#
# - 27/08: Cellular Automaton + Game of Life
# - 03/09: Physical Modelling + Self-Organization
# - 17/09: Optimisation Techniques
# - 24/09: Linear Programming
# - 01/10: Applications of Linear Programming
# - 15/10: Introduction to Discrete Event Simulation
# - 22/10: Process Driven DES: SimJulia
# - 29/10: Applications with SimJulia
#
# ### Practice
#
# - 28/08: Visualisation I
# - 04/09: Visualisation II
# - 10/09: Cellular Automaton + Game of Life
# - 11/09: Physical Modelling + Self-Organization I
# - 18/09: Physical Modelling + Self-Organization II
# - 25/09: Optimisation Techniques I
# - 02/10: Optimisation Techniques II
# - 08/10: Linear Programming I
# - 09/10: Linear Programming II
# - 16/10: Linear Programming III
# - 23/10: Introduction to Discrete Event Simulation I
# - 30/10: Introduction to Discrete Event Simulation II
# - 05/11: Process Driven DES: SimJulia
# - 12/11: Applications with SimJulia
#
# ### Project
#
# - 13/11: List of projects available
# - we are available during contact hours
# - 19/11: obligatory meeting: understanding of the problem
# - 10/12: obligatory meeting: progress
# ## Evaluation
#
# Test: 2nd week of Oct - 2Hr
# - Visualisation
# - Cellular Automaton + Game of Life
# - Physical Modelling + Self-Organization
#
# Examen: Project with Oral Defense
# - Optimisation Techniques
# - Linear Programming
# - Discrete Event Simulation
| Lectures/Lecture 0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Iterators
# ## Topics
#
# - Stream larger-than-memory data through a pipeline
# - Composable thanks to the iterator protocol
# My favorite "feature" of pandas is that it's written in Python.
# Python has great language-level features for handling streams of data
# that may not fit in memory.
# This can be a useful pre-processing step to reading the data into a DataFrame or
# NumPy array.
# You can get quite far using just the builtin data structures as <NAME> proves in [this PyData keynote](https://www.youtube.com/watch?v=lyDLAutA88s).
# +
import os
import gzip
from pathlib import Path
from itertools import islice, takewhile
import numpy as np
import pandas as pd
import seaborn as sns
import dask.dataframe as dd
from toolz import partition_all, partitionby
import matplotlib.pyplot as plt
# -
# %matplotlib inline
pd.options.display.max_rows = 10
sns.set(context='talk')
plt.style.use("default")
# ## Beer Reviews Dataset
#
# - A review is a list of lines
# - Each review line is formated like `meta/field: value`
# - Reviews are separated by blank lines (i.e. the line is just `'\n'`)
#
# Stanford has a [dataset on beer reviews](https://snap.stanford.edu/data/web-BeerAdvocate.html). The raw file is too large for me to include, but I split off a couple subsets for us to work with.
#
# Pandas can't read this file natively, but we have Python!
# We'll use Python to parse the raw file and tranform it into a tabular format.
with gzip.open("data/beer-raw-small.txt.gz", "r") as f:
print(f.read(1500).decode('utf-8'))
# The full compressed raw dataset is about 500MB, so reading it all into memory might not be pleasent (we're working with a small subset that would fit in memory, but pretend it didn't).
# Fortunately, Python's iterator protocol and generators make dealing with large streams of data pleasent.
# ## Developing a solution
#
# Let's build a solution together. I'll provide some guidance as we go along.
# Get a handle to the data
f = gzip.open("data/beer-raw-small.txt.gz", "rt")
f.readline()
# ## Parsing Tasks
#
# 1. split the raw text stream into individual reviews
# 2. transform each individual review into a data container
# 3. combine a chunk of transformed individual reviews into a collection
# 4. store the chunk to disk
# **Step 1**: Split the text stream
#
# We'll use `toolz.partitionby`. It takes an iterator like `f`, and splits it according to `func`.
f.seek(0) # Make the cell idempotent
split = partitionby(lambda x: x == '\n', f)
a, b = next(split), next(split)
a
b
# So we've gone from
#
# ```python
# [
# "beer/name: <NAME>\n",
# ...
# "review/text: ...\n",
# "\n",
# "beer/name: Beer 2\n",
# "...",
# "review/text: ...\n",
# "\n",
# ]
# ```
#
# To
#
# ```python
# [
# (
# "beer/name <NAME>\n",
# ...
# "review/text: ...\n"
# ),
# ("\n",),
# (
# "beer/name: Beer 2\n",
# ...
# "review/text: ...\n"
# ),
# ("\n",),
# ...
# ]
# ```
# So we can clean up those newlines with a generator expression:
f.seek(0)
reviews = (x for x in partitionby(lambda x: x == "\n", f)
if x != ("\n",))
reviews
# **Step 2**: Parse each review
#
# Let's grab out the first review, and turn it into something a bit nicer than a tuple of strings.
f.seek(0); # make the cell idempotent
review = next(partitionby(lambda x: x == '\n', f))
review
# <div class="alert alert-success" data-title="Format Review">
# <h1><i class="fa fa-tasks" aria-hidden="true"></i> Exercise: Format Review</h1>
# </div>
# <p>Write a function `format_review` that converts an item like `review` into a dict</p>
# It will have one entry per line, where the are the stuff to the left of the colon and the values are the stuff to the right.
# For example, the first line would be
#
# `'beer/name: <NAME>\n',` => `'beer/name': '<NAME>'`
#
# Make sure to clean up the line endings too.
#
# - Hint: Check out the [python string methods](https://docs.python.org/3/library/stdtypes.html#string-methods)
# You can check your function against `expected` by evaluating the next cell.
# If you get a failure, adjust your `format_review` until it passes.
# +
import unittest
from typing import List, Dict
f.seek(0); # make the cell idempotent
review = next(partitionby(lambda x: x == '\n', f))
def format_review(review: List[str]) -> Dict[str, str]:
"""Your code goes below"""
formatted = ...
return formatted
class TestFormat(unittest.TestCase):
maxDiff = None
def test_format_review(self):
result = format_review(review)
expected = {
'beer/ABV': '5.00',
'beer/beerId': '47986',
'beer/brewerId': '10325',
'beer/name': '<NAME>',
'beer/style': 'Hefeweizen',
'review/appearance': '2.5',
'review/aroma': '2',
'review/overall': '1.5',
'review/palate': '1.5',
'review/profileName': 'stcules',
'review/taste': '1.5',
'review/text': 'A lot of foam. But a lot.\tIn the smell some banana, and then lactic and tart. Not a good start.\tQuite dark orange in color, with a lively carbonation (now visible, under the foam).\tAgain tending to lactic sourness.\tSame for the taste. With some yeast and banana.\t\t',
'review/time': '1234817823'
}
self.assertEqual(result, expected)
suite = unittest.TestLoader().loadTestsFromModule(TestFormat())
unittest.TextTestRunner().run(suite)
# -
# %load solutions/groupby_format_review.py
# Notice that optional argument to split, which controls the number of splits made; If a review text had contained a literal `': '`, we'd be in trouble since it'd get split again.
#
# Make sure you executed the above solution cell twice (first to load, second to execute) as we'll be using that `format_review` function down below
# ## To a DataFrame
#
# Assuming we've processed many reviews into a list, we'll then build up a DataFrame.
# +
r = [format_review(review)] # imagine a list of many reviews
col_names = {
'beer/ABV': 'abv',
'beer/beerId': 'beer_id',
'beer/brewerId': 'brewer_id',
'beer/name': 'beer_name',
'beer/style': 'beer_style',
'review/appearance': 'review_appearance',
'review/aroma': 'review_aroma',
'review/overall': 'review_overall',
'review/palate': 'review_palate',
'review/profileName': 'profile_name',
'review/taste': 'review_taste',
'review/text': 'text',
'review/time': 'time'
}
df = pd.DataFrame(r)
numeric = ['abv', 'review_appearance', 'review_aroma',
'review_overall', 'review_palate', 'review_taste']
df = (df.rename(columns=col_names)
.replace('', np.nan))
df[numeric] = df[numeric].astype(float)
df['time'] = pd.to_datetime(df.time.astype(int), unit='s')
df
# -
# Again, writing that as a function:
def as_dataframe(reviews):
df = pd.DataFrame(list(reviews))
col_names = {
'beer/ABV': 'abv',
'beer/beerId': 'beer_id',
'beer/brewerId': 'brewer_id',
'beer/name': 'beer_name',
'beer/style': 'beer_style',
'review/appearance': 'review_appearance',
'review/aroma': 'review_aroma',
'review/overall': 'review_overall',
'review/palate': 'review_palate',
'review/profileName': 'profile_name',
'review/taste': 'review_taste',
'review/text': 'text',
'review/time': 'time'
}
order = ['brewer_id', 'beer_id', 'beer_name',
'beer_style', 'abv',
'profile_name', 'time',
'review_appearance', 'review_aroma',
'review_palate', 'review_taste',
'review_overall',
'text']
df = df.rename(columns=col_names)[order]
return df
# ## Full pipeline
#
# 1. `file -> review_lines : List[str]`
# 2. `review_lines -> reviews : Dict[str, str]`
# 3. `reviews -> DataFrames`
# 4. `DataFrames -> CSV`
# The full pipeline would look something like:
# +
BATCH_SIZE = 100 # Number of reviews to process per chunk
# Intentionally small for demonstration
p = Path("data/beer-raw-small.txt.gz")
with gzip.open(p, "rt") as f:
review_lines_and_newlines = partitionby(lambda x: x == '\n', f)
# so filter out the newlines
review_lines = (x for x in review_lines_and_newlines if x != ("\n",))
# generator expression to go from List[str] -> Dict[str, str]
reviews = (format_review(x) for x in review_lines)
# `reviews` yields one dict per review.
# Won't fit in memory, so do `BATCH_SIZE` per chunk
chunks = partition_all(BATCH_SIZE, reviews)
dfs = (as_dataframe(chunk) for chunk in chunks)
p.parent.joinpath("beer").mkdir(exist_ok=True)
# the first time we read from disk
for i, df in enumerate(dfs):
df.to_csv("data/beer/chunk_%s.csv.gz" % i, index=False,
compression="gzip")
print(i, end='\r')
# -
#
# This runs comfortably in memory. At any given time, we only have `BATCH_SIZE` reviews in memory.
# ## Brief Aside on [Dask](http://dask.pydata.org/en/latest/)
#
# > Dask is a flexible parallel computing library for analytic computing.
# The original dataset is in random order, but I wanted to select an interesting subset for us to work on: all the reviews by the top 100 reviewers.
#
# You might know enough pandas now to figure out the top-100 reviewers by count.
# Do a `value_counts` , select the top 100, then select the index.
#
# ```python
# top_reviewers = df.profile_name.value_counts().nlargest(100).index
# ```
#
#
# Recall that `value_counts` will be a `Series` where the index is each unique `profile_name` and the values is the count of reviews for that profile.
# We use `nlargets(100)` to get the 100 largest values, and `.index` to get the actual profile names.
#
# With that we could do an `isin` like
#
# ```python
# subset = df[df.profile_name.isin(top_reviewers)
# ```
#
# To get the subset of reviews that came from the 100 most active reviewers.
#
# But that assumes we have a `df` containing the full dataset in memory.
# My laptop can't load the entire dataset though (recall that we're working with a subset today).
#
# It wouldn't be *that* hard to write a custom solution in python or pandas using chunking like we did up above.
# We'd split our task into parts
#
# - read a chunk
# - compute `chunk.profile_name.value_counts()`
# - store that intermediate `value_counts` in a global container
# 
# Once we've processed each chunk, our final steps are to
#
# - merge each `value_counts` chunk by summing
# - filter to the top 100
#
# This pattern of processing chunks independently (map) and combining the results into a smaller output (reduce) is common, and doing it manually gets old.
# Dask can help out here.
# ## Collections, Tasks, Schedulers
#
# 
# Dask has several components, so it can be hard to succinctly describe the library.
# Right now, we'll view it as providing "big dataframes" (one of its "collections").
import dask.dataframe as dd
df = dd.read_csv("data/beer/chunk*.csv.gz", compression="gzip", blocksize=None,
parse_dates=['time'])
df
# That API should look familiar to you, now that you're experienced pandas users.
# We swap out `pd` for `dd`, and stuff mostly just works.
# Occasionally, you'll have a `dask`-specific thing like `blocksize` (number of bytes per smaller dataframe) that don't apply to pandas, which assumes things fit in memory.
# Now that we have our "big dataframe", we can do normal pandas operations like `.value_counts`:
reviews_per_person = df.profile_name.value_counts()
reviews_per_person
# This is a `dask.Series`, the dask analog to a `dask.dataframe`.
# One important point: we haven't actually done any real work yet.
# The operations on a `dask.dataframe` is really just a pandas-like API for
# manipulating the directed acylic graph (DAG) of operations, and a bit of metadata about those operations.
# We can visualize that DAG with the `.visualize` method. I've done it ahead of
# time since it uses `graphviz`, which can be a pain to install on every system.
# ```python
# reviews_per_person.visualize(rankdir="LR")
# ```
#
# 
# Let's get the 100 most active reviewers. There's a couple ways to do this.
#
# 1. Sort `reviews_per_person`, then take the the last 100
# 2. Scan `reviews_per_person`, keeps the 100 largest you've seen
#
# For large datasets, 2 is *much* easier / faster. It's implemented as `.nlargest` on pandas and dask Series.
top_reviewers = reviews_per_person.nlargest(100).index
top_reviewers
# At this point, we still just have a dask object (a DAG of operations, to be computed later). To actually get a concrete value, you hand the DAG off to a *scheduler*, using `compute`.
top_reviewers.compute()
# `dask.dataframe` uses the threaded scheduler, so the actual computation is done in parallel. You could also use the `multiprocessing` scheduler (if you have operations that hold the GIL), or the `distributed` scheduler if you have a cluster handy.
# We can use `top_reviewers` as a boolean mask, just like in regular pandas.
#
# ```python
# >>> df[df.profile_name.isin(top_reviewers.compute())].to_parquet(
# "data/subset.parq", compression='gzip'
# )
# ```
# ## Back to pandas
#
# I've provided the reviews by the top 100 reviewers.
# We'll use it for talking about groupby.
df = pd.read_parquet("data/subset.parq")
df.info()
# ## Aside: Namespaces
#
# Pandas has been expanding its use of namespaces (or accessors) on `DataFrame` to group together related methods. This also limits the number of methods direclty attached to `DataFrame` itself, which can be overwhelming.
#
# Currently, we have these namespaces:
#
# - `.str`: defined on `Series` and `Index`es containing strings (object dtype)
# - `.dt`: defined on `Series` with `datetime` or `timedelta` dtype
# - `.cat`: defined on `Series` and `Indexes` with `category` dtype
# - `.plot`: defined on `Series` and `DataFrames`
# <div class="alert alert-success" data-title="Reviews by Hour">
# <h1><i class="fa fa-tasks" aria-hidden="true"></i> Exercise: Reviews by Hour</h1>
# </div>
#
# <p>Make a barplot of the count of reviews by hour of the day.</p>
# - Hint: Use the `.dt` namespace to get the `hour` component of a `datetime`
# - Hint: We've seen `Series.value_counts` for getting the count of each value
# - Hint: Use `.sort_index` to make sure the data is ordered by hour, not count
# - Hint: Use the [`.plot`](http://pandas.pydata.org/pandas-docs/stable/api.html#plotting) namespace to get a `bar` chart
# %load solutions/groupby_03.py
# <div class="alert alert-success" data-title="Pale Ales">
# <h1><i class="fa fa-tasks" aria-hidden="true"></i> Exercise: Pale Ales</h1>
# </div>
# <p>
# Make a variable `pale_ales` that filters `df` to just rows where `beer_style` contains the string `'pale ale'` (ignoring case)
# </p>
# - Hint: Use the `df.beer_style.str` namespace and find a method for checking whether a string contains another string.
# %load solutions/groupby_04.py
# # Groupby
# Groupby operations come up in a lot of contexts.
# At its root, groupby about doing an operation on many subsets of the data, each of which shares something in common.
# The components of a groupby operation are:
# ## Components of a groupby
#
# 1. **split** a table into groups
# 2. **apply** a function to each group
# 3. **combine** the results into a single DataFrame or Series
# In pandas the `split` step looks like
#
# ```python
# df.groupby( grouper )
# ```
#
# `grouper` can be many things
#
# - Series (or string indicating a column in `df`)
# - function (to be applied on the index)
# - dict : groups by *values*
# - `levels=[ names of levels in a MultiIndex ]`
# ## Split
#
# Break a table into smaller logical tables according to some rule
gr = df.groupby("beer_name")
gr
# We haven't really done any actual work yet, but pandas knows what it needs to know to break the larger `df` into many smaller pieces, one for each distinct `beer_name`.
# ## Apply & Combine
#
# To finish the groupby, we apply a method to the groupby object.
# +
review_cols = ['review_appearance', 'review_aroma', 'review_overall',
'review_palate', 'review_taste']
df.groupby('beer_name')[review_cols].agg('mean')
# -
# In this case, the function we applied was `'mean'`.
# Pandas has implemented cythonized versions of certain common methods like mean, sum, etc.
# You can also pass in regular functions like `np.mean`.
#
# In terms of split, apply, combine, split was `df.groupby('beer_name')`.
# We apply the `mean` function by passing in `'mean'`.
# Finally, by using the `.agg` method (for aggregate) we tell pandas to combine the results with one output row per group.
#
# You can also pass in regular functions like `np.mean`.
df.groupby('beer_name')[review_cols].agg(np.mean).head()
# Finally, [certain methods](http://pandas.pydata.org/pandas-docs/stable/api.html#id35) have been attached to `Groupby` objects.
df.groupby('beer_name')[review_cols].mean()
# <div class="alert alert-success" data-title="Highest Variance">
# <h1><i class="fa fa-tasks" aria-hidden="true"></i> Exercise: Highest Variance</h1>
# </div>
#
# <p>Find the `beer_style`s with the greatest variance in `abv`.</p>
#
# - hint: `.var` calculates the varaince and is available on `GroupBy` objects like `gr.abv`.
# - hint: use `.sort_values` to sort a Series by the values (it took us a while to come up with that name)
# %load solutions/groupby_abv.py
# ## `.agg` output shape
#
# The output shape is determined by the grouper, data, and aggregation
#
# - Grouper: Controls the output index
# * single grouper -> Index
# * array-like grouper -> MultiIndex
# - Subject (Groupee): Controls the output data values
# * single column -> Series (or DataFrame if multiple aggregations)
# * multiple columns -> DataFrame
# - Aggregation: Controls the output columns
# * single aggfunc -> Index in the colums
# * multiple aggfuncs -> MultiIndex in the columns (Or 1-D Index if groupee is 1-D)
#
# We'll go into MultiIndexes in a bit, but for know, think of them as regular Indexes with multiple levels (columns).
# single grouper, single groupee, single aggregation
df.groupby('beer_style').review_overall.agg('mean')
# multiple groupers, multiple groupee, single aggregation
df.groupby(['brewer_id', 'beer_name'])[review_cols].agg(['mean', 'min', 'max', 'std', 'count'])
# <div class="alert alert-success" data-title="Rating by length">
# <h1><i class="fa fa-tasks" aria-hidden="true"></i> Exercise: Rating by length</h1>
# </div>
#
# <p>Plot the relationship between review length (number of characters) and average `reveiw_overall`.</p>
#
# - Hint: use `.plot(style='k.')`
# - We've grouped by columns so far, you can also group by any series with the same length
# %load solutions/groupby_00.py
# <div class="alert alert-success" data-title="Reviews by Length">
# <h1><i class="fa fa-tasks" aria-hidden="true"></i> Exercise: Reviews by Length</h1>
# </div>
#
# <p>Find the relationship between review length (number of **words** and average `reveiw_overall`.)</p>
#
# - Hint: You can pass a [regular expression](https://docs.python.org/3/howto/regex.html#matching-characters) to any of the `.str` methods.
# %load solutions/groupby_00b.py
# <div class="alert alert-success" data-title="Rating by number of Reviews">
# <h1><i class="fa fa-tasks" aria-hidden="true"></i> Exercise: Rating by number of Reviews</h1>
# </div>
#
# <p>Find the relationship between the number of reviews for a beer and the average `review_overall`.</p>
#
# %load solutions/groupby_01.py
# ## Transform
#
# A *transform* is a function whose output is the same shape as the input.
# Recall that a groupby has three steps: split, apply, combine.
# So far, all of the functions we've applied have been *aggregations*: the rule for "combine" is one row per group.
#
# You can use `Groupby.transform` when you have an operation that should be done *groupwise*, but the result should be the same shape.
# For example, suppose we wanted to de-mean each reviewer's scores by their average score.
# Define demean(v: array) -> array
def demean(v):
return v - v.mean()
# Just calling `demean` on the entire Series will noramilze by the *global* average.
demean(df.review_overall)
# Now, let's de-mean each individual's reviews by their own average.
# This could be useful if, for example, you were building a recommendation system.
# A rating of 4 from someone's whose average is 2 is in some sense more meaningful that a 4 from someone who always gives 4s.
normalized = df.groupby("profile_name")[review_cols].transform(demean)
normalized.head()
# We used `.transform` because the desired output was the same shape as the input.
# Just like `.agg` informs pandas that you want `1 input group → 1 output row`, the `.transform` method informs pandas that you want `1 input row → 1 output row`.
#
# `.transform` operates on each column independently.
# <div class="alert alert-success" data-title="Personal Trend?">
# <h1><i class="fa fa-tasks" aria-hidden="true"></i> Exercise: Personal Trend?</h1>
# </div>
#
# <p>Do reviewer's `review_overall` trend over a person's time reviewing?</p>
#
# Hint: Need an indictor that tracks which review this is for that person. That is, we need a cumulative count of reviews per person.
#
# Implement `cumcount` to match the example
# +
def cumcount(s):
"""Returns an array with counting up to the length of 's'
Examples
--------
>>> cumcount([1, 2, 2, 1, 2])
array([0, 1, 2, 4])
"""
return ...
cumcount([1, 2, 2, 1, 2])
# -
# Now make a variable `order` that has which review it was for that person.
# For example, if the raw reviews were like
#
# <table>
# <thead>
# <th>Reviewer</th>
# <th>Review Overall</th>
# </thead>
# <tbody>
# <tr>
# <td>Alice</td>
# <td>3</td>
# </tr>
# <tr>
# <td>Alice</td>
# <td>3</td>
# </tr>
# <tr>
# <td>Bob</td>
# <td>2</td>
# </tr>
# <tr>
# <td>Alice</td>
# <td>4</td>
# </tr>
# <tr>
# <td>Bob</td>
# <td>5</td>
# </tr>
# </tbody>
# </table>
#
# The `order` table would be
#
#
# <table>
# <thead>
# <th>Reviewer</th>
# <th>Order</th>
# </thead>
# <tbody>
# <tr>
# <td>Alice</td>
# <td>0</td>
# </tr>
# <tr>
# <td>Alice</td>
# <td>1</td>
# </tr>
# <tr>
# <td>Bob</td>
# <td>0</td>
# </tr>
# <tr>
# <td>Alice</td>
# <td>2</td>
# </tr>
# <tr>
# <td>Bob</td>
# <td>2</td>
# </tr>
# </tbody>
#
# </table>
order = df.groupby("profile_name").review_overall.transform(...)
order
# Now, what do we do with `order`? Hint: It's the same shape as `df` and we
# want to compute the average `review_overall` for all people with `order=0`,
# and all people with `order=1`, and `order=2`...
# %load solutions/groupby_02.py
# ## General `.apply`
#
# We've seen `.agg` for outputting 1 row per group, and `.transform` for outputting 1 row per input row.
#
# The final kind of function application is `.apply`.
# This can do pretty much whatever you want.
# We'll see an example in a later notebook.
# ## Summary
#
# - We used Python's iterator protocol to transform the raw data to a table
# - We saw how Dask could handle larger-than-memory data with a familiar API
# - We used groupby to analyze data by subsets
| 03-Iterators-Groupby.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import soundfile as sf
from IPython.display import Audio
import matplotlib.pyplot as plt
# %matplotlib inline
# # PART 1
#
# - Fill the skeleton functions below with the correct code
# - Find the math functions for the square, triangular and sawtooth waves on the assignment pdf
# - Hint: some of this has been covered in class
#
#
# ### NOTE:
# When selecting odd overtones make sure we process a correct amount of odd overtones corresponding to 'number_overtones' instead of stopping the function at the overtone count equal to 'number_overtones'.
#
# E.g. if we want 3 odd overtones, we need to stop the loop at overtone 7 (f0 + f3 + f5 + f7 ), Not at overtone 3 ( f0 + f3 ).
def sinewave(fs,duration,f0, phase):
time_vector = np.arange(0, duration, 1/fs)
signal = np.sin(2 * np.pi * f0 * time_vector + phase)
return time_vector, signal
def cosinewave(fs,duration,f0, phase):
time_vector = np.arange(0, duration, 1/fs)
signal = np.cos(2 * np.pi * f0 * time_vector + phase)
return time_vector, signal
def squarewave(fs,duration,f0,number_overtones, phase):
time_vector = np.arange(0, duration, 1/fs)
square_wave = np.zeros(duration*fs) #initialize
anti_alias = 0
for k in range(1,(number_overtones+1)*2,2):
freq = 2*np.pi*f0*k
square_wave = square_wave + 1/k * np.sin(freq*time_vector + phase)
return time_vector, square_wave
def triangularwave(fs,duration,f0,number_overtones,phase):
time_vector = np.arange(0, duration, 1/fs)
triangle_wave = np.zeros(duration*fs) #initialize
anti_alias = 0
for k in range(1,(number_overtones+1)*2,2):
freq = 2*np.pi*f0*k
triangle_wave = triangle_wave + ((-1)**((k-1)/2)) * 1/(k**2) * np.sin(freq*time_vector + phase)
return time_vector, triangle_wave
def sawtoothwave(fs,duration,f0,number_overtones,phase):
time_vector = np.arange(0, duration, 1/fs)
saw_wave = np.zeros(duration*fs) #initialize
anti_alias = 0
for k in range(1,(number_overtones+1)):
freq = 2*np.pi*f0*k
saw_wave = saw_wave + 1/k * np.sin(freq*time_vector + phase)
saw_wave = -saw_wave
return time_vector, saw_wave
# # PART 2
#
# - Use the functions above to plot and display audio for each waveform
# - Remember to plot against time (use the time_vector) and label the plot
# - Use plt.xlim to diplay only 2 periods of the waveform
# +
fs = 44100
f0 = 500
duration = 1 # 1 second
phase = 0
# SINE
(time_vector, sine_1) = sinewave(fs,duration,f0, phase)
plt.plot(sine_1)
plt.ylabel('Amplitude')
plt.xlabel('Samples')
plt.title('Sinewave')
plt.xlim([0,(2*fs/f0)])
Audio(sine_1, rate = fs)
# +
# COSINE
(time_vector, cos_1) = cosinewave(fs,duration,f0, phase)
plt.plot(cos_1)
plt.ylabel('Amplitude')
plt.xlabel('Samples')
plt.title('Cosinewave')
plt.xlim([0,(2*fs/f0)])
Audio(cos_1, rate = fs)
# +
# SQUARE WAVE
number_overtones = 1000 # Number of Overtones.
(time_vector, square_1) = squarewave(fs,duration,f0,number_overtones, phase)
plt.plot(square_1)
plt.ylabel('Amplitude')
plt.xlabel('Samples')
plt.title('Squarewave')
plt.xlim([0,(2*fs/f0)])
Audio(square_1, rate = fs)
# +
# TRIANGLE
(time_vector, triangle_1) = triangularwave(fs,duration,f0,number_overtones,phase)
plt.plot(triangle_1)
plt.ylabel('Amplitude')
plt.xlabel('Samples')
plt.title('Triangularwave')
plt.xlim([0,(2*fs/f0)])
Audio(triangle_1, rate = fs)
# +
# SAWTOOTH
(time_vector, saw_1) = sawtoothwave(fs,duration,f0,number_overtones,phase)
plt.plot(saw_1)
plt.ylabel('Amplitude')
plt.xlabel('Samples')
plt.title('Sawtoothwave')
plt.xlim([0,(2*fs/f0)])
Audio(saw_1, rate = fs)
# -
# # Part 3
# # Extra Credit - Noise Generator
#
# - Fill the function, plot and display
# +
def noise_gen(fs,duration):
time_vector = np.arange(0, duration, 1/fs)
values = 2* np.random.rand(len(time_vector)) - 1
return time_vector, values
# +
# NOISE
fs = 44100
duration = 1 # 1 sec
(time_vector, noise_1) = noise_gen (fs,duration)
plt.plot(noise_1)
plt.ylabel('Amplitude')
plt.xlabel('Samples')
plt.title('Noise')
plt.xlim([0,(2*fs/f0)])
Audio(noise_1, rate = fs)
# -
# ## Unit Test (for graders -- do not edit or delete)
# +
time_vector, sine_test = sinewave(4,1,1,0)
assert np.allclose(sine_test, np.array([0, 1, 0, -1]))
assert np.allclose(time_vector, np.array([0, 0.25, 0.5, 0.75] ))
print('Sinewave OK!')
# +
time_vector, cosine_test = cosinewave(4,1,1,0)
assert np.allclose(cosine_test, np.array([1, 0, -1, 0]))
assert np.allclose(time_vector, np.array([0, 0.25, 0.5, 0.75] ))
print('Cosinewave OK!')
# +
time_vector, squarewave_test = squarewave(4, 1, 1, 10, 0)
squarewave_test = squarewave_test/np.max(np.abs(squarewave_test))
assert np.allclose(squarewave_test, np.array([0, 1, 0, -1]))
assert np.allclose(time_vector, np.array([0, 0.25, 0.5, 0.75] ))
print('Squarewave OK!')
# +
time_vector, triangularwave_test = triangularwave(4, 1, 1, 10, 0)
triangularwave_test = triangularwave_test/np.max(np.abs(triangularwave_test))
assert np.allclose(triangularwave_test, np.array([0, 1, 0, -1]))
assert np.allclose(time_vector, np.array([0, 0.25, 0.5, 0.75] ))
print('Triang OK!')
# +
time_vector, sawtoothwave_test = sawtoothwave(4, 1, 1, 10, 0)
sawtoothwave_test = sawtoothwave_test/np.max(np.abs(sawtoothwave_test))
assert np.allclose(sawtoothwave_test, np.array([0, -1, 0, 1]))
assert np.allclose(time_vector, np.array([0, 0.25, 0.5, 0.75] ))
print('Sawtooth OK!')
# -
| HW2-ElenaGeorgieva.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="zqcFCOmzHdDu"
# # Inference and Validation
#
# Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training.
# + [markdown] colab_type="text" id="bmWozIIC46NB"
# ## Import Resources
# -
import warnings
warnings.filterwarnings('ignore')
# + colab={} colab_type="code" id="FVP2jRaGJ9qu"
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
# -
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
# + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="hYwKhqk_h6-y" outputId="72f54d14-8abc-4fc9-95bf-dd5c2cc1b56b"
print('Using:')
print('\t\u2022 TensorFlow version:', tf.__version__)
print('\t\u2022 tf.keras version:', tf.keras.__version__)
print('\t\u2022 Running on GPU' if tf.test.is_gpu_available() else '\t\u2022 GPU device not found. Running on CPU')
# + [markdown] colab_type="text" id="tPFaSI1S5CM3"
# ## Load the Dataset
#
# We are now going to load the Fashion-MNIST dataset using tensorflow_datasets as we've done before. In this case, however, we are going to define how to split the dataset ourselves. We are going to split the dataset such that 60\% of the data will be used for training, 20\% of the data will be used for validation, and the remaining 20\% of the data will be used for testing.
#
# To do this, we are going to do two things in succession. First we are going to combine (merge) the `train` and `test` splits. After the splits are merged into a single set, we are going to sub-split it into three sets, where the first set has 60\% of the data, the second set has 20\% of the data, and the third set has the remaining 20\% of the data.
#
# To merge all the splits of a dataset together, we can use `split=tfds.Split.ALL`. For example,
#
# ```python
# dataset = tfds.load('fashion_mnist', split=tfds.Split.ALL)
# ```
#
# will return a `dataset` that contains a single set with 70,000 examples. This is because the Fashion-MNIST dataset has a `train` split with 60,000 examples and a `test` split with 10,000 examples. The `tfds.Split.ALL` keyword merged both splits into a single set containing the combined data from both splits.
#
# After we have merged the splits into a single set, we need to sub-split it. We can sub-split our dataset by using the `.subsplit()` method. There are various ways in which we can use the `.subsplit()` method. Here we are going to sub-split the data by providing the percentage of data we want in each set. To do this we just pass a list with percentages we want in each set. For example,
#
# ```python
# split=tfds.Split.ALL.subsplit([60,20,20])
# ```
#
# will sub-split our dataset into three sets, where the first set has 60\% of the data, the second set has 20\% of the data, and the third set has the remaining 20\% of the data. A word of **caution**, TensorFlow Datasets does not guarantee the reproducibility of the sub-split operations. That means, that two different users working on the same version of a dataset and using the same sub-split operations could end-up with two different sets of examples. Also, if a user regenerates the data, the sub-splits may no longer be the same. To learn more about `subsplit` and other ways to sub-split your data visit the [Split Documentation](https://www.tensorflow.org/datasets/splits#subsplit).
# + colab={"base_uri": "https://localhost:8080/", "height": 207} colab_type="code" id="bgIzJ4oRLQpd" outputId="a79abec7-96bb-4b4b-cf34-399447bb9814"
train_split = 60
test_val_split = 20
splits = tfds.Split.ALL.subsplit([train_split, test_val_split, test_val_split])
dataset, dataset_info = tfds.load('fashion_mnist', split=splits, as_supervised=True, with_info=True)
training_set, validation_set, test_set = dataset
# + [markdown] colab_type="text" id="jNJA3Xe-A4q_"
# When we use `split=tfds.Split.ALL.subsplit([60,20,20])`, `tensorflow_datasets` returns a tuple with our sub-splits. Since we divided our dataset into 3 sets, then, in this case, `dataset` should be a tuple with 3 elements.
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="3OBvm_yf5ijj" outputId="d239ea33-ce86-4817-fe04-2e4785d7fd5d"
# Check that dataset is a tuple
print('dataset has type:', type(dataset))
# Print the number of elements in dataset
print('dataset has {:,} elements '.format(len(dataset)))
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="MGwZVYsj6OXe" outputId="605bf42a-34cd-4a36-9907-ec2c16beeb11"
# Display dataset
dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 598} colab_type="code" id="2HL4BL_lnz2z" outputId="a7558de0-93b8-4f40-80e9-e378ce248fc4"
# Display dataset_info
dataset_info
# + [markdown] colab_type="text" id="14QBpBlU5sOm"
# ## Explore the Dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="_GnHgnh-eSuf" outputId="156da5c6-bc6c-4d8a-8768-26d4f48c2e13"
total_examples = dataset_info.splits['train'].num_examples + dataset_info.splits['test'].num_examples
num_training_examples = (total_examples * train_split) // 100
num_validation_examples = (total_examples * test_val_split) // 100
num_test_examples = num_validation_examples
print('There are {:,} images in the training set'.format(num_training_examples))
print('There are {:,} images in the validation set'.format(num_validation_examples))
print('There are {:,} images in the test set'.format(num_test_examples))
# + colab={} colab_type="code" id="4WMKWKxPcgOU"
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# + [markdown] colab_type="text" id="xlHOpMIq5yYa"
# ## Create Pipeline
# + colab={} colab_type="code" id="mBAzrt_nUfNZ"
def normalize(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
batch_size = 64
training_batches = training_set.cache().shuffle(num_training_examples//4).batch(batch_size).map(normalize).prefetch(1)
validation_batches = validation_set.cache().batch(batch_size).map(normalize).prefetch(1)
testing_batches = test_set.cache().batch(batch_size).map(normalize).prefetch(1)
# + [markdown] colab_type="text" id="39MO_CpdneIY"
# ## Build the Model
#
# Here we'll build and compile our model as usual.
# + colab={} colab_type="code" id="agzupDJxnekW"
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28,28,1)),
tf.keras.layers.Dense(256, activation = 'relu'),
tf.keras.layers.Dense(128, activation = 'relu'),
tf.keras.layers.Dense(64, activation = 'relu'),
tf.keras.layers.Dense(10, activation = 'softmax')
])
# + colab={} colab_type="code" id="uI0kZt-cpbXO"
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# + [markdown] colab_type="text" id="-cyd0DQSoazb"
# ## Evaluate Loss and Accuracy on the Test Set
#
# The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. Let's see how the model performs on our test set.
# + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="P3kE7BEAobKs" outputId="7e140d18-5cdc-404c-f3da-21310456965e"
loss, accuracy = model.evaluate(testing_batches)
print('\nLoss on the TEST Set: {:,.3f}'.format(loss))
print('Accuracy on the TEST Set: {:.3%}'.format(accuracy))
# + [markdown] colab_type="text" id="mx52hCxlp27g"
# The network is untrained so it's making random guesses and we should see an accuracy around 10%.
# + [markdown] colab_type="text" id="ziyVd9R76H25"
# ## Train the Model with the Validation Set
#
# Now let's train our network as usual, but this time we are also going to incorporate our validation set into the training process.
#
# During training, the model will only use the training set in order to decide how to modify its weights and biases. Then, after every training epoch we calculate the loss on the training and validation sets. These metrics tell us how well our model is "learning" because it they show how well the model generalizes to data that is not used for training. It's important to remember that the model does not use any part of the validation set to tune its weights and biases, therefore it can tell us if we're overfitting the training set.
#
# We can incorporate our validation set into the training process by including the `validation_data=validation_batches` argument in the `.fit` method.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="GFmdnOz1pNoa" outputId="538cb017-140c-4ece-e515-08083b3eaf29"
EPOCHS = 30
history = model.fit(training_batches,
epochs = EPOCHS,
validation_data=validation_batches)
# + [markdown] colab_type="text" id="CMQnPRTwPZbU"
# ## Loss and Validation Plots
#
# If we look at the training and validation losses achieved on epoch 30 above, we see that the loss on the training set is much lower than that achieved on the validation set. This is a clear sign of overfitting. In other words, our model has "memorized" the training set so it performs really well on it, but when tested on data that it wasn't trained on (*i.e.* the validation dataset) it performs poorly.
#
# Let's take a look at the model's loss and accuracy values obtained during training on both the training set and the validation set. This will allow us to see how well or how bad our model is "learning". We can do this easily by using the `history` object returned by the `.fit` method. The `history.history` attribute is a **dictionary** with a record of training accuracy and loss values at successive epochs, as well as validation accuracy and loss values when applicable.
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="EYjSIPoO6hXC" outputId="bc4ff436-db55-4d55-f6d4-82003b9ecd50"
# Check that history.history is a dictionary
print('history.history has type:', type(history.history))
# Print the keys of the history.history dictionary
print('\nThe keys of history.history are:', list(history.history.keys()))
# + [markdown] colab_type="text" id="QyUfyPzD9hLA"
# Let's use the `history.history` dictionary to plot our model's loss and accuracy values obtained during training.
# + colab={"base_uri": "https://localhost:8080/", "height": 498} colab_type="code" id="wDFZCZnArx1T" outputId="1f696a01-ceaf-4a65-b04a-cb33ddc8d09a"
training_accuracy = history.history['accuracy']
validation_accuracy = history.history['val_accuracy']
training_loss = history.history['loss']
validation_loss = history.history['val_loss']
epochs_range=range(EPOCHS)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, training_accuracy, label='Training Accuracy')
plt.plot(epochs_range, validation_accuracy, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, training_loss, label='Training Loss')
plt.plot(epochs_range, validation_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# + [markdown] colab_type="text" id="MzkpAw5SP4cI"
# ## Early Stopping
#
# If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.
# This happens when our model performs really well on the training data but it fails to generalize well enough to also perform well on the validation set. We can tell that this is happening because when we finished training the validation loss is higher than the training loss.
#
# One way to prevent our model from overfitting is by stopping training when we achieve the lowest validation loss. If we take a look at the plots we can see that at the beginning of training the validation loss starts decreasing, then after some epochs it levels off, and then it just starts increasing. Therefore, we can stop training our model when the validation loss levels off, such that our network is accurate but it's not overfitting.
#
# This strategy is called **early-stopping**. We can implement early stopping in `tf.keras` by using a **callback**. A callback is a set of functions to be applied at given stages of the training process. You can pass a list of callbacks to the `.fit()` method by using the `callbacks` keyword argument.
#
# To implement early-stopping during training we will use the callback:
#
#
# ```python
# tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)
# ```
#
# The `monitor` argument specifies the quantity we want to monitor during training to determine when to stop training. The `patience` argument determines the number of consecutive epochs with no significant improvement after which training will be stopped. We can also specify the minimum change in the monitored quantity to qualify as an improvement, by specifying the `min_delta` argument. For more information on the early-stopping callback check out the [EarlyStopping
# documentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/callbacks/EarlyStopping#class_earlystopping).
# + colab={"base_uri": "https://localhost:8080/", "height": 425} colab_type="code" id="1Ch_iMqOQ6K3" outputId="5717abb0-17ca-48cc-b4f5-b1bca74af9c0"
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28,28,1)),
tf.keras.layers.Dense(256, activation = 'relu'),
tf.keras.layers.Dense(128, activation = 'relu'),
tf.keras.layers.Dense(64, activation = 'relu'),
tf.keras.layers.Dense(10, activation = 'softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Stop training when there is no improvement in the validation loss for 5 consecutive epochs
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)
history = model.fit(training_batches,
epochs = 100,
validation_data=validation_batches,
callbacks=[early_stopping])
# + colab={"base_uri": "https://localhost:8080/", "height": 498} colab_type="code" id="mcyx8WQHVxEW" outputId="07ce0c7e-4263-4af9-91ba-cb5d82dd694f"
training_accuracy = history.history['accuracy']
validation_accuracy = history.history['val_accuracy']
training_loss = history.history['loss']
validation_loss = history.history['val_loss']
epochs_range=range(len(training_accuracy))
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, training_accuracy, label='Training Accuracy')
plt.plot(epochs_range, validation_accuracy, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, training_loss, label='Training Loss')
plt.plot(epochs_range, validation_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# + [markdown] colab_type="text" id="dSNjBbarspdj"
# ## Dropout
#
# Another common method to reduce overfitting is called **dropout**, where we randomly drop neurons in our model during training. This forces the network to share information between weights, increasing its ability to generalize to new data. We can implement dropout in `tf.keras` by adding `tf.keras.layers.Dropout()` layers to our models.
#
# ```python
# tf.keras.layers.Dropout(rate)
# ```
# randomly sets a fraction `rate` of the dropout layer's input units to 0 at each update during training. The `rate` argument is a float between 0 and 1, that determines the fraction of neurons from the previous layer that should be turned off. For example, `rate =0.5` will drop 50\% of the neurons.
#
# It's important to note that we should never apply dropout to the input layer of our network. Also, remember that during training we want to use dropout to prevent overfitting, but during inference we want to use all the neurons in the network. `tf.keras` is designed to care of this automatically, so it uses the dropout layers during training, but automatically ignores them during inference.
# + [markdown] colab_type="text" id="ABj6x-zss0I1"
# > **Exercise:** Add 3 dropout layers with a `rate=0.2` to our previous `model` and train it on Fashion-MNIST again. See if you can get a lower validation loss.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="IvSOnFdBsfbL" outputId="83adf78f-bbaf-456f-ce94-2449834e5864"
## Solution
# + [markdown] colab_type="text" id="Eqc6YFpFvwIq"
# ## Inference
#
# Now that the model is trained, we can use it to perform inference. Here we are going to perform inference on 30 images and print the labels in green if our model's prediction matches the true label. On the other hand, if our model's prediction doesn't match the true label, we print the label in red.
# + colab={"base_uri": "https://localhost:8080/", "height": 858} colab_type="code" id="_bA9AnH9vq2m" outputId="f4a0767d-a17a-4ddf-facd-4caa5aa3aac9"
for image_batch, label_batch in testing_batches.take(1):
ps = model.predict(image_batch)
images = image_batch.numpy().squeeze()
labels = label_batch.numpy()
plt.figure(figsize=(10,15))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(images[n], cmap = plt.cm.binary)
color = 'green' if np.argmax(ps[n]) == labels[n] else 'red'
plt.title(class_names[np.argmax(ps[n])], color=color)
plt.axis('off')
# + [markdown] colab_type="text" id="DcBmIg4Sdri_"
# ## Next Up!
#
# In the next lesson, we'll see how to save our trained models. In general, you won't want to train a model every time you need it. Instead, you'll train once, save it, then load the model when you want to train more or use it for inference.
# + colab={} colab_type="code" id="nQdG3m0N9yDl"
| tensorflow/lessons/intro-to-tensorflow/part_5_inference_and_validation_exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
df = pd.DataFrame({'Experiencia':[1,3,5],'Profesional':[1,0,1],'Sueldo':[470000,450000,950000]})
df
# +
from sklearn.model_selection import *
X = np.array(df.drop(['Sueldo'],axis=1))
y = np.array(df.Sueldo)
# -
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.2)
print(X_train)
print("")
print(X_test)
# +
from sklearn.linear_model import *
LR = LinearRegression().fit(X_train,y_train)
y_pred = LR.predict(X_test)
print(y_pred)
print("")
print(y_test)
# +
### ejemplo clase
# +
from sklearn.datasets import *
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.2)
LR = LinearRegression().fit(X_train,y_train)
y_pred = LR.predict(X_test)
print(y_pred)
print("")
print(y_test)
# -
| Junio/Clase unidad 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Activity 02 - Part 2
from sklearn.neural_network import MLPClassifier
import pickle
import os
path = os.getcwd()+'/final_model.pkl'
file = open(path, 'rb')
model = pickle.load(file)
pred = model.predict([[42,2,0,0,1,2,1,0,5,8,380,1,-1,0]])
print(pred)
| Activity02/Activity02_Part2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python in 5 minutes...
# While its absurd to think we can learn Python in 5 minutes, it's useful to at least introduce a few basic concepts of the language before we dive in. After covering these, you'll at least get past that first speed bump and can start interacting on your own.
# ## Variable assignment and data types
# Python assigns values to variables with `=`:
zip_code = 27705
myName = "John"
voltage = 9.25
# Really the only rules in assigning variables is that the names can't have spaces or begin with numbers. Oh, and there are a few reserved words you can't use, e.g. `class`...
# You can check the value of a variable with the `print` command:
print(myName)
# When a variable is assigned (or re-assigned) a value, it assumes the **data type** of that value. The most common data types are <u>integers</u>, <u>strings</u>, and <u>float</u>. The data type of a variable can be revealed with the `type` command:
type(myName)
type(zip_code)
# <font color=purple>► What's the type of the `voltage` variable?</font>
type()
# Data types are importamt because it defines what you can do with that variable. This is because Python is an **object oriented language** meaning everything in Python is an object, and every object has a set of **properties** and **methods** (or defined actions).
# ---
# ## ♦ Python syntax
# ### Comments
# One important concept in Python syntax is how to write comments in your code, i.e., code that included in your script, but that is not run. In Python any code written after a `#` is ignored by the kernel. In the example below, the first line is a comment; it is ingored, but is useful to humans reading the script.
#Assign my name to the variable myName
myName = 'John'
# ### Whitespace (indenting) and code blocks
# As a scripting language, it's useful to string together multiple statements and run them together. A simple script is just a sequence of commands run chronologically, but you'll likely want to compose more complex scripts that include **conditional execution** (e.g. `if`...`then`... statements) or **loops**, and these require the language to be able to define specific **blocks** of code. In Python, code blocks are identified via **whitespace** or indenting lines of code. Additionally, a colon (`:`) is often used in the line preceeding a code block.<br>Here are a few examples:
for i in [0,1,2,3,4]:
x = 2 ** i
print(x)
print("Finished!")
x = 100
if x < 10:
print("It's under 10")
elif x < 100:
print("It's under 100")
else:
print("It's greater than or equal to 100")
# ---
# ## ♦ Python collections
# In analyzing data, we deal with multiple related values. Here are typical Python data types for dealing with collections of values.
# ### Lists
# A Python **list** is a collection of values, and the values can vary in their data type. Lists are termed **vectors** because the order of the items it important: it's the only way we can identify specific elements in a list. Python lists are created using brackets: `[`...`]`.
#Create a list
myList = ['Ralph',30,'canoeing',45.3]
#Add one more item to our list
myList.append('Telephone')
# To extract elements from a list, use it's **index**. The one catch here is that **Python indices always start at zero, not 1**
#Print the 3rd item in the list
print (myList[2])
# ### Tuples
# A **tuple** is exactly like a list with the one exception is that, once created, it is **immutable**, meaning we can't add or remove items to the list. Python tuples are created using parentheses: `(`...`)`.
#Create the tuple
myTuple = ('Ralph',30,'canoeing',45.3)
#Extract the 1st item from the tuple
myTuple[]
# ### Dictionaries
# A **dictionary** is a collection of items (of any data type) just like a list or a tuple. However, in a dictionary, values are not referenced by their index, but by a **key** we assign to the value when the dictionary is created or when a value is added to the dictionary. Dictionaries are created using curly braces: `{`<u>key</u>:<u>value</u>`}`.
#Create a dictionary, using names as keys
myDict = {'John':21,'Lauren':20,'Martin':30}
print (myDict['Martin'])
myDict['Martin'] = 45
print (myDict['Martin'])
# ---
# ## ♦ Python Libraries (for data analytics...)
# Python can be extended by importing **libraries** (aka **packages** or **modules**). You may see various forms of importing a library:
# * `import numpy`: imports the entire *NumPy* module; NumPy specific statements are preceded with `numpy.`, e.g. `numpy.ndarray(2)`
# * `import numpy as np`: as above, but references to Numpy functions are made with `np.`, e.g. `np.ndarray(2)`
# * `from numpy import *`: imports the entire *Numpy* module, but functions don't require a prefix: `ndarray(2)`
import numpy as np
np.ndarray(2)
# Importing libraries add new **data types**, i.e. new object definitions, and thus new **properties** and **methods** (functions) to our script. The two big data analytics libraries in Python are **NumPy** *(**Num**eric **Py**thon)* and **Pandas** *(**Pan**el **da**ta)*.
# ### <u>NumPy</u> and the *n-dimensional array*
# Briefly, *NumPy* brings multidimensional analysis to Python with it's key data type: the **n-dimensional array**. This is another collection object, **but** all values in the collection must be the same data type (e.g. all integers or all floats), and indices can assume multiple dimensions. These *ndarrays* offer very speedy calculations and manipulations.
# ### <u>Pandas</u> and the *dataframe*
# The other big libraru in data analytics, Pandas, actually builds off of NumPy's objects, and introduces the **Dataframe**, which is so central to data analysis and this workshop that we'll examine this on its own.
# ## More info
# A great book on Numpy and Pandas is available online!<b>
# https://jakevdp.github.io/PythonDataScienceHandbook/index.html
| 02a-Python-in-5-minutes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %pylab
import PyDSTool as dst
import numpy as np
# +
#theta_dot_i = 2*pi*vi + SUM(w_ij * sin(theta_j - theta_i - phi_ij)
DSargs = dst.args()
DSargs.name = 'CPG Test'
DSargs.pars = {'v_i': 1,
'v_j': 1,
'w_ij': 1,
'w_ji': 1,
'phi_ij': 0.1,
'phi_ji': 0.1}
DSargs.varspecs = {'theta_i': '(2*pi*v_i) + (w_ij * sin(theta_j - theta_i - phi_ij))',
'theta_j': '(2*pi*v_j) + (w_ji * sin(theta_i - theta_j - phi_ji))'}
DSargs.ics = {'theta_i': 0,
'theta_j': np.pi}
DSargs.tdomain = [0,1]
solver = dst.Generator.Vode_ODEsystem(DSargs)
traj = solver.compute('CPG')
pts = traj.sample(dt=0.01)
# +
x_i = (1 + np.cos(pts['theta_i']))
x_j = (1 + np.cos(pts['theta_j']))
#plt.plot(pts['t'], pts['theta_i'])
#plt.plot(pts['t'], pts['theta_j'])
plt.plot(pts['t'], x_i)
plt.plot(pts['t'], x_j)
plt.xlabel('time')
plt.title(solver.name)
# -
print solver.compute('test')
solver
| Sin CPG Test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook was prepared by [<NAME>](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# # Challenge Notebook
# ## Problem: Given a knapsack with a total weight capacity and a list of items with weight w(i) and value v(i), determine the max total value you can carry.
#
# * [Constraints](#Constraints)
# * [Test Cases](#Test-Cases)
# * [Algorithm](#Algorithm)
# * [Code](#Code)
# * [Unit Test](#Unit-Test)
# * [Solution Notebook](#Solution-Notebook)
# ## Constraints
#
# * Can we replace the items once they are placed in the knapsack?
# * Yes, this is the unbounded knapsack problem
# * Can we split an item?
# * No
# * Can we get an input item with weight of 0 or value of 0?
# * No
# * Do we need to return the items that make up the max total value?
# * No, just the total value
# * Can we assume the inputs are valid?
# * No
# * Are the inputs in sorted order by val/weight?
# * Yes
# * Can we assume this fits memory?
# * Yes
# ## Test Cases
#
# * items or total weight is None -> Exception
# * items or total weight is 0 -> 0
# * General case
#
# <pre>
# total_weight = 8
# items
# v | w
# 0 | 0
# a 1 | 1
# b 3 | 2
# c 7 | 4
#
# max value = 14
# </pre>
# ## Algorithm
#
# Refer to the [Solution Notebook](). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
# ## Code
class Item(object):
def __init__(self, label, value, weight):
self.label = label
self.value = value
self.weight = weight
def __repr__(self):
return self.label + ' v:' + str(self.value) + ' w:' + str(self.weight)
class Knapsack(object):
def fill_knapsack(self, input_items, total_weight):
# TODO: Implement me
pass
# ## Unit Test
# **The following unit test is expected to fail until you solve the challenge.**
# +
# # %load test_knapsack_unbounded.py
from nose.tools import assert_equal, assert_raises
class TestKnapsack(object):
def test_knapsack(self):
knapsack = Knapsack()
assert_raises(TypeError, knapsack.fill_knapsack, None, None)
assert_equal(knapsack.fill_knapsack(0, 0), 0)
items = []
items.append(Item(label='a', value=1, weight=1))
items.append(Item(label='b', value=3, weight=2))
items.append(Item(label='c', value=7, weight=4))
total_weight = 8
expected_value = 14
results = knapsack.fill_knapsack(items, total_weight)
total_weight = 7
expected_value = 11
results = knapsack.fill_knapsack(items, total_weight)
assert_equal(results, expected_value)
print('Success: test_knapsack')
def main():
test = TestKnapsack()
test.test_knapsack()
if __name__ == '__main__':
main()
# -
# ## Solution Notebook
#
# Review the [Solution Notebook]() for a discussion on algorithms and code solutions.
| recursion_dynamic/knapsack_unbounded/knapsack_unbounded_challenge.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="QRcqbpLpFK5o"
# # **Análise Exploratória de Dados de Logística**
# + [markdown] id="6-CvdKwqFPiW"
# ## Contexto
# + [markdown] id="XRURE1uUFXGw"
# Análise de comportamento do operador logístico **Loggi** na região de Brasília.
#
# + [markdown] id="QxukLHaqFnkU"
# ## Pacotes e bibliotecas
# + id="VXUEW0VrF7XW"
import json
import matplotlib.pyplot as plt
import numpy as np
import geopy
from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent="ebac_geocoder")
from geopy.extra.rate_limiter import RateLimiter
# !pip3 install geopandas;
import seaborn as sns
import pandas as pd
import geopandas
# + [markdown] id="irQxHW1zGkdZ"
# ## Exploração de dados
# + id="lxLj8e0GHAnr"
# !wget -q "https://raw.githubusercontent.com/andre-marcos-perez/ebac-course-utils/main/dataset/deliveries.json" -O deliveries.json
# !wget -q "https://raw.githubusercontent.com/andre-marcos-perez/ebac-course-utils/main/dataset/deliveries-geodata.csv" -O deliveries-geodata.csv
# !wget -q "https://geoftp.ibge.gov.br/cartas_e_mapas/bases_cartograficas_continuas/bc100/go_df/versao2016/shapefile/bc100_go_df_shp.zip" -O distrito-federal.zip
# !unzip -q distrito-federal.zip -d ./maps
# !cp ./maps/LIM_Unidade_Federacao_A.shp ./distrito-federal.shp
# !cp ./maps/LIM_Unidade_Federacao_A.shx ./distrito-federal.shx
# + id="HuhBCNWq-p-_"
# dado bruto em um dict:
with open('deliveries.json', mode='r', encoding='utf8') as file:
data = json.load(file)
# + id="ruVeUsmoXIF5"
# dado bruto no pandas:
deliveries_df = pd.DataFrame(data)
deliveries_df
# + id="W7hW0kXIXKsZ"
# trabalhando na coluna "origin":
hub_origin_df = pd.json_normalize(deliveries_df["origin"])
deliveries_df = pd.merge(left=deliveries_df,
right=hub_origin_df,
how='inner',
left_index=True,
right_index=True)
deliveries_df = deliveries_df.drop("origin", axis=1)
deliveries_df = deliveries_df[["name",
"region",
"lng",
"lat",
"vehicle_capacity",
"deliveries"]]
deliveries_df.rename(columns={"lng": "hub_lng",
"lat": "hub_lat"},
inplace=True)
# + id="V0eAZRTDXOQ6"
#trabalhando na coluna "deliveries":
deliveries_exploded_df = deliveries_df[["deliveries"]].explode("deliveries")
deliveries_normalized_df = pd.concat([
pd.DataFrame(deliveries_exploded_df["deliveries"]
.apply(lambda record: record["size"]))
.rename(columns={"deliveries": "delivery_size"}),
pd.DataFrame(deliveries_exploded_df["deliveries"]
.apply(lambda record: record["point"]["lng"]))
.rename(columns={"deliveries": "delivery_lng"}),
pd.DataFrame(deliveries_exploded_df["deliveries"]
.apply(lambda record: record["point"]["lat"]))
.rename(columns={"deliveries": "delivery_lat"}),
], axis= 1)
deliveries_df = deliveries_df.drop("deliveries", axis=1)
deliveries_df = pd.merge(left=deliveries_df,
right=deliveries_normalized_df,
how='right',
left_index=True,
right_index=True)
deliveries_df.reset_index(inplace=True, drop=True)
# + id="4kNVdoskXSP1"
# DataFrame
deliveries_df
# + [markdown] id="98hexQTyJS9I"
# ## Manipulação
# + id="DXU4Ee0QJS9Q"
# Trasformando as coordenadas em endereços:
hub_df = deliveries_df[["region", "hub_lng", "hub_lat"]]
hub_df = hub_df.drop_duplicates().sort_values(by="region").reset_index(drop=True)
geocoder = RateLimiter(geolocator.reverse, min_delay_seconds=1)
hub_df["coordinates"] = hub_df["hub_lat"].astype(str) + ", " + hub_df["hub_lng"].astype(str)
hub_df["geodata"] = hub_df["coordinates"].apply(geocoder)
hub_geodata_df = pd.json_normalize(hub_df["geodata"]
.apply(lambda data: data.raw))
hub_geodata_df = hub_geodata_df[["address.town",
"address.suburb",
"address.city"]]
hub_geodata_df.rename(columns={"address.town": "hub_town",
"address.suburb": "hub_suburb",
"address.city": "hub_city"},
inplace=True)
hub_geodata_df["hub_city"] = np.where(hub_geodata_df["hub_city"].notna(),
hub_geodata_df["hub_city"],
hub_geodata_df["hub_town"])
hub_geodata_df["hub_suburb"] = np.where(hub_geodata_df["hub_suburb"].notna(),
hub_geodata_df["hub_suburb"],
hub_geodata_df["hub_city"])
hub_geodata_df = hub_geodata_df.drop("hub_town", axis=1)
# + id="UTj-OxZfXrnA"
# Combinando os DataFrames:
hub_df = pd.merge(left=hub_df,
right=hub_geodata_df,
left_index=True,
right_index=True)
hub_df = hub_df[["region", "hub_suburb", "hub_city"]]
deliveries_df = pd.merge(left=deliveries_df,
right=hub_df, how="inner", on="region")
deliveries_df = deliveries_df[["name",
"region", "hub_lng", "hub_lat", "hub_city",
"hub_suburb", "vehicle_capacity",
"delivery_size", "delivery_lng",
"delivery_lat"]]
# + id="IQJYqkd8Xtxk"
# Fazendo a geocodificação reversa da entrega:
deliveries_geodata_df = pd.read_csv("deliveries-geodata.csv")
deliveries_df = pd.merge(left=deliveries_df,
right=deliveries_geodata_df[["delivery_city",
"delivery_suburb"]], how="inner", left_index=True,
right_index=True)
# + id="L_J0FGg7XzPP"
# DataFrame enriquecido
deliveries_df
# + id="31E1rq9cFp7Z"
# controle de qualidade(coluna com tipo de dados):
deliveries_df.info()
# + id="raLXRBoIF684"
# controle de qualidade(dados faltantes):
deliveries_df.isna().any()
# + [markdown] id="KSgjP--1JS9R"
# ## Visualização
# + id="Jlj3ACWCJS9R" colab={"base_uri": "https://localhost:8080/", "height": 684} outputId="62dc5d31-5cbf-4ae3-f094-506e9c850d51"
# Mapa da região de Brasília:
mapa = geopandas.read_file("distrito-federal.shp")
mapa = mapa.loc[[0]]
hub_df = deliveries_df[["region","hub_lng",
"hub_lat"]].drop_duplicates().reset_index(drop=True)
geo_hub_df = geopandas.GeoDataFrame(hub_df,
geometry=geopandas.points_from_xy(hub_df["hub_lng"],
hub_df["hub_lat"]))
geo_deliveries_df = geopandas.GeoDataFrame(deliveries_df,
geometry=geopandas.points_from_xy(deliveries_df["delivery_lng"],
deliveries_df["delivery_lat"]))
fig, ax = plt.subplots(figsize = (50/2.54, 50/2.54))
mapa.plot(ax=ax, alpha=0.4, color="lightgrey")
geo_deliveries_df.query("region == 'df-0'").plot(ax=ax,
markersize=1,
color="blue",
label="df-0")
geo_deliveries_df.query("region == 'df-1'").plot(ax=ax, markersize=1,
color="red", label="df-1")
geo_deliveries_df.query("region == 'df-2'").plot(ax=ax,
markersize=1,
color="seagreen",
label="df-2")
geo_hub_df.plot(ax=ax, markersize=30, marker="x", color="black", label="hub")
plt.title("Entregas no Distrito Federal por Região", fontdict={"fontsize": 16})
lgnd = plt.legend(prop={"size": 15})
for handle in lgnd.legendHandles:
handle.set_sizes([50])
# + [markdown] id="X_Q3uF-DIb3V"
# **Insights**:
#
# 1. As entregas estão corretamente alocadas aos seus respectivos hubs;
#
# 2. Os hubs das regiões 0 e 2 fazem entregas em locais distantes do centro e entre si, o que pode gerar um tempo e preço de entrega maior.
# + colab={"base_uri": "https://localhost:8080/", "height": 294} id="cJWb85r3IrIn" outputId="b20ab4d5-cfc2-41fa-a24e-ea2095b4c302"
# Comportamento das entregas:
# Gráfico das proporções de entrega:
data = pd.DataFrame(deliveries_df[['region',
'vehicle_capacity']].value_counts(normalize=True)).reset_index()
data.rename(columns={0: "region_percent"}, inplace=True)
with sns.axes_style('whitegrid'):
grafico = sns.barplot(data=data, x="region", y="region_percent",
ci=None, palette="pastel")
grafico.set(title='Proporção de entregas por região',
xlabel='Região', ylabel='Proporção');
# + [markdown] id="LIMVe-MYQW2Q"
# **Insight**:
#
#
# * A **distribuição das entregas** está muito concentrada nos hubs das **regiões** **df-1** e **df-2**, mas pouco no da **região df-0**.
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 398} id="0sNyoOL0Pz_X" outputId="689676da-aa74-4651-f37c-9a88457a7180"
# Gráfico das quantidades de entrega separadas pelo tamanho e região:
delivery_region_df = deliveries_df[['region','delivery_size']]
delivery_sorted = delivery_region_df.sort_values(by=['delivery_size'],
ascending=False).reset_index(drop=True)
delivery_sorted_amout = pd.concat([delivery_sorted[['region','delivery_size']],
pd.DataFrame({'amout': len(delivery_sorted)*[1]})], axis=1)
delivery_sorted_amout_df = delivery_sorted_amout.groupby(['region',
'delivery_size']).agg('count').reset_index()
with sns.axes_style('whitegrid'):
grafico = sns.barplot(data=delivery_sorted_amout_df, x='delivery_size',
y='amout', hue='region', ci=None,palette='pastel')
grafico.set(title='Tamanho e quantidade de entregas por região',
xlabel='Tamanho de entregas',
ylabel='Quantidade de entregas',);
grafico.get_legend().set_title('Região');
grafico.figure.set_size_inches(w=40/2.54, h=15/2.54)
# + [markdown] id="oI3uHgyNR_6W"
# **Insight**:
#
# * A **região df-0** possui **pouco volume de carga** e o **tamanho** de suas entregas **em relação aos outros hubs** é menor.
#
#
# + [markdown] id="6dM1wLdwTujO"
# ## Resumo de insights
# + [markdown] id="yR0sOAavUDr-"
# A distribuição das entregas está muito **concentrada** nos hubs das **regiões df-1** e **df-2**, mas **pouco** no da **região df-0**. Contudo a **capacidade dos veículos é mesma** para todos os **hubs**, logo os **veículos** da **região df-0** poderiam ser **deslocados** para a **região df-1** pois tem o maior volume de entregas e as maiores entregas.
| Projeto_de_logistica.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Circuit Tutorial
# This tutorial will show you how to create and use `Circuit` objects, which represent (suprise, suprise) quantum circuits. Noteable among their features is the ability to interface pyGSTi with other quantum circuit standards (e.g., conversion to [OpenQasm](https://arxiv.org/abs/1707.03429))
#
# First let's get the usual imports out of the way.
import pygsti
from pygsti.objects import Circuit
from pygsti.objects import Label as L
# ## Labels
# Let's begin by discussing gate and layer labels, which we'll use to build circuits.
#
# ### Gate Labels
# Gate labels represent a single gate within a circuit, like a CNOT operation between two qubits. A gate label has two parts: a `str`-type name and a tuple of line labels. Gate names typically begin with 'G' because this is expected when we parse circuits from text files. The line labels assign the gate to those lines in the circuit. For example, `"Gx"` or `"Gcnot"` are common gate names, and the integers 0 to $n$ might be the available line labels. We can make a proper gate label by creating a instance of the `Label` class:
myGateLabel = L('Gcnot',(0,1))
# But in nearly all scenarios it's also fine to use the Python tuple `('Gcnot',0,1)` as shorthand - this will get converted into the `Label` object above as needed within pyGSTi.
myJustAsGoodGateLabel = ('Gcnot',0,1)
# As a **special case**, the tuple of line labels can be `None`. This is interpreted to mean that the gate acts on *all* the available lines. When just a string is used as a gate label it acts as though it's line labels are `None`. So these are also valid gate labels:
mySpecialGateLabel = L('Gi')
myJustAsGoodSpecialGateLabel = 'Gi'
# When dealing with actual `Label` objects you can access the name and line labels of a gate label via the `.name` and `.sslbls` (short for "state space labels", which are the same as line labels as we'll see) members:
print("name = ", myGateLabel.name, " sslbls = ", myGateLabel.sslbls)
print("name = ", mySpecialGateLabel.name, " sslbls = ", mySpecialGateLabel.sslbls)
# Simple enough; now let's move on to layer labels:
#
# ### Layer labels
#
# Layer labels represent an entire layer of a circuit. A layer label can either be a single gate label or a sequence of gate labels. In the former case, the layer is interpreted to have just a single gate in it. In the latter case, all of the gate labels comprising the layer label are interpreted as occurring simultaneously (in parallel) during the given circuit layer. Again, there's a proper way to make a layer label using a `Label` object, and a number of shorthand ways which are almost always equivalent:
layerLabel1 = myGateLabel # single-gate layer using Label object
layerLabel2 = myJustAsGoodGateLabel # single-gate layer using tuple
layerLabel3 = 'Gi' # single-gate layer using a string
layerLabel4 = L( [L('Gx',0), L('Gcnot',(0,1))] ) # multi-gate layer as Label object, from Label objects
layerLabel5 = L( [('Gx',0),('Gcnot',0,1)] ) # multi-gate layer as Label object, from tuple objects
layerLabel6 = L( [('Gx',0),L('Gcnot',(0,1))] ) # multi-gate layer as Label object, from mixed objects
layerLabel7 = [('Gx',0),('Gcnot',0,1)] # multi-gate layer as a list of tuples
layerLable8 = L( [] ) # *empty* gate layer - useful to mean the identity on all qubits
# etc, etc. -- anything reasonable works like it should
# Notice that the same `Label` object used for gate labels is used for layer labels. This is natural when gates and layers are thought of more broadly as "operations" (e.g. a layer of an $n$-qubit circuit is just a $n$-qubit gate). Thus, you can access the `.name` and `.sslbls` of a layer too (though the name is given the default value "COMPOUND"):
print("name = ", layerLabel5.name, " sslbls = ", layerLabel5.sslbls)
# A couple tricks:
# - when you're not sure whether a layer `Label` object has a multiple gates or is just a single simple gate label, you can iterate over the `.components` member of a `Label`. This iterates over the gate labels for a multi-gate layer label and just over the label itself for a simple gate label. For example:
print( list(L([('Gx',0),('Gcnot',0,1)]).components) )
print( list(L('Gx',0).components) )
# - you can use `lbl.qubits` as an alias for `lbl.sslbls`, and `lbl.num_qubits` instead of `len(lbl.sslbls)`. These can improve code legibility when dealing a system of qubits (as opposed to qutrits, etc.). **Beware**: both of these quantities can be `None`, just like `lbl.sslbls`.
lbl = L('Gcnot',(0,1))
print("The label %s applies to %d qubits: %s" % (str(lbl), lbl.num_qubits, str(lbl.qubits)))
lbl = L('Gi')
print("The label %s applies to %s qubits: %s" % (str(lbl), lbl.num_qubits, str(lbl.qubits)))
# ## Circuits
#
# The `Circuit` object encapsulates a quantum circuit as a sequence of *layer labels*, each of which contains zero or more non-identity *gate lables*. A `Circuit` has some number of labeled *lines* which should have a one-to-one correspondence with the factors $\mathcal{H}_i$ when the quantum-state space is written as a tensor product: $\mathcal{H}_1 \otimes \mathcal{H}_2 \cdots \otimes \mathcal{H}_n$. Line labels can be integers or strings (in the above examples we used the integers 0 and 1).
#
# ### Construction
# We initialize a `Circuit` with a sequence of *layer labels*, and either:
# - a sequence of line labels, as `line_labels`, or
# - the number of lines for the circuit, as `num_lines`, in which case the line labels are taken to be integers starting at 0.
# +
c = Circuit( [('Gx',0),('Gcnot',0,1),(),'Gall',('Gy',3)], line_labels=[0,1,2,3])
c2 = Circuit( [('Gx',0),('Gcnot',0,1),(),'Gall',('Gy',3)], num_lines=4) # equivalent to above
c3 = Circuit( [('Gx',0),('Gcnot',0,1),(),'Gall',('Gy',3)] ) # NOT equivalent, because line 2 is never used (see below)
print(c) # You can print circuits to get a
print(c2) # text-art version.
print(c3)
# -
# In the **case of 1 or 2 qubits**, it can be more convenient to dispense with the line labels entirely and just equate gates with circuit layers and represent them with simple Python strings. If we initialize a `Circuit` without specifying the line labels (either by `line_labels` or by `num_lines`) *and* the layer labels don't contain any non-`None` line labels, then a `Circuit` is created which has a single special **'\*'-line** which indicates that this circuit doesn't contain any explicit lines:
c4 = Circuit( ('Gx','Gy','Gi') )
print(c4)
# Using circuits with '\*'-lines is fine (and is the default type of circuit for the "standard" 1- and 2-qubit modules located at `pygsti.construction.std*`); one just needs to be careful not to combine these circuits with other non-'\*'-line circuits.
#
# **1Q Note:** Particularly within the 1-qubit context the **ordering direction** is important. The elements of a `Circuit` are read from **left-to-right**, meaning the first (left-most) layer is performed first. This is very natural for experiments since one can read the operation sequence as a script, executing each gate as one reads from left to right. It's also natural for 2+ qubit circuits which similar to standard quantum circuit diagrams. However, for 1-qubit circuits, since we insist on "normal" matrix multiplication conventions, the fact that the ordering of matrix products is *reversed* from that of operation sequences may be confusing. For example, the circuit `('Ga','Gb','Gc')`, in which Ga is performed first, corresponds to the matrix product $G_c G_b G_a$. The probability of this operation sequence for a SPAM label associated with the (column) vectors ($\rho_0$,$E_0$) is given by $E_0^T G_c G_b G_a \rho_0$, which can be interpreted as "prepare state 0 first, then apply gate A, then B, then C, and finally measure effect 0". While this nuance is typically hidden from the user (the `Model` functions which compute products and probabilities from `Circuit` objects perform the order reversal internally), it becomes very important if you plan to perform matrix products by hand.
#
# ### Implied SPAM
# A `Circuit` may optionally begin with an explicit state-preparation and end with an explicit measurement, but these may be omitted when used with `Model` objects which have only a single state-preparation and/or POVM. Usually state preparation and measurement operations are represented by `"rho"`- and `"M"`-prefixed `str`-type labels that therefore act on all circuit lines. For example:
c5 = Circuit( ['rho010',('Gz',1),[('Gswap',0,1),('Gy',2)],'Mx'] , line_labels=[0,1,2])
print(c5)
# ### Basic member access
# The basic member variables and functions of a `Circuit` that you may be interested in are demonstrated below:
print("depth = ", c.depth) # circuit depth, i.e., the number of layers
print("tup = ", c.tup) # circuit as a tuple of layer-labels (elements are *always* Label objects)
print("str = ", c.str) # circuit as a single-line string
print("lines = ",c.line_labels) # tuple of line labels
print("#lines = ",c.num_lines) #number of line labels
print("#multi-qubit gates = ", c.num_multiq_gates)
c_copy = c.copy() #copies the circuit
# ### Indexing and Slicing
# When a `Circuit` is indexed as if it were a tuple of layer-labels. The index must be an integer and a `Label` object is returned. Once can also access a particular gate label by providing a second index which is either a single line label, a tuple of line labels, or, *if the line labels are integers*, a slice of line labels:
c = Circuit( [('Gx',0),[('Gcnot',0,1),('Gz',3)],(),'Gall',('Gy',3)], line_labels=[0,1,2,3])
print(c)
print('c[1] = ',c[1]) # layer
print('c[0,0] = ',c[0,0]) # gate at layer=0, line=0 (Gx)
print('c[0,2] = ',c[0,2]) # gate at layer=0, line=2 (nothing)
print('c[1,0] = ',c[1,0]) # gate at layer=0, line=0 (NOTE: nothing because CNOT doesn't *only* occupy line 0)
print('c[1,(0,1)] = ',c[1,(0,1)]) # gate at layer=0, lines=0&1 (Gcnot)
print('c[1,(0,1,3)] = ', c[1,(0,1,3)]) # layer-label restricted to lines 0,1,&3
print('c[1,0:3] = ', c[1,0:3]) # layer-label restricted to lines 0,1,&2 (line-label slices OK b/c ints)
print('c[3,0] = ',c[3,0])
print('c[3,:] = ',c[3,:]) # DEBUG!
# If the first index is a tuple or slice of layer indices, a `Circuit` is returned which contains only the indexed layers. This indexing may be combined with the line-label indexing described above. Here are some examples:
print(c)
print('c[1:3] = ');print(c[1:3]) # Circuit formed from layers 1 & 2 of original circuit
print('c[2:3] = ');print(c[1:2]) # Layer 1 but as a circuit (not the same as c[1], which is a Label)
print('c[0:2,(0,1)] = ');print(c[0:2,(0,1)]) # upper left "box" of circuit
print('c[(0,3,4),(0,3)] = ');print(c[(0,3,4),(0,3)]) #Note: gates only partially in the selected "box" are omitted
# ### Editing Circuits
# **Circuits are by default created as read-only objects**. This is because making them read-only allows them to be hashed (e.g. used as the keys of a dictionary) and there are many tasks that don't require them being editable. That said, it's easy to get an editable `Circuit`: just create one or make a copy of one with `editable=True`:
ecircuit1 = Circuit( [('Gx',0),('Gcnot',0,1),(),'Gall',('Gy',3)], num_lines=4, editable=True)
ecircuit2 = c.copy(editable=True)
print(ecircuit1)
# When a circuit is editable, you can perform additional operations that alter the circuit in place (see below). When you're done, call `.done_editing()` to change the `Circuit` into read-only mode. Once in read-only mode, a `Circuit` cannot be changed back into editable-mode, you must make an editable *copy* of the circuit.
#
# As you may have guessed, you're allowed to *assign* the layers or labels of an editable circuit by indexing:
# +
ecircuit1[0,(2,3)] = ('Gcnot',2,3)
print(ecircuit1)
ecircuit1[2,1] = 'Gz' # interpreted as ('Gz',1)
print(ecircuit1)
ecircuit1[2:4] = [[('Gx',1),('Gcnot',3,2)],('Gy',1)] #assigns to layers 2 & 3
print(ecircuit1)
# -
# There are also methods for inserting and removing lines and layers:
# +
ecircuit1.append_circuit( Circuit([('Gx',0),'Gi'], num_lines=4) )
print(ecircuit1)
ecircuit1.insert_circuit( Circuit([('Gx',0),('Gx',1),('Gx',2),('Gx',3)], num_lines=4), 1)
print(ecircuit1)
ecircuit1.insert_layer( L( (L('Gz',0),L('Gz',3)) ), 0) #expects something like a *label*
print(ecircuit1)
ecircuit1.delete_layers([2,3])
print(ecircuit1)
ecircuit1.insert_idling_lines(2, ['N1','N2'])
print(ecircuit1)
ecircuit1.delete_lines(['N1','N2'])
print(ecircuit1)
# -
# Finally, there are more complex methods which do fancy things to `Circuit`s:
ecircuit1.compress_depth_inplace()
print(ecircuit1)
ecircuit1.change_gate_library({('Gx',0) : [('Gx2',0)]}, allow_unchanged_gates=True)
print(ecircuit1)
ecircuit1.done_editing()
# ### Circuits as tuples
# In many ways `Circuit` objects behave as a tuple of layer labels. We've already shown how indexing and slicing mimic this behavior. You can also add circuits together and multiply them by integers:
c2 = Circuit([('Gx',0),('Gx',1),('Gx',2),('Gx',3)], num_lines=4)
print(c)
print(c+c2)
print(c*2)
# There are also methods to "parallelize" and "serialize" circuits, which are available to read-only circuits too because they return new `Circuit` objects and don't modify anything in place:
c2 = Circuit([[('Gx',0),('Gx',1)],('Gx',2),('Gx',3)], num_lines=4)
print(c2)
print(c2.parallelize())
print(c2.serialize())
# ### String representations
# `Circuit` objects carry along with them a string representation, accessible via the `.str` member. This is intended to hold a compact human-readable expression for the circuit that can be parsed, using pyGSTi's standard circuit format and conventions, to reconstruct the circuit. This isn't quite true because the line-labels are not currently contained in the string representation, but this will likely change in future releases.
#
# Here's how you can construct a `Circuit` with or from a string representation which thereafter illustrates how you can print different representation of a `Circuit`. Note that two `Circuits` may be equal even if their string representations are different.
# +
#Construction of a Circuit
c1 = Circuit( ('Gx','Gx') ) # from a tuple
c2 = Circuit( ('Gx','Gx'), stringrep="Gx^2" ) # from tuple and string representations (must match!)
c3 = Circuit( "Gx^2" ) # from just a string representation
#All of these are equivalent (even though their string representations aren't -- only tuples are compared)
assert(c1 == c2 == c3)
#Printing displays the Circuit representation
print("Printing as string (multi-line string rep)")
print("c1 = %s" % c1)
print("c2 = %s" % c2)
print("c3 = %s" % c3, end='\n\n')
#Printing displays the Circuit representation
print("Printing .str (single-line string rep)")
print("c1 = %s" % c1.str)
print("c2 = %s" % c2.str)
print("c3 = %s" % c3.str, end='\n\n')
#Casting to tuple displays the tuple representation
print("Printing tuple(.) (tuple rep)")
print("c1 =", tuple(c1))
print("c2 =", tuple(c2))
print("c3 =", tuple(c3), end='\n\n')
#Operations
assert(c1 == ('Gx','Gx')) #can compare with tuples
c4 = c1+c2 #addition (note this concatenates string reps)
c5 = c1*3 #integer-multplication (note this exponentiates in string rep)
print("c1 + c2 = ",c4.str, ", tuple = ", tuple(c4))
print("c1*3 = ",c5.str, ", tuple = ", tuple(c5), end='\n\n')
# -
# ### File I/O
# Circuits can be saved to and read from their single-line string format, which uses square brackets to enclose each layer of the circuit. See the lines of [MyCircuits.txt](../tutorial_files/MyCircuits.txt), which we read in below, for examples. Note that a `Circuit`'s line labels are not included in their single-line-string format, and so to reliably import circuits the line labels should be supplied separately to the `load_circuit_list` function:
circuitList = pygsti.io.load_circuit_list("../tutorial_files/MyCircuits.txt", line_labels=[0,1,2,3,4])
for c in circuitList:
print(c)
# ### Converting circuits external formats
#
# `Circuit` objects can be easily converted to [OpenQasm](https://arxiv.org/abs/1707.03429) or [Quil](https://arxiv.org/pdf/1608.03355.pdf) strings, using the `convert_to_openqasm()` and `convert_to_quil()` methods. This conversion is automatic for circuits that containing only gates with name that are in-built into `pyGSTi` (the docstring of `pygsti.tools.internalgates.standard_gatename_unitaries()`). This is with some exceptions in the case of Quil: currently not all of the in-built gate names can be converted to quil gate names automatically, but this will be fixed in the future.
#
# For other gate names (or even more crucially, if you have re-purposed any of the gate names that `pyGSTi` knows for a different unitary), the desired gate name conversation must be specified as an optional argument for both `convert_to_openqasm()` and `convert_to_quil()`.
#
# Circuits with line labels that are *integers* or of the form 'Q*integer*' are auto-converted to the corresponding integer. If either of these labelling conventions is used but the mapping should be different, or if the qubit labelling in the circuit is not of one of these two forms, the mapping should be handed to these conversion methods.
# +
label_lst = [ [L('Gh','Q0'),L('Gh','Q1')], L('Gcphase',('Q0','Q1')), [L('Gh','Q0'),L('Gh','Q1')]]
c = Circuit(label_lst, line_labels=['Q0','Q1'])
print(c)
openqasm = c.convert_to_openqasm()
print(openqasm)
# -
label_lst = [L('Gxpi2','Q0'),L('Gcnot',('Q0','Q1')),L('Gypi2','Q1')]
c2 = Circuit(label_lst, line_labels=['Q0','Q1'])
quil = c2.convert_to_quil()
print(quil)
# ### Simulating circuits
# `Model` objects in pyGSTi are able to *simulate*, or "generate the outcome probabilities for", circuits. To demonstrate, let's create a circuit and a model (see the tutorials on ["explicit" models](ExplicitModel.ipynb) and ["implicit" models](ImplicitModel.ipynb) for more information on model creation):
clifford_circuit = Circuit([ [L('Gh','Q0'),L('Gh','Q1')],
L('Gcphase',('Q0','Q1')),
[L('Gh','Q0'),L('Gh','Q1')]],
line_labels=['Q0','Q1'])
model = pygsti.construction.create_localnoise_model(num_qubits=2, gate_names=('Gh','Gcphase'),
qubit_labels=['Q0','Q1'])
# Then circuit outcome probabilities can be computed using either the `model.probabilities(circuit)` or `circuit.simulate(model)`, whichever is more convenient:
out1 = model.probabilities(clifford_circuit)
out2 = clifford_circuit.simulate(model)
# The output is simply a dictionary of outcome probabilities:
out1
# The keys of the outcome dictionary `out` are things like `('00',)` instead of just `'00'` because of possible *intermediate* outcomes. See the [Instruments tutorial](advanced/Instruments.ipynb) if you're interested in learning more about intermediate outcomes.
#
# Computation of outcome probabilities may be done in a variety of ways, and `Model` objects are associated with a *forward simulator* that supplies the core computational routines for generating outcome probabilities. In the example above the simulation was performed by multiplying together process matrices. For more information on the types of forward simulators in pyGSTi and how to use them, see the [forward simulators tutorial](../algorithms/advanced/ForwardSimulationTypes.ipynb).
# ## Conclusion
# This concludes our detailed look into the `Circuit` object. If you're intersted in using circuits for specific applications, you might want to check out the [tutorial on circuit lists](advanced/CircuitLists.ipynb) or the [tutorial on constructing GST circuits](advanced/GSTCircuitConstruction.ipynb)
| jupyter_notebooks/Tutorials/objects/Circuit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Tulasi-ummadipolu/LetsUpgrade-Python-B7/blob/master/Day8%20Assignment1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="WbB6st4Om4yu" colab_type="code" colab={}
def fibonacci(num):
def fibo(n):
if n<=0:
print("The number is incorrect")
elif n==1:
return 0
elif n==2:
return 1
else:
return ((fibo(n-1) + fibo(n-2)))
return num(n)
return fibo
# + id="GI1j8z5ipvs5" colab_type="code" colab={}
@fibonacci
def fibo_num(n):
print(n,"Is the number")
# + id="X_m4H5Ibp6I_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3bb95ae8-0ab9-44df-cc9a-29e65adeaefe"
fibo_num(1)
# + id="Zki1MNWvp_dt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="301c3332-712d-4a8f-f687-9b7d8f550e85"
fibo_num(10)
# + id="mWdMslqLqBDu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="68e0c5b4-a37f-4705-d475-b56819f192c1"
fibo_num(20)
| Day8 Assignment1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project 1: Polynomial Regression Model
#
# 1\. Build Polynomial Regression Models
# * Use closed form solution to estimate parameters
# * Use packages of choice to estimate parameters<br>
#
# 2\. Model Performance Assessment
# * Provide an analytical rationale with choice of model
# * Visualize the Model performance
# * MSE, R-Squared, Train and Test Error <br>
#
# 3\. Model Interpretation
#
# * Intepret the results of your model
# * Intepret the model assement <br>
#
# 4\. Model Dianostics
# * Does the model meet the regression assumptions
#
# #### About this Notebook
#
# 1\. The dataset used is the housing dataset from Seattle homes
# 2\. Online resources we're used to aid in development and analysis
#
# Let's get started.
# ### Packages
#
# Importing the necessary packages for the analysis
# +
# %matplotlib inline
# Necessary Packages
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt
# Model and data preprocessing
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error, r2_score
# -
# The dataset provided is a titled *housing_data.csv* and contains housing prices and information about the features of the houses. Below, read the data into a variable and visualize the top 8 rows of the data.
# +
# Initiliazing seed
np.random.seed(42)
data = pd.read_csv('housing_data.csv')
data.dropna(inplace=True)
# -
df = pd.DataFrame(data)
df.head(5)
df.shape
# ### Split data into train and test
#
# In the code below, we need to split the data into the train and test for modeling and validation of our models. We will cover the Train/Validation/Test as we go along in the project. Fill the following code.
#
# 1\. Subset the features to the variable: features <br>
# 2\. Subset the target variable: target <br>
# 3\. Set the test size in proportion in to a variable: test_size <br>
#
X = df.drop(['price'], axis=1)
# df.iloc[:, 0:5]
y = df.drop(['lot_area', 'firstfloor_sqft', 'living_area', 'bath', 'garage_area'], axis=1)
# df.iloc[:, 5:6]
test_size = .33
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=42)
# ### Feature Assessment
features = pd.plotting.scatter_matrix(data, figsize=(14,8), alpha=1, diagonal='kde')
data.corr(method='pearson').style.format("{:.2}").background_gradient(cmap=plt.get_cmap('coolwarm'), axis=1)
# Pass the necessary arguments in the function to calculate the coefficients
def compute_estimators(feature, target):
n1 = np.sum(feature*target) - np.mean(target)*np.sum(feature)
d1 = np.sum(feature*feature) - np.mean(feature)*np.sum(feature)
# Compute the Intercept and Slope
beta1 = n1/d1
beta0 = np.mean(target) - beta1*np.mean(feature)
return beta0, beta1 # Return the Intercept and Slope
# Remember to pass the correct arguments
count = 0
for x in X_train:
x = X_train.iloc[:, count]
beta0, beta1 = compute_estimators(x, y_train['price'])
print(x.name, "beta0:", beta0)
print(x.name, "beta1:", beta1, "\n")
count += 1
# +
# Initilize the linear Regression model here
regression = LinearRegression()
count = 0
for x in X_train:
x = X_train.iloc[:, count]
regression.fit(x.values.reshape(-1, 1), y_train['price'])
print(x.name, "beta0:", regression.intercept_)
print(x.name, "beta1:", regression.coef_, "\n")
count += 1
# -
#Function used to plot the data
def plotting_model(feature, target, predictions, name):
""" Create a scatter and predictions """
fig = plt.figure(figsize=(10,8))
plot_model = regression.fit(feature, target)
plt.scatter(x=feature, y=target, color='blue')
plt.plot(feature, predictions, color='red')
plt.xlabel(name)
plt.ylabel('Price')
return regression
# +
#Function that computes predictions of our model using the betas above + the feature data we've been using
def model_predictions(intercept, slope, feature):
""" Compute Model Predictions """
y_hat = (slope*feature)+intercept
return y_hat
#Function to compute MSE which determines the total loss for each predicted data point in our model
def mean_square_error(y_outcome, predictions):
""" Compute the mean square error """
mse = (np.sum((y_outcome - predictions) ** 2))/np.size(predictions)
return mse
# +
count = 0
for x in X_train:
x = X_train.iloc[:, count]
# Compute the Coefficients
beta0, beta1 = compute_estimators(x, y_train['price'])
# Print the Intercept and Slope
print(x.name)
print('beta0:', beta0)
print('beta1:', beta1)
# Compute the Train and Test Predictions
y_hat = model_predictions(beta0, beta1, x)
# Plot the Model Scatter
name = x.name
regression = plotting_model(x.values.reshape(-1, 1), y_train, y_hat, name)
# Compute the MSE
mse = mean_square_error(y['price'], y_hat)
print('mean squared error:', mse)
print()
count+=1
# +
# PolynomialFeatures (prepreprocessing)
count = 0
# for x in X_train:
# x = X_train.iloc[:, count]
X = X['living_area'].values.reshape(-1, 1)
polynomial_regression = PolynomialFeatures(degree=3)
x_poly = polynomial_regression.fit_transform(X)
poly_regression = regression.fit(x_poly, y)
plt.scatter(X, y, color='blue')
plt.plot(X, poly_regression.predict(polynomial_regression.fit_transform(X)), color='red')
plt.title('Polynomial Regression')
plt.xlabel(x.name)
plt.ylabel('Price')
plt.show()
# count += 1
# -
| Machine_Learning/Polynomial_Regression/Poly_Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import cv2
cap = cv2.VideoCapture(2)
cap2 = cv2.VideoCapture(4)
num = 0
while cap.isOpened():
succes1, img = cap.read()
succes2, img2 = cap2.read()
k = cv2.waitKey(5)
if k == 27:
break
elif k == ord('s'): # wait for 's' key to save and exit
cv2.imwrite('images/stereoLeft/imageL' + str(num) + '.png', img)
cv2.imwrite('images/stereoRight/imageR' + str(num) + '.png', img2)
print("images saved!")
num += 1
cv2.imshow('Img 1',img)
cv2.imshow('Img 2',img2)
# Release and destroy all windows before termination
cap.release()
cap2.release()
cv2.destroyAllWindows()
# -
| notebooks/StereoVisionCalibration/calibration_images.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.0 64-bit
# language: python
# name: python3
# ---
import warnings
import itertools
import pandas as pd
import numpy as np
import statsmodels.api as sm
import statsmodels.tsa.api as smt
import statsmodels.formula.api as smf
import scipy.stats as scs
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
df = pd.read_csv("co2_mm_gl.csv")
df.head(5)
df.dtypes
for i in range(len(df)):
if df.iloc[i,1] == 10 or df.iloc[i,1] == 11 or df.iloc[i,1] == 12:
df.iloc[i,0] = str(df.iloc[i,0]) + "-" + str(df.iloc[i,1])+ "-" + "01"
else:
df.iloc[i,0] = str(df.iloc[i,0]) + "-0" + str(df.iloc[i,1])+ "-" + "01"
df = df.set_index("year")
df.index.name = None
df.drop(['month', 'decimal', 'trend'], axis=1, inplace=True)
df = df.rename({'average': 'co2'}, axis=1)
df.index = pd.to_datetime(df.index)
df
data = sm.datasets.co2.load_pandas()
df2 = data.data
df2
df2.index
df.index
# The 'MS' string groups the data in buckets by start of the month
ts = df2['co2'].resample('MS').mean()
ts = ts.fillna(ts.bfill())
ts.index
part1 = ts[:262]
part2 = df.squeeze()
ts = pd.concat([part1, part2], axis=0)
ts.to_csv('ppm_ts.csv', index=True)
ts.isnull().sum()
plt.close()
ts.plot(figsize=(10, 6))
plt.show()
decomposition = sm.tsa.seasonal_decompose(ts, model='additive')
from pylab import rcParams
#rcParams['figure.figsize'] = 12, 10
fig = decomposition.plot()
fig.set_figwidth(12)
fig.set_figheight(8)
plt.show()
# +
# Define the p, d and q parameters to take any value between 0 and 2
p = d = q = range(0, 2)
# Generate all different combinations of p, d and q triplets
pdq = list(itertools.product(p, d, q))
# Generate all different combinations of seasonal p, q and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
print('Examples of parameter combinations for Seasonal ARIMA...')
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[1]))
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[2]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[3]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[4]))
# +
warnings.filterwarnings("ignore") # specify to ignore warning messages
best_aic = np.inf
best_pdq = None
best_seasonal_pdq = None
for param in pdq:
for param_seasonal in seasonal_pdq:
try:
model = sm.tsa.statespace.SARIMAX(ts,
order = param,
seasonal_order = param_seasonal,
enforce_stationarity=False,
enforce_invertibility=False)
results = model.fit()
# print("SARIMAX{}x{}12 - AIC:{}".format(param, param_seasonal, results.aic))
if results.aic < best_aic:
best_aic = results.aic
best_pdq = param
best_seasonal_pdq = param_seasonal
except:
continue
print("Best SARIMAX{}x{}12 model - AIC:{}".format(best_pdq, best_seasonal_pdq, best_aic))
# -
best_model = sm.tsa.statespace.SARIMAX(ts,
order=(0, 1, 1),
seasonal_order=(1, 1, 1, 12),
enforce_stationarity=False,
enforce_invertibility=False)
results = best_model.fit()
print(results.summary().tables[0])
print(results.summary().tables[1])
results.plot_diagnostics(figsize=(15,12))
plt.show()
pred = results.get_prediction(start=pd.to_datetime('2017-09-01'), dynamic=False)
pred_ci = pred.conf_int()
pred_ci.head(5)
plt.close()
axis = ts['2010':].plot(figsize=(10, 6))
pred.predicted_mean.plot(ax=axis, label='One-step ahead Forecast', alpha=0.7)
axis.fill_between(pred_ci.index, pred_ci.iloc[:, 0], pred_ci.iloc[:, 1], color='k', alpha=.25)
axis.set_xlabel('Date')
axis.set_ylabel('CO2 Levels')
plt.legend(loc='best')
plt.show()
# +
ts_forecasted = pred.predicted_mean
ts_truth = ts['2017-09-01':]
# Compute the mean sqaure error
mse = ((ts_forecasted - ts_truth) ** 2).mean()
print('The Mean Squared Error of our forecasts is {}'.format(round(mse, 2)))
# -
pred_dynamic = results.get_prediction(start=pd.to_datetime('2017-09-01'), dynamic=True, full_results=True)
pred_dynami_ci = pred_dynamic.conf_int()
axis = ts['2010':].plot(label='Observed', figsize=(10, 6))
pred_dynamic.predicted_mean.plot(ax=axis, label='Dynamic Forecast', alpha=0.7)
axis.fill_between(pred_dynami_ci.index, pred_dynami_ci.iloc[:, 0], pred_dynami_ci.iloc[:, 1], color='k', alpha=.25)
axis.fill_betweenx(axis.get_ylim(), pd.to_datetime('2017-09-01'), ts.index[-1], alpha=.1, zorder=-1)
axis.set_xlabel('Date')
axis.set_ylabel('CO2 Levels')
plt.legend(loc='best')
plt.show()
plt.close()
# +
ts_forecasted = pred_dynamic.predicted_mean
ts_truth = ts['2017-09-01':]
# Compute the mean square error
mse = ((ts_forecasted - ts_truth) ** 2).mean()
print('The Mean Squared Error of our forecasts is {}'.format(round(mse, 2)))
# +
# Get forecast 500 steps ahead in future
n_steps = 500
pred_uc_99 = results.get_forecast(steps=n_steps, alpha=0.01) # alpha=0.01 signifies 99% confidence interval
pred_uc_95 = results.get_forecast(steps=n_steps, alpha=0.05) # alpha=0.05 95% CI
# Get confidence intervals of forecasts
pred_ci_99 = pred_uc_99.conf_int()
pred_ci_95 = pred_uc_95.conf_int()
# -
n_steps = 500
idx = pd.date_range(ts.index[-1], periods=n_steps, freq='MS')
fc_95 = pd.DataFrame(np.column_stack([pred_uc_95.predicted_mean, pred_ci_95]),
index=idx, columns=['forecast', 'lower_ci_95', 'upper_ci_95'])
fc_99 = pd.DataFrame(np.column_stack([pred_ci_99]),
index=idx, columns=['lower_ci_99', 'upper_ci_99'])
fc_all = fc_95.combine_first(fc_99)
fc_all.head()
plt.close()
axis = ts.plot(label='Observed', figsize=(15, 6))
pred_uc_95.predicted_mean.plot(ax=axis, label='Forecast', alpha=0.7)
axis.fill_between(pred_ci_95.index, pred_ci_95.iloc[:, 0], pred_ci_95.iloc[:, 1], color='k', alpha=.25)
#axis.fill_between(pred_ci_99.index, pred_ci_99.iloc[:, 0], pred_ci_99.iloc[:, 1], color='b', alpha=.25)
axis.set_xlabel('Date')
axis.set_ylabel('CO2 Levels')
plt.legend(loc='best')
plt.show()
fc_all
fc_all.to_csv('fc_all.csv', index=True)
co2_emissions = pd.read_csv("global-co2-fossil-plus-land-use.csv")
co2_emissions
co2_emissions.Year = co2_emissions.Year.astype(str) + "-12-31"
co2_emissions = co2_emissions.set_index("Year")
co2_emissions.index.name = None
co2_emissions = co2_emissions.rename({'Fossil fuel + land use emissions (GtCO2)': 'co2'}, axis=1)
co2_emissions.index = pd.to_datetime(co2_emissions.index)
co2_emissions.drop(['Entity', 'Code', 'Land use emissions (GtCO2)', 'Fossil fuel and industry emissions (GtCO2)'], axis=1, inplace=True)
co2_emissions
ts = co2_emissions.squeeze()
ts.isnull().sum()
plt.close()
ts.plot(figsize=(10, 6))
plt.show()
ts = ts[ts.index.year > 1920]
plt.close()
ts.plot(figsize=(10, 6))
plt.show()
decomposition = sm.tsa.seasonal_decompose(ts, model='additive')
from pylab import rcParams
#rcParams['figure.figsize'] = 12, 10
fig = decomposition.plot()
fig.set_figwidth(12)
fig.set_figheight(8)
plt.show()
# +
# Define the p, d and q parameters to take any value between 0 and 2
p = d = q = range(0, 2)
# Generate all different combinations of p, d and q triplets
pdq = list(itertools.product(p, d, q))
# Generate all different combinations of seasonal p, q and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
print('Examples of parameter combinations for Seasonal ARIMA...')
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[1]))
print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[2]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[3]))
print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[4]))
# +
warnings.filterwarnings("ignore") # specify to ignore warning messages
best_aic = np.inf
best_pdq = None
best_seasonal_pdq = None
for param in pdq:
for param_seasonal in seasonal_pdq:
try:
model = sm.tsa.statespace.SARIMAX(ts,
order = param,
seasonal_order = param_seasonal,
enforce_stationarity=False,
enforce_invertibility=False)
results = model.fit()
# print("SARIMAX{}x{}12 - AIC:{}".format(param, param_seasonal, results.aic))
if results.aic < best_aic:
best_aic = results.aic
best_pdq = param
best_seasonal_pdq = param_seasonal
except:
continue
print("Best SARIMAX{}x{}12 model - AIC:{}".format(best_pdq, best_seasonal_pdq, best_aic))
# -
best_model = sm.tsa.statespace.SARIMAX(ts,
order=(0, 1, 1),
seasonal_order=(1, 1, 1, 12),
enforce_stationarity=False,
enforce_invertibility=False)
results = best_model.fit()
print(results.summary().tables[0])
print(results.summary().tables[1])
results.plot_diagnostics(figsize=(15,12))
plt.show()
pred = results.get_prediction(start=pd.to_datetime('2000-12-31'), dynamic=False)
pred_ci = pred.conf_int()
pred_ci.head(5)
plt.close()
axis = ts['1960':].plot(figsize=(10, 6))
pred.predicted_mean.plot(ax=axis, label='One-step ahead Forecast', alpha=0.7)
axis.fill_between(pred_ci.index, pred_ci.iloc[:, 0], pred_ci.iloc[:, 1], color='k', alpha=.25)
axis.set_xlabel('Date')
axis.set_ylabel('CO2 Levels')
plt.legend(loc='best')
plt.show()
pred_dynamic = results.get_prediction(start=pd.to_datetime('2000-12-31'), dynamic=True, full_results=True)
pred_dynami_ci = pred_dynamic.conf_int()
axis = ts['1960':].plot(label='Observed', figsize=(10, 6))
pred_dynamic.predicted_mean.plot(ax=axis, label='Dynamic Forecast', alpha=0.7)
axis.fill_between(pred_dynami_ci.index, pred_dynami_ci.iloc[:, 0], pred_dynami_ci.iloc[:, 1], color='k', alpha=.25)
axis.fill_betweenx(axis.get_ylim(), pd.to_datetime('2000-12-31'), ts.index[-1], alpha=.1, zorder=-1)
axis.set_xlabel('Date')
axis.set_ylabel('CO2 Levels')
plt.legend(loc='best')
plt.show()
plt.close()
# +
# Get forecast 40 steps ahead in future
n_steps = 45
pred_uc_99 = results.get_forecast(steps=n_steps, alpha=0.01) # alpha=0.01 signifies 99% confidence interval
pred_uc_95 = results.get_forecast(steps=n_steps, alpha=0.05) # alpha=0.05 95% CI
# Get confidence intervals of forecasts
pred_ci_99 = pred_uc_99.conf_int()
pred_ci_95 = pred_uc_95.conf_int()
idx = pd.date_range(ts.index[-1], periods=n_steps, freq='AS')
fc_95 = pd.DataFrame(np.column_stack([pred_uc_95.predicted_mean, pred_ci_95]),
index=idx, columns=['forecast', 'lower_ci_95', 'upper_ci_95'])
fc_99 = pd.DataFrame(np.column_stack([pred_ci_99]),
index=idx, columns=['lower_ci_99', 'upper_ci_99'])
fc_all = fc_95.combine_first(fc_99)
fc_all.tail()
# -
plt.close()
axis = ts.plot(label='Observed', figsize=(15, 6))
pred_uc_95.predicted_mean.plot(ax=axis, label='Forecast', alpha=0.7)
axis.fill_between(pred_ci_95.index, pred_ci_95.iloc[:, 0], pred_ci_95.iloc[:, 1], color='k', alpha=.25)
#axis.fill_between(pred_ci_99.index, pred_ci_99.iloc[:, 0], pred_ci_99.iloc[:, 1], color='b', alpha=.25)
axis.set_xlabel('Date')
axis.set_ylabel('CO2 Levels')
plt.legend(loc='best')
plt.show()
fc_all.to_csv('fc_all2.csv', index=True)
| notebook1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python [conda env:root]
# language: python
# name: conda-root-py
# ---
# %matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
import datetime as dt
# # Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
from sqlalchemy import create_engine, inspect
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect = True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement= Base.classes.measurement
Station= Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# # Exploratory Climate Analysis
# +
#Get a list of column name and types
inspector = inspect(engine)
measurement_columns = inspector.get_columns('measurement')
print("Measurement")
for columns in measurement_columns:
print(columns['name'], columns["type"])
station_columns = inspector.get_columns('station')
print("\nStations")
for columns in station_columns:
print(columns['name'], columns["type"])
# -
session.query(func.count(Measurement.date)).all()
early = session.query(Measurement.date).order_by(Measurement.date).first()
latest = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
print(f"Early: {early[0]} , Latest: {latest[0]}")
# +
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
# Perform a query to retrieve the data and precipitation scores
# Save the query results as a Pandas DataFrame and set the index to the date column
# Sort the dataframe by date
latestdate = dt.datetime.strptime(latest[0], '%Y-%m-%d')
querydate = dt.date(latestdate.year -1, latestdate.month, latestdate.day)
querydate
prec_db = [Measurement.date,Measurement.prcp]
queryresult = session.query(*prec_db).filter(Measurement.date >= querydate).all()
preci = pd.DataFrame(queryresult, columns=['Date','Precipitation'])
preci = preci.dropna(how='any')
preci = preci.sort_values(["Date"], ascending=True)
preci = preci.set_index("Date")
preci
# -
# Use Pandas Plotting with Matplotlib to plot the data
ax = preci.plot(rot=20);
ax.set_title("Precipitation Per Day Over the Past Year");
ax.set_ylabel("Precipitation Level");
# Use Pandas to calcualte the summary statistics for the precipitation data
preci.describe()
# Design a query to show how many stations are available in this dataset?
session.query(Station.id).count()
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
prec_db = [Measurement.station,func.count(Measurement.id)]
active_stat = session.query(*prec_db).\
group_by(Measurement.station).\
order_by(func.count(Measurement.id).desc()).all()
active_stat
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
prec_db = [func.min(Measurement.tobs),func.max(Measurement.tobs),func.avg(Measurement.tobs)]
mas= session.query(*prec_db).\
group_by(Measurement.station).\
order_by(func.count(Measurement.id).desc()).first()
mas
# +
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
queryresult = session.query(Measurement.date,Measurement.tobs).\
filter(Measurement.station == active_stat[0][0]).\
filter(Measurement.date >= querydate).all()
temperatures = list(np.ravel(queryresult))
temperatures[:5]
# -
prec_db = [Station.station,Station.name,Station.latitude,Station.longitude,Station.elevation]
queryresult = session.query(*prec_db).all()
stations_desc = pd.DataFrame(queryresult, columns=['Station','Name','Latitude','Longitude','Elevation'])
stations_desc.head()
stationname = stations_desc.loc[stations_desc["Station"] == active_stat[0][0],"Name"].tolist()[0]
stationname
# +
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
queryresult = session.query(Measurement.tobs).\
filter(Measurement.station == active_stat[0][0]).\
filter(Measurement.date >= querydate).all()
temperatures = list(np.ravel(queryresult))
stationname = stations_desc.loc[stations_desc["Station"] == active_stat[0][0],"Name"].tolist()[0]
plt.hist(temperatures, bins=12,rwidth=3.0,label='tobs')
plt.grid(axis='both', alpha=0.75)
plt.ylabel('Frequency')
plt.legend()
# -
| climate_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # miRNA-Seq on unique table
#
# This script is intended to create a single miRNA table, with the counts' results from PNRD (mature and precursors) and miRBase (mature and hairpins).
# %pylab inline
#import matplotlib_venn
import pandas
#import scipy
# ## Load raw data files
# In this segment, raw data files will be loaded, prepared to be merged and merged in the end.
#
# Preparation steps include:
# 1. renaming the columns to normalise the raw data
# 2. drop irrelevant columns (accession and non normalised counts)
# 3. rename columns to include original database
# 4. lowercase miRNA to be used as index
# 5. set miRNA name as index
#
# After merge, the data frame was sorted by index.
# +
# Loading
pnrd_mature = pandas.read_csv("pmrd_mature_counts.tsv", sep = "\t", header = 0)
pnrd_percursors = pandas.read_csv("pmrd_premirs_counts.tsv", sep = "\t", header = 0)
mirbase_mature = pandas.read_csv("mirbase_mature_counts.tsv", sep = "\t", header = 0)
mirbase_precursors = pandas.read_csv("mirbase_hairpins_counts.tsv", sep = "\t", header = 0)
# Preparation
columns_names = ["miRNA", "accession",
"FB", "FEF", "FH",
"MB", "MEF", "MH",
"TNB", "TNEF", "TNH",
"FB_norm", "FEF_norm", "FH_norm",
"MB_norm", "MEF_norm", "MH_norm",
"TNB_norm", "TNEF_norm", "TNH_norm"]
pnrd_mature.columns = columns_names
pnrd_percursors.columns = columns_names
mirbase_mature.columns = columns_names
mirbase_precursors.columns = columns_names
columns_to_drop = ["accession", "FB", "FEF", "FH", "MB", "MEF", "MH", "TNB", "TNEF", "TNH"]
pnrd_mature = pnrd_mature.drop(labels = columns_to_drop, axis = 1)
pnrd_percursors = pnrd_percursors.drop(labels = columns_to_drop, axis = 1)
mirbase_mature = mirbase_mature.drop(labels = columns_to_drop, axis = 1)
mirbase_precursors = mirbase_precursors.drop(labels = columns_to_drop, axis = 1)
pnrd_mature.columns = ["miRNA",
"FB_pnrd_m", "FEF_pnrd_m", "FH_pnrd_m",
"MB_pnrd_m", "MEF_pnrd_m", "MH_pnrd_m",
"TNB_pnrd_m", "TNEF_pnrd_m", "TNH_pnrd_m"]
pnrd_percursors.columns = ["miRNA",
"FB_pnrd_p", "FEF_pnrd_p", "FH_pnrd_p",
"MB_pnrd_p", "MEF_pnrd_p", "MH_pnrd_p",
"TNB_pnrd_p", "TNEF_pnrd_p", "TNH_pnrd_p"]
mirbase_mature.columns = ["miRNA",
"FB_mirbase_m", "FEF_mirbase_m", "FH_mirbase_m",
"MB_mirbase_m", "MEF_mirbase_m", "MH_mirbase_m",
"TNB_mirbase_m", "TNEF_mirbase_m", "TNH_mirbase_m"]
mirbase_precursors.columns = ["miRNA",
"FB_mirbase_p", "FEF_mirbase_p", "FH_mirbase_p",
"MB_mirbase_p", "MEF_mirbase_p", "MH_mirbase_p",
"TNB_mirbase_p", "TNEF_mirbase_p", "TNH_mirbase_p"]
pnrd_mature["miRNA"] = pnrd_mature["miRNA"].str.lower()
pnrd_percursors["miRNA"] = pnrd_percursors["miRNA"].str.lower()
mirbase_mature["miRNA"] = mirbase_mature["miRNA"].str.lower()
mirbase_precursors["miRNA"] = mirbase_precursors["miRNA"].str.lower()
pnrd_mature = pnrd_mature.set_index("miRNA")
pnrd_percursors = pnrd_percursors.set_index("miRNA")
mirbase_mature = mirbase_mature.set_index("miRNA")
mirbase_precursors = mirbase_precursors.set_index("miRNA")
# Merge
mirnas_all = pandas.concat([pnrd_mature, pnrd_percursors, mirbase_mature, mirbase_precursors], axis = 1, sort = False)
mirnas_all = mirnas_all.sort_index()
# -
mirnas_all.shape
# Obtained table has 233 lines, and 36 columns (9 columns x 4 databases).
# ## Add a column to indicate if miRNA seems to be relevant
# +
# Create the relevant column
mirnas_all["relevant"] = ""
# List relevant miRNAs (this data comes from the separate files)
# The counts lists were updated @ 2019.02.01
pnrd_mature_counts = ['vvi-miR156e', 'vvi-miR160c', 'vvi-miR160d', 'vvi-miR160e', 'vvi-miR167d', 'vvi-miR167e', 'vvi-miR171a', 'vvi-miR171c', 'vvi-miR171d', 'vvi-miR171i', 'vvi-miR172d', 'vvi-miR2950', 'vvi-miR3624', 'vvi-miR3624*', 'vvi-miR3625', 'vvi-miR3625*', 'vvi-miR3626', 'vvi-miR3626*', 'vvi-miR3630*', 'vvi-miR3632', 'vvi-miR3632*', 'vvi-miR3633b*', 'vvi-miR3635', 'vvi-miR3635*', 'vvi-miR394a', 'vvi-miR394b', 'vvi-miR394c', 'vvi-miR395a', 'vvi-miR395b', 'vvi-miR395c', 'vvi-miR395d', 'vvi-miR395e', 'vvi-miR395f', 'vvi-miR395g', 'vvi-miR395h', 'vvi-miR395i', 'vvi-miR395j', 'vvi-miR395k', 'vvi-miR395l', 'vvi-miR395m', 'vvi-miR396b', 'vvi-miR397a', 'vvi-miR399a', 'vvi-miR399c', 'vvi-miR399h', 'vvi-miR535b', 'vvi-miR535c']
pnrd_mature_de = ['vvi-miR156e', 'vvi-miR156f', 'vvi-miR156g', 'vvi-miR156i', 'vvi-miR159a', 'vvi-miR159b', 'vvi-miR160c', 'vvi-miR160d', 'vvi-miR160e', 'vvi-miR164d', 'vvi-miR167a', 'vvi-miR167c', 'vvi-miR167e', 'vvi-miR169a', 'vvi-miR169c', 'vvi-miR169e', 'vvi-miR169k', 'vvi-miR169x', 'vvi-miR171c', 'vvi-miR171g', 'vvi-miR171i', 'vvi-miR172c', 'vvi-miR172d', 'vvi-miR2111*', 'vvi-miR2950', 'vvi-miR3623*', 'vvi-miR3624', 'vvi-miR3624*', 'vvi-miR3625', 'vvi-miR3625*', 'vvi-miR3626', 'vvi-miR3626*', 'vvi-miR3627', 'vvi-miR3627*', 'vvi-miR3629a*', 'vvi-miR3629c', 'vvi-miR3630*', 'vvi-miR3631b*', 'vvi-miR3632', 'vvi-miR3632*', 'vvi-miR3633b*', 'vvi-miR3634', 'vvi-miR3634*', 'vvi-miR3635', 'vvi-miR3635*', 'vvi-miR3637', 'vvi-miR3640*', 'vvi-miR393a', 'vvi-miR393b', 'vvi-miR394b', 'vvi-miR395a', 'vvi-miR395b', 'vvi-miR395c', 'vvi-miR395d', 'vvi-miR395e', 'vvi-miR395f', 'vvi-miR395g', 'vvi-miR395h', 'vvi-miR395i', 'vvi-miR395j', 'vvi-miR395k', 'vvi-miR395l', 'vvi-miR395m', 'vvi-miR396a', 'vvi-miR396b', 'vvi-miR396c', 'vvi-miR396d', 'vvi-miR397a', 'vvi-miR398a', 'vvi-miR399a', 'vvi-miR399b', 'vvi-miR399c', 'vvi-miR399h', 'vvi-miR479', 'vvi-miR482', 'vvi-miR535b', 'vvi-miR535c']
pnrd_precursors_counts = ['vvi-MIR156e', 'vvi-MIR160c', 'vvi-MIR160d', 'vvi-MIR160e', 'vvi-MIR167d', 'vvi-MIR169c', 'vvi-MIR169g', 'vvi-MIR169j', 'vvi-MIR169k', 'vvi-MIR169s', 'vvi-MIR169u', 'vvi-MIR171e', 'vvi-MIR172a', 'vvi-MIR172b', 'vvi-MIR2111', 'vvi-MIR319e', 'vvi-MIR3629b', 'vvi-MIR3629c', 'vvi-MIR3631c', 'vvi-MIR393a', 'vvi-MIR394b', 'vvi-MIR394c', 'vvi-MIR395a', 'vvi-MIR395b', 'vvi-MIR395c', 'vvi-MIR395d', 'vvi-MIR395e', 'vvi-MIR395f', 'vvi-MIR395g', 'vvi-MIR395h', 'vvi-MIR395i', 'vvi-MIR395j', 'vvi-MIR395k', 'vvi-MIR395l', 'vvi-MIR395m', 'vvi-MIR395n', 'vvi-MIR398a', 'vvi-MIR399a', 'vvi-MIR399b', 'vvi-MIR399c', 'vvi-MIR399h', 'vvi-MIR477', 'vvi-MIR845e']
pnrd_precursors_de = ['vvi-MIR156e', 'vvi-MIR156f', 'vvi-MIR156g', 'vvi-MIR156i', 'vvi-MIR160c', 'vvi-MIR160d', 'vvi-MIR167d', 'vvi-MIR169a', 'vvi-MIR169c', 'vvi-MIR169e', 'vvi-MIR169g', 'vvi-MIR169j', 'vvi-MIR169k', 'vvi-MIR169n', 'vvi-MIR169s', 'vvi-MIR169u', 'vvi-MIR171e', 'vvi-MIR172a', 'vvi-MIR172b', 'vvi-MIR2111', 'vvi-MIR3624', 'vvi-MIR3627', 'vvi-MIR3631c', 'vvi-MIR3634', 'vvi-MIR393a', 'vvi-MIR395a', 'vvi-MIR395b', 'vvi-MIR395c', 'vvi-MIR395d', 'vvi-MIR395e', 'vvi-MIR395f', 'vvi-MIR395g', 'vvi-MIR395h', 'vvi-MIR395i', 'vvi-MIR395j', 'vvi-MIR395k', 'vvi-MIR395l', 'vvi-MIR395m', 'vvi-MIR395n', 'vvi-MIR396b', 'vvi-MIR396c', 'vvi-MIR396d', 'vvi-MIR398a', 'vvi-MIR399a', 'vvi-MIR399b', 'vvi-MIR399c', 'vvi-MIR399h', 'vvi-MIR477', 'vvi-MIR845c', 'vvi-MIR845d', 'vvi-MIR845e']
mirbase_mature_counts = ['vvi-miR156e', 'vvi-miR167b', 'vvi-miR169a', 'vvi-miR172d', 'vvi-miR2950-5p', 'vvi-miR3625-5p', 'vvi-miR3626-3p', 'vvi-miR3626-5p', 'vvi-miR3630-3p', 'vvi-miR3632-3p', 'vvi-miR3633b-3p', 'vvi-miR393b', 'vvi-miR394a', 'vvi-miR394b', 'vvi-miR395a', 'vvi-miR396b', 'vvi-miR397a', 'vvi-miR399a', 'vvi-miR399b']
mirbase_mature_de = ['vvi-miR156f', 'vvi-miR160c', 'vvi-miR167c', 'vvi-miR169a', 'vvi-miR171g', 'vvi-miR172d', 'vvi-miR319e', 'vvi-miR3623-3p', 'vvi-miR3624-3p', 'vvi-miR3625-5p', 'vvi-miR3626-3p', 'vvi-miR3627-5p', 'vvi-miR3632-5p', 'vvi-miR3633b-3p', 'vvi-miR3634-3p', 'vvi-miR3637-3p', 'vvi-miR3640-5p', 'vvi-miR395a', 'vvi-miR396a', 'vvi-miR396b', 'vvi-miR396d', 'vvi-miR398a', 'vvi-miR399a', 'vvi-miR399b']
mirbase_precursors_counts = ['vvi-MIR156e', 'vvi-MIR160c', 'vvi-MIR160d', 'vvi-MIR160e', 'vvi-MIR167d', 'vvi-MIR169c', 'vvi-MIR169g', 'vvi-MIR169j', 'vvi-MIR169k', 'vvi-MIR169n', 'vvi-MIR169s', 'vvi-MIR169u', 'vvi-MIR171e', 'vvi-MIR171f', 'vvi-MIR172a', 'vvi-MIR172b', 'vvi-MIR2111', 'vvi-MIR3629a', 'vvi-MIR3629b', 'vvi-MIR3631c', 'vvi-MIR393a', 'vvi-MIR394a', 'vvi-MIR394b', 'vvi-MIR395a', 'vvi-MIR395b', 'vvi-MIR395c', 'vvi-MIR395d', 'vvi-MIR395e', 'vvi-MIR395f', 'vvi-MIR395g', 'vvi-MIR395h', 'vvi-MIR395i', 'vvi-MIR395j', 'vvi-MIR395k', 'vvi-MIR395l', 'vvi-MIR395m', 'vvi-MIR395n', 'vvi-MIR398a', 'vvi-MIR399a', 'vvi-MIR399b', 'vvi-MIR399c', 'vvi-MIR399h', 'vvi-MIR477a', 'vvi-MIR828a', 'vvi-MIR845e']
mirbase_precursors_de = ['vvi-MIR156e', 'vvi-MIR156f', 'vvi-MIR156g', 'vvi-MIR156i', 'vvi-MIR160c', 'vvi-MIR167d', 'vvi-MIR169c', 'vvi-MIR169e', 'vvi-MIR169g', 'vvi-MIR169j', 'vvi-MIR169k', 'vvi-MIR169n', 'vvi-MIR169p', 'vvi-MIR169s', 'vvi-MIR169u', 'vvi-MIR171e', 'vvi-MIR171f', 'vvi-MIR172a', 'vvi-MIR172d', 'vvi-MIR2111', 'vvi-MIR3624', 'vvi-MIR3627', 'vvi-MIR3629b', 'vvi-MIR3634', 'vvi-MIR393a', 'vvi-MIR394a', 'vvi-MIR395a', 'vvi-MIR395b', 'vvi-MIR395c', 'vvi-MIR395d', 'vvi-MIR395e', 'vvi-MIR395f', 'vvi-MIR395g', 'vvi-MIR395h', 'vvi-MIR395i', 'vvi-MIR395j', 'vvi-MIR395k', 'vvi-MIR395l', 'vvi-MIR395m', 'vvi-MIR395n', 'vvi-MIR396b', 'vvi-MIR396c', 'vvi-MIR396d', 'vvi-MIR398a', 'vvi-MIR399a', 'vvi-MIR399b', 'vvi-MIR399h', 'vvi-MIR477a', 'vvi-MIR845c', 'vvi-MIR845d', 'vvi-MIR845e']
# Normalize all to lower case
pnrd_mature_counts = [x.lower() for x in pnrd_mature_counts]
pnrd_mature_de = [x.lower() for x in pnrd_mature_de]
pnrd_precursors_counts = [x.lower() for x in pnrd_precursors_counts]
pnrd_precursors_de = [x.lower() for x in pnrd_precursors_de]
mirbase_mature_counts = [x.lower() for x in mirbase_mature_counts]
mirbase_mature_de = [x.lower() for x in mirbase_mature_de]
mirbase_precursors_counts = [x.lower() for x in mirbase_precursors_counts]
mirbase_precursors_de = [x.lower() for x in mirbase_precursors_de]
# Add information to the column
for miRNA in mirnas_all.index:
reference_value = list()
if miRNA in pnrd_mature_counts:
reference_value.append("PNRD_M_C")
if miRNA in pnrd_mature_de:
reference_value.append("PNRD_M_DE")
if miRNA in pnrd_precursors_counts:
reference_value.append("PNRD_P_C")
if miRNA in pnrd_precursors_de:
reference_value.append("PNRD_P_DE")
if miRNA in mirbase_mature_counts:
reference_value.append("miRBase_M_C")
if miRNA in mirbase_mature_de:
reference_value.append("miRBase_M_DE")
if miRNA in mirbase_precursors_counts:
reference_value.append("miRBase_P_C")
if miRNA in mirbase_precursors_de:
reference_value.append("miRBase_P_DE")
mirnas_all["relevant"].loc[miRNA] = "; ".join(reference_value)
# -
# ## Output final data.frame results
#
# Obtained data frame is going to do exported to a tab separated values file.
mirnas_all.to_csv("all_miRNAs.csv", sep = "\t")
mirnas_all
| jupyter_notebooks/merge_results_into_table.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import Adagrad, Adam, RMSprop
from keras.objectives import mean_squared_error
from keras.regularizers import l2
import seaborn as snb
from utils.GraphUtil import *
from utils.SlidingWindowUtil import SlidingWindow
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import MinMaxScaler
dat = pd.read_csv('sampling_617685_metric_10min_datetime.csv', index_col=0, parse_dates=True)
n_sliding_window = 4
scaler = MinMaxScaler()
scale_dat = scaler.fit_transform(dat.cpu_rate)
dat_sliding = np.array(list(SlidingWindow(scale_dat, n_sliding_window)))
X_train_size = int(len(dat_sliding)*0.7)
# sliding = np.array(list(SlidingWindow(dat_sliding, n_sliding_window)))
# sliding = np.array(dat_sliding, dtype=np.int32)
X_train = dat_sliding[:X_train_size]
y_train = scale_dat[n_sliding_window:X_train_size+n_sliding_window].reshape(-1,1)
X_test = dat_sliding[X_train_size:]
y_test = scale_dat[X_train_size+n_sliding_window-1:].reshape(-1,1)
model = Sequential([
Dense(n_sliding_window+2, input_dim=n_sliding_window, activation='relu',init='uniform'),
Dense(1)
])
np.random.seed(7)
optimizer = Adagrad()
model.compile(loss='mean_squared_error', optimizer=optimizer)
history = model.fit(X_train,y_train, nb_epoch=5000,batch_size=100, validation_split=0.7, verbose=0, shuffle=True)
log = history.history
df = pd.DataFrame.from_dict(log)
# %matplotlib
df.plot(kind='line')
y_pred = model.predict(X_test)
from sklearn.metrics import mean_absolute_error
mean_absolute_error(y_pred,y_test)
# %matplotlib
plot_figure(y_pred=y_pred, y_true=y_test)
# # LSTM neural network
from keras.layers import LSTM
batch_size = 10
time_steps = 1
Xtrain = np.reshape(X_train, (X_train.shape[0], time_steps, n_sliding_window))
ytrain = np.reshape(y_train, (y_train.shape[0], time_steps, y_train.shape[1]))
Xtest = np.reshape(X_test, (X_test.shape[0], time_steps, n_sliding_window))
model = Sequential()
model.add(LSTM(6,batch_input_shape=(batch_size,time_steps,n_sliding_window),stateful=True,activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adagrad')
history = model.fit(Xtrain[:2900],y_train[:2900], nb_epoch=6000,batch_size=batch_size,shuffle=False, verbose=0,
validation_data=(Xtest[:len_test],y_test[:len_test]))
log = history.history
df = pd.DataFrame.from_dict(log)
# %matplotlib
df.plot(kind='line')
len_test = 1200
y_pred = model.predict(Xtest[:len_test],batch_size=batch_size)
# mean_absolute_error(y_pred,y_test[:len_test])
y_pred
# %matplotlib
plot_figure(y_pred=scaler.inverse_transform(y_pred), y_true=scaler.inverse_transform(y_test))
results = []
results.append({'score':mean_absolute_error(y_pred,y_test[:len_test]),'y_pred':y_pred})
pd.DataFrame.from_dict(results).to_csv("lstm_result.csv",index=None)
| 6_google_trace/VMFuzzyPrediction/KerasTutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Simple Regressions
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
data = pd.read_csv("bigPoppa.csv")
data
# # Graphing all well locations
sns.scatterplot(data["easting"], data["northing"])
plt.ylabel("Northing")
plt.xlabel("Easting")
plt.title("Location of Horizontal wells, starting position")
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gradient descent algorithm for Scenario 2
#
#
# In this part, we implement an gradient descent algorithm to optimization the objective loss function in Scenario 2:
#
#
# $$\min F := \min \frac{1}{2(n-i)} \sum_{i=1000}^n (fbpredic(i) + a*tby(i) +b*ffr(i) + c*fta(i) - asp(i))^2$$
#
# Gradient descent:
#
# $$ \beta_k = \beta_{k-1} + \delta* \nabla F, $$
# where $\delta$ control how far does each iteration go.
#
#
# ### Detailed plan
#
# First, split the data as train and test with 80% and 20% respectively. For the training part, we need prophet() predicted price, there are a couple of issues. One is prophet() can not predict too far in the future. The other is we can not call prophet() too many times, this takes a lot of time. So we will use a sliding window strategy:
#
# 1, Split the train data as train_1 and train_2, where train_1 is used as a sliding window to fit prophet(), and give predictions in train_2. Train_2 is used train the model we proposed above.
#
# 2, After we got full size (size of train_2) predictions from prophet(), then we use gradient descent to fit the above model, extracting the coefficients of features to make predicution in the testing data.
#
# +
import pandas as pd
import numpy as np
## For plotting
import matplotlib.pyplot as plt
from matplotlib import style
import datetime as dt
import seaborn as sns
from datetime import datetime
sns.set_style("whitegrid")
# -
df= pd.read_csv('dff5.csv')
df = df.rename(columns = {"Date":"ds","Close":"y"})
df = df[['ds', 'y', 'fbsp','diff', 'tby', 'ffr', 'fta', 'eps', 'div', 'per', 'une', 'rus',
'wti', 'ppi', 'rfs', 'vix']]
df
dfnl = df[['ds', 'y', 'fbsp','tby', 'ffr', 'fta', 'eps', 'div', 'per', 'une', 'rus',
'wti', 'ppi', 'rfs']]
# adding nonlinear terms
dfnl['fbsp_tby'] = dfnl['fbsp'] * dfnl['tby']
dfnl['fbsp_ffr'] = dfnl['fbsp'] * dfnl['ffr']
dfnl['fbsp_div'] = dfnl['fbsp'] * dfnl['div']
dfnl['eps_tby'] = dfnl['eps'] * dfnl['tby']
dfnl['eps_ffr'] = dfnl['eps'] * dfnl['ffr']
dfnl['eps_div'] = dfnl['eps'] * dfnl['div']
dfnl.to_csv('dfnl.csv')
dfnl
dfnlf = dfnl[['ds', 'y', 'fbsp','tby', 'ffr', 'fta', 'eps', 'div', 'per', 'une', 'rus',
'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div']]
dfnlf.to_csv('dfnlf.csv')
dfnlf = pd.read_csv('dfnlf.csv', parse_dates = True, index_col = 0)
# +
p = 0.9
# Train around 90% of dataset
cutoff = int((p*len(df)//100)*100)
df_train = df[:cutoff].copy()
df_test = df[cutoff:].copy()
dfnlf_train = dfnlf[:cutoff].copy()
dfnlf_test = dfnlf[cutoff:].copy()
# -
df.columns
dfnlf.columns
# +
#possible_features = ['tby', 'ffr', 'fta', 'eps', 'div', 'per',
# 'une', 'rus', 'wti', 'ppi', 'rfs', 'vix']
# -
possible_features = ['tby', 'ffr', 'fta', 'eps', 'div', 'une', 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div',
'eps_tby', 'eps_ffr', 'eps_div']
from sklearn.linear_model import LinearRegression
reg = LinearRegression(fit_intercept=False, normalize=True, copy_X = True)
reg.fit(dfnlf_train[possible_features], dfnlf_train['y'] - dfnlf_train.fbsp)
# +
coef = []
for i in range(len(possible_features)):
coef.append(np.round(reg.coef_[i],5))
print(coef)
# -
pp_test = dfnlf_test.fbsp.copy() # predicted price on testing data
pp_train = dfnlf_train.fbsp.copy() # predicted price on training data
dfnlf_test1 = dfnlf_test[possible_features].copy()
dfnlf_train1 = dfnlf_train[possible_features].copy()
for i in range(len(possible_features)):
pp_test += coef[i] * dfnlf_test1[dfnlf_test1.columns[i]].ravel()
pp_train += coef[i] * dfnlf_train1[dfnlf_train1.columns[i]].ravel()
from sklearn.metrics import mean_squared_error as MSE
# MSE for test data
# Actual close price: df_test[:test_time].y
# Predicted price by prophet: pred_test
# Predicted price by tuning
mse1 = MSE(dfnlf_test.y, dfnlf_test.fbsp) #
mse2 = MSE(dfnlf_test.y, pp_test)
print(mse1,mse2)
# MSE for train data
mse3 = MSE(dfnlf_train.y, dfnlf_train.fbsp)
mse4 = MSE(dfnlf_train.y, pp_train)
print(mse3,mse4)
# +
plt.figure(figsize=(18,10))
# plot the training data
plt.plot(dfnlf_train.y,'b',
label = "Training Data")
plt.plot(pp_train,'g-',
label = "Improved Fitted Values")
# plot the fit
plt.plot(dfnlf_train.fbsp,'r-',
label = "FB Fitted Values")
# # plot the forecast
plt.plot(dfnlf_test.fbsp,'r--',
label = "FB Forecast")
plt.plot(pp_test,'g--',
label = "Improved Forecast")
plt.plot(dfnlf_test.y,'b--',
label = "Test Data")
plt.legend(fontsize=14)
plt.xlabel("Date", fontsize=16)
plt.ylabel("SP&500 Close Price", fontsize=16)
plt.show()
# +
plt.figure(figsize=(18,10))
# plot the training data
plt.plot(dfnlf_train.ds,dfnlf_train.y,'b',
label = "Training Data")
plt.plot(dfnlf_train.ds, pp_train,'g-',
label = "Improved Fitted Values")
# plot the fit
plt.plot(dfnlf_train.ds, dfnlf_train.fbsp,'r-',
label = "FB Fitted Values")
# # plot the forecast
plt.plot(dfnlf_test.ds, dfnlf_test.fbsp,'r--',
label = "FB Forecast")
plt.plot(dfnlf_test.ds, pp_test,'g--',
label = "Improved Forecast")
plt.plot(dfnlf_test.ds,dfnlf_test.y,'b--',
label = "Test Data")
plt.legend(fontsize=14)
plt.xlabel("Date", fontsize=16)
plt.ylabel("SP&500 Close Price", fontsize=16)
plt.show()
# -
| scratch work/Nonlinear Modifications by Hao/Scenario2-Copy2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="46xlVf1NzAl1"
# # Tensorflow multioutput model
#
# + id="-CGCVnvTn6ZL"
#@title ##**Prepare environment** { display-mode: "form" }
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print("Environment prepared for work")
# + id="51t4n-gUn8_F"
#@title ##**Prepare dataset** { display-mode: "form" }
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train.reshape(len(x_train), 28, 28, 1)
x_test = x_test.reshape(len(x_test), 28, 28, 1)
y_train_b = [0 if i % 2 else 1 for i in y_train]
y_test_b = [0 if i % 2 else 1 for i in y_test]
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
# integer encode
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(y_train_b)
# binary encode
onehot_encoder = OneHotEncoder(sparse=False)
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
y_train_b = onehot_encoder.fit_transform(integer_encoded)
print(f'Dataset successfully prepared for work\n'\
f'Dataset contains: {len(x_train) + len(x_test)} images')
# + id="9VJ-hShEn9Bd"
#@title ##**Show dataset examples** { display-mode: "form" }
plt.figure(figsize=(10, 10))
for i in range(25):
plt.subplot(5, 5, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(x_train[i].reshape(28, 28), cmap=plt.cm.binary)
label = str(y_train[i])
val = "odd" if (y_train_b[i][0] == 1) else "even"
label += ", " + val
plt.xlabel(label)
plt.show()
# + [markdown] id="zz7T7poNskdt"
# # Make model
#
# + id="efjSA5xmn9EE"
main_input = tf.keras.Input(shape=(28, 28, 1), name='main_input')
x = tf.keras.layers.Convolution2D(32, (3, 3), input_shape = (28, 28, 1), activation='relu')(main_input)
x = tf.keras.layers.ZeroPadding2D((1, 1))(x)
x = tf.keras.layers.Convolution2D(64, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2), strides=(2, 2))(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
x = tf.keras.layers.Flatten()(x)
numeric_out = tf.keras.layers.Dense(10, activation='softmax', name='numeric_out')(x)
binary_out = tf.keras.layers.Dense(2, activation='sigmoid', name='binary_out')(x)
model = tf.keras.Model(inputs=main_input, outputs=[numeric_out, binary_out])
model.compile(optimizer='rmsprop', loss=['sparse_categorical_crossentropy', 'binary_crossentropy'], metrics=['accuracy'])
model.summary()
# + id="s8OXwo0Nw48t"
#@title ##**Plot model (optional)** { display-mode: "form" }
from keras.utils.vis_utils import plot_model
plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
# + id="86fKLuVun9Q8"
#@title ##**Train model** { display-mode: "form" }
ep = model.fit(x_train, { "numeric_out": y_train, "binary_out": y_train_b}, epochs=1)
# + id="ceSMVZGTn9Tk"
#@title ##**Evaluate model (optional)** { display-mode: "form" }
x = model.evaluate(x_train, { "numeric_out": y_train, "binary_out": y_train_b}, verbose=2)
# + [markdown] id="kl3aZkkGtRJ9"
# # Test model
# + id="6qEkhx3zn9WE"
plt.figure(figsize=(10, 10))
for i in range(25):
img = x_train[i]
pred = model.predict(img.reshape(1, 28, 28, 1))
label = str(np.argmax(pred[0]))
val = "even" if (np.argmax(pred[1]) == 1) else "odd"
label += ", " + val
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(img.reshape(28, 28), cmap=plt.cm.binary)
plt.xlabel(label)
plt.show()
| multioutput.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
## Default imports: don't touch these
from scipy.ndimage.interpolation import map_coordinates
from scipy.ndimage.filters import gaussian_filter
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import math
# -
# %matplotlib inline
from mnist import MNIST
def load_dataset(s="data"):
mndata = MNIST('../%s/'%s)
X_train, labels_train = map(np.array, mndata.load_training())
X_test, labels_test = map(np.array, mndata.load_testing())
X_train = X_train/255.0
X_test = X_test/255.0
return (X_train, labels_train), (X_test, labels_test)
(X_train, labels_train), (X_test, labels_test) = load_dataset()
# #Perturbing
#
# The MNIST dataset is fairly "clean"; it's been preprocessed and cropped nicely. However, real-world data is not this clean. To make our classifier more robust, we can apply perturb our images ever so slightly so that the classifier learns to deal with various distortions: rotations, skewing, elastic transformations, and noise.
#
# In this test, we examine the effects of training a classifier with **perturbations**, singleton transformations, applied to the training dataset. To simulate a less-than-ideal dataset, we apply a combination of transformations (i.e., "noise") to the test dataset.
#
# ## Skewing
#
# To *deskew* an image, we consider each set of pixels along an axis. Consider the values to be a distribution, and convert the distribution to standard normal. To **skew** an image, we do the reverse:
#
# 1. Randomly initialize a distribution with an off-center mean and non-zero variance.
# 2. Distort all pixels by this amount.
#
# ## Rotations
#
# To generalize rotations to higher dimensions, we first rotate the centroid for the array. Then, apply the affine transformation $\text{image'} = A(\text{image}) + b$.
#
# ## Elastic Transformation
#
#
plt.imshow(X_train[0].reshape(28,28)) # This is what the image looks like
examples = [54229,17473, 6642, 29232, 38150, 2186, 58443, 15689, 14413, 14662]
# +
from mpl_toolkits.axes_grid1 import AxesGrid
def draw_examples_with_perturbation(examples, f):
"""Draw examples with provided perturbation f
:param examples: list of examples
:param f: transformation function with takes a 28x28 image
and returns a 28x28 image
"""
examples = [(e, n) for n, e in enumerate(examples)]
grid = AxesGrid(plt.figure(figsize=(8,15)), 141, # similar to subplot(141)
nrows_ncols=(len(examples), 2),
axes_pad=0.05,
label_mode="1",
)
for examplenum,num in examples:
image = X_train[examplenum].reshape(28,28)
im = grid[2*num].imshow(image)
im2 = grid[2*num+1].imshow(f(image))
# -
# #Perturbations
# ##Skewing
def skew(image):
"""Skew the image provided.
Taken from StackOverflow:
http://stackoverflow.com/a/33088550/4855984
"""
image = image.reshape(28, 28)
h, l = image.shape
distortion = np.random.normal(loc=12, scale=1)
def mapping(point):
x, y = point
dec = (distortion*(x-h))/h
return x, y+dec+5
return interpolation.geometric_transform(
image, mapping, (h, l), order=5, mode='nearest')
draw_examples_with_perturbation(examples, skew)
# ##Rotation
def rotate(image, d):
"""Rotate the image by d/180 degrees."""
center = 0.5*np.array(image.shape)
rot = np.array([[np.cos(d), np.sin(d)],[-np.sin(d), np.cos(d)]])
offset = (center-center.dot(rot)).dot(np.linalg.inv(rot))
return interpolation.affine_transform(
image,
rot,
order=2,
offset=-offset,
cval=0.0,
output=np.float32)
rotate_cw = lambda image: rotate(image, -(3*np.random.random()/5))
rotate_ccw = lambda image: rotate(image, 3*np.random.random()/5)
draw_examples_with_perturbation(examples, rotate_cw)
draw_examples_with_perturbation(examples, rotate_ccw)
# ##Noise
def noise(image, n=100):
"""Add noise by randomly changing n pixels"""
indices = np.random.random(size=(n, 2))*28
image = image.copy()
for x, y in indices:
x, y = int(x), int(y)
image[x][y] = 0
return image
draw_examples_with_perturbation(examples, noise)
# ## Elastic Transformations
def elastic_transform(image, alpha=36, sigma=5, random_state=None):
"""Elastic deformation of images as described in [Simard2003]_.
.. [Simard2003] <NAME>, "Best Practices for
Convolutional Neural Networks applied to Visual Document Analysis", in
Proc. of the International Conference on Document Analysis and
Recognition, 2003.
:param image: a 28x28 image
:param alpha: scale for filter
:param sigma: the standard deviation for the gaussian
:return: distorted 28x28 image
"""
assert len(image.shape) == 2
if random_state is None:
random_state = np.random.RandomState(None)
shape = image.shape
dx = gaussian_filter((random_state.rand(*shape) * 2 - 1), sigma, mode="constant", cval=0) * alpha
dy = gaussian_filter((random_state.rand(*shape) * 2 - 1), sigma, mode="constant", cval=0) * alpha
x, y = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing='ij')
indices = np.reshape(x+dx, (-1, 1)), np.reshape(y+dy, (-1, 1))
return map_coordinates(image, indices, order=1).reshape(shape)
draw_examples_with_perturbation(examples, elastic_transform)
# ##Results
from sklearn.preprocessing import OneHotEncoder
from sklearn import linear_model
import sklearn.metrics as metrics
# +
def createModel(x,y):
yp = OneHotEncoder()
y = yp.fit_transform(y.reshape(x.shape[0],1)).toarray()
clf = linear_model.Ridge (alpha = 0)
clf.fit(x,y)
return clf
def predict(model,x):
return np.argmax(model.predict(x),axis=1)
# +
def vectorize(f):
def vectorized(X):
X = X.copy()
for i, row in enumerate(X):
X[i] = f(row.reshape(28, 28)).reshape(784)
return X
return vectorized
# perturbations
perturb_rotate_cw = vectorize(rotate_cw)
perturb_rotate_ccw = vectorize(rotate_ccw)
perturb_skew = vectorize(skew)
perturb_noise = vectorize(noise)
perturb_elastic_transform = vectorize(elastic_transform)
perturb_rotate_elastic_transform = vectorize(lambda image: rotate(elastic_transform(image), 15*np.pi/180))
# combinations of perturbations, to simulate a noisy dataset
perturb_noise_rotate = vectorize(lambda image: noise(rotate_cw(image)))
perturb_noise_skew = vectorize(lambda image: noise(skew(image)))
perturb_noise_elastic_transform = vectorize(lambda image: noise(elastic_transform(image)))
# +
from math import ceil
def perturb(X, labels):
"""Perturb the data in place, by applying various combinations of noise."""
size = ceil(X.shape[0]/4)
X_test_perturbed = perturb_skew(X[:size])
X_test_perturbed = np.concatenate([X_test_perturbed, perturb_noise_rotate(X[size:2*size])])
X_test_perturbed = np.concatenate([X_test_perturbed, perturb_noise_skew(X[2*size:3*size])])
X_test_perturbed = np.concatenate([X_test_perturbed, perturb_noise_elastic_transform(X[3*size:4*size])])
indices = list(range(X.shape[0]))
np.random.shuffle(indices)
return X_test_perturbed[indices], labels[indices]
# -
model_unchanged = createModel(X_train, labels_train)
X_test_noisy, labels_test_noisy = perturb(X_test, labels_test)
metrics.accuracy_score(predict(model_unchanged, X_train), labels_train)
metrics.accuracy_score(predict(model_unchanged, X_test_noisy), labels_test_noisy)
# +
def perturb_extend(X, labels, dim=28, n=2500):
"""Duplicate training data, by perturbing each image several ways.
Taken from gist at https://gist.github.com/fmder/e28813c1e8721830ff9c.
Each image will see the following perturbations:
1. rotation clockwise
2. rotation counterclockwise
3. skew
4. noise
5. elastic transformation
"""
num_transformations = 6
print('[Perturb] Preprocessing images...')
indices = np.random.random_integers(low=0, high=X.shape[0]-1, size=n)
X_featurize = X[indices]
labels_featurize = labels[indices]
X_new = np.concatenate([X, perturb_skew(X_featurize)])
X_new = np.concatenate([X_new, perturb_rotate_cw(X_featurize)])
X_new = np.concatenate([X_new, perturb_rotate_ccw(X_featurize)])
X_new = np.concatenate([X_new, perturb_noise(X_featurize)])
X_new = np.concatenate([X_new, perturb_elastic_transform(X_featurize)])
X_new = np.concatenate([X_new, perturb_rotate_elastic_transform(X_featurize)])
print('[Perturb] All samples generated. Shuffling...')
labels_new = np.concatenate([labels] + [labels_featurize]*num_transformations)
print('[Perturb] Preprocessing complete. ({num}x{n} samples)'.format(
num=num_transformations,
n=n
))
return X_new.reshape(X_new.shape[0], dim*dim), labels_new
X_train_perturbed, labels_train_perturbed = perturb_extend(X_train, labels_train)
# -
model_perturbed = createModel(X_train_perturbed, labels_train_perturbed)
metrics.accuracy_score(predict(model_perturbed, X_train_perturbed), labels_train_perturbed)
metrics.accuracy_score(predict(model_perturbed, X_test_noisy), labels_test_noisy)
# # Overall Results
#
# Using L2 Regularized Regression (Ridge Regression), we have
#
#
# ### Baseline
#
# Train Accuracy: 85.01%
# Test Accuracy: 23.62%
#
# ### Train with Perturbations
#
# Train Accuracy: 82.15%
# Test Accuracy: 67.45%
#
# We additionally found that deskewing would overfit the training data with an average training accuracy of 99.6% and test accuracy of 98.6%, with a 1% deficit.
#
# On the other, training with perturbations on top of deskewing would result in an average training accuracy of 99.2% and test accuracy of 98.7%, leaving an average of 0.5% deficit.
#
# ##Conclusion
#
# We see that training with perturbations doubled our test accuracy, when testing against noisier datasets. In other words, the second classifier is far more generalized than the first.
#
# This demonstration is flawed, as we train for the very transformations that we test against. However, this demonstrates that even training on singleton transformations is enough to make a classifier more robust, against combinations of transformations. Thus, we can apply this more broadly by identifying only a few transformations to represent a vast majority of noise.
| notebooks/Perturbing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Вектора
#
# ## Back to school
#
# В геометрии вектор — направленный отрезок прямой, то есть отрезок, для которого указано, какая из его граничных точек является началом, а какая — концом
#
# 
# ## Линейная алгебра
#
# Вектор - математическая структура, которая представляет собой набор элементов, называемых векторами, для которых определены операции сложения друг с другом и умножения на число — скаляр
#
# $x=(2,3,4)$
#
# 
# +
import numpy as np
x = np.array([2,3,4])
print(x)
# -
# Размерность вектора количество "компонент" в этом векторе
#
# К примеру, если мы рассмотрим вектор мутаций в геноме, то его размерность может быть несколько сотен тысяч
#
# $mutations=(0,0,1,0,1,1,0,.....)$
print(x.shape)
# ## Векторное пространство
#
# Векторное пространство - это множество векторов одной размерности, будем обозначать $\mathbb{V}$, которое обладает следующими свойствами:
# * Заданы 2 замкнутые операции: сложение векторов и умножение вектора на скаляр(число)
# * $x+y = y+x,\ \forall x,y \in \mathbb{V}$
# * $x+(y+z)=(x+y)+z,\ \forall x,y,z \in \mathbb{V}$
# * $\exists 0 \in \mathbb{V}: x+0=x,\ \forall x \in \mathbb{V}$
# * $\forall x \in \mathbb{V}\ \exists -x \in \mathbb{V}: x+(-x)=0$
# * $\alpha(\beta x) = (\alpha \beta)x$
# * $1 x = x$
# * $(\alpha + \beta)x = \alpha x + \beta x$
# * $\alpha (x+y)=\alpha x + \alpha y$
#
# В рамках курса будем говорить о евклидовых пространствах: $\mathbb{R}^n$
# ## Сложение векторов
# $a=(a_1, a_2, ..., a_n)$
#
# $b=(b_1, b_2, ..., b_n)$
#
# $a+b=(a_1+b_1, a_2+b_2, ..., a_n+b_n)$
#
# 
# +
x = np.array([2,3,4])
y = np.array([1,1,1])
print(x+y)
# -
# ## Умножение вектора на скаляр
# $a=(a_1,a_2,...,a_n)$
#
# $\beta a=(\beta a_1, \beta a_2, ..., \beta a_3)$
#
# 
x = np.array([2,3,4])
beta = 2
print(beta*x)
# Интересный вывод: если вы возьмете выборку из m векторов, то можно найти средний вектор:
#
# $\bar{x}=\frac{1}{m}\sum_{i=1}^{m}(x_1^i, x_2^i, ..., x_n^i)$
# ## Линейная независимость - информация в векторах
#
# Пусть вектор x отображает информацию о тарифе пользователя:
# * Стоимость одной минуты
# * Стоимость одного SMS
# * Стоимость МБ данных
# * Стоимость ГБ данных
# * Расход за первый квартал
# * Расход за второй квартал
# * Расход за третий квартал
# * Расход за четвертый квартал
# * Расход за год
#
# $x_1 = (.3, .1, 0.01, 10.24, 300, 200, 100, 50, 650)$
#
# $x_2 = (.5, .7, 0.02, 20.48, 100, 200, 500, 0, 800)$
#
# $x_3 = (.1, .0, 0.1, 102.4, 900, 900, 900, 900, 3600)$
#
# **Линейная зависимость**:
#
# $\beta_1 x_1 + \beta_2 x_2 + .... + \beta_3 x3 = 0$, где $\exists \beta \neq 0$,
#
# **Линейная зависимость вектора**:
#
# $x_i = \beta_1 x_1 + \beta_2 x_2 + ... + \beta_{i-1} x_{i-1} + \beta_{i+1} x_{i+1} + ... + \beta_n x_n$
#
# $dim V$ - максимальное число линейно независимых векторов в пространстве
#
# 
#
# $e_1=(1,0,0)\ e_2=(0,1,0)\ e_3=(0,0,1)$
# ## Норма вектора
#
# $\| x \| = scalar$
#
# $\| 0 \| = 0$
#
# $\| {x+y} \| \leq \| x \| + \| y \|$
#
# $\| {\alpha x} \| = \alpha \| x \|$
#
# Евклидова норма
# $\| x_2 \| = \sqrt{\sum_{i=1}^{2}x_i^2}$
#
# 
# +
from numpy.linalg import norm
x = np.array([3,4])
print(norm(x))
# -
# # Метрика
#
# $\rho(x,y) = \| {x-y} \|$
# +
x = np.array([1,2,5])
y = np.array([-2,0,11])
print(norm(x-y))
# -
# ## Скалярное произведение векторов
#
# $x \cdot y = \sum_{i=1}^{n} x_i y_i$
#
# $\| x \| = \sqrt{x \cdot x}$
#
# $x \cdot y = \| x \| \| y \| cos(x,y)$
#
# $cos(x,y) = \frac{x \cdot y}{\| x \| \| y \|}$
#
# $cos(x,y) = 1$ - вектора лежат на одной прямой
#
# $cos(x,y) = 0$ - ортоганальные вектора
# +
x = np.array([1,0])
y = np.array([0,1])
print(np.dot(x,y))
# +
x = np.array([12,-1])
y = np.array([3.1,8.9999])
cos_value = np.dot(x,y) / (norm(x) * norm(y))
print('Значение косинуса: ', cos_value)
print('Значение угла в радианах: ', np.arccos(cos_value))
print('Значение угла в градусах: ', np.degrees(np.arccos(cos_value)))
# +
# PS - изменение размерности
x = np.array([1,2,3])
print('X=',x)
print('Shape=',x.shape)
x = x.reshape((1,3))
print('X=', x)
print('Shape=', x.shape)
# -
| module_002_math/lesson_004_vectors/tutorials_sources/Vectors.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # LAB 05.01 - Predictions impact
# !wget --no-cache -O init.py -q https://raw.githubusercontent.com/rramosp/ai4eng.v1.20211.udea/main/content/init.py
import init; init.init(force_download=False); init.get_weblink()
from local.lib.rlxmoocapi import submit, session
student = session.Session(init.endpoint).login( course_id=init.course_id,
lab_id="L05.01" )
# ## Task 1. Compute PNL from strategy
#
# observe the following signal `s`, and model trend predictions `p` (not perfect predictions!!)
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# %matplotlib inline
s = np.round((np.random.normal(size=20)*5+.5).cumsum()+100,2)
p = (np.random.random(size=len(s)-1)>.3).astype(int)
print (s.shape, p.shape)
# +
plt.plot(s, color="black")
plt.scatter(np.arange(len(p))[p==0], s[:-1][p==0], color="red", label="prediction down")
plt.scatter(np.arange(len(p))[p==1], s[:-1][p==1], color="blue", label="prediction up")
plt.grid(); plt.legend(); plt.xlabel("time instant"); plt.ylabel("price")
plt.xticks(range(len(s)), range(len(s)))
pd.DataFrame(np.vstack((s,list(p)+[np.nan])), index=["signal", "prediction"])
print ("SIGNAL ", s)
print ("PREDICTION", p)
# -
# fill in the `pnl` variable below with a list of 19 values corresponding on applying the same strategy as in the notes, buying or selling always ONE money unit:
#
# - if the prediction is zero, we believe the price is going down, so we sell ONE money unit at the current price and buy it at the next instant of time
# - if the prediction is one, we do the opposite
# - BUT there is a **commission** of 1%, applied on the instant you make the first operation (which uses the current price)
#
# observe that there are 20 signal points, and 19 predictions.
#
# you can use your tool of choice (Excel, Python, etc.) to compute your answer
#
# **HINT**: Understand each component of the expression for `perfect_prediction` below to try to obtain your answer with Python.
#
#
# **For instance**: the following signal and predictions:
# +
from IPython.display import Image
Image("local/imgs/timeseries-prediction.png", width=600)
# -
# produce the following set of PNL
#
# 2.65 7.86 -0.31 7.48 2.61 2.19 1.33 -2.08 -2.71 -2.88 0.42 -5.39 3.03 1.53 3.45 9.88 10.70 -7.69 -0.60
#
# - at `t=0` the PNL is $(107.06-103.38)\times 1 - 103.38\times 1 \times .01=2.65$, since the prediction was correct
# - at `t=2` the PNL is $(116.84-115.99)\times 1 - 115.99\times 1 \times .01=-0.31$, since the prediction was correct, BUT the price difference is small and the commission overcomes the profit.
# - at `t=7` the PNL is $(111.76 - 112.71)\times1 - 112.71\times1\times.01=-2.08$, since the prediction was incorrect
#
#
# in the expressions above, the first term is the net profit or loss, and the second one is due to the commission. Multiplication by $1$ simply signals we are trading ONE unit.
# also, observe that the following python code, will generate a perfect prediction signal, which, when applied to our strategy, will result in a list of all positive PNLs.
perfect_prediction = (s[1:]>s[:-1]).astype(int)
perfect_prediction
# **CHALLENGE 1** (not mandatory): make your answer in python
#
# **hints**:
#
# s[1:] will give you all elements of s except the first one
# s[:-1] will give you all elements of s except the last one
# s[1:] - s[:-1] will give you the difference of price in one time with respect to the next one
# (p-0.5)*2 will convert vector p (containing 0's and 1's) into a vector of -1's and +1's
# **fill in the following variable**
pnl = [... ]
# **submit your answer**
student.submit_task(globals(), task_id="task_01");
# ## Task 2: Simulated prediction signal
# given the following signal, produce a synthetic prediction signal with the given percentage of correct predictions.
#
# observe that `s` has length 21, but your synthetic prediction will have a length of 20.
#
# fill in the variable `prediction`, with a list with 20 zeros or ones, containing a prediction with `acc` correct predictions.
#
# for instance, with the following signal
#
# [100.37 102.92 102.69 104.57 105.06 97.9 103. 100.32 97.59 107.07
# 112.19 106.32 104.14 100.3 97.03 107.28 100.36 100.99 111.48 117.07
# 126.04]
#
# the following predictions:
#
# p = [1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
#
# produce a trend prediction accuracy of 60% (`acc=0.6`)
#
# **HINT**: Do it in Python
#
# - use the perfect prediction from the exercise above to start with.
# - use `np.random.permutation`
#
# for instance:
# +
# a list
a = np.r_[10,20,30,40,50,60,70,80,90]
# 3 positions randomly chosen
k = np.random.permutation(len(a)-1)[:3]
print (k)
# changing the value of the items on those positions
a[k] = a[k] + 1
a
# -
# **your signal and target accuracy to achieve**
# +
s = ((np.random.normal(size=21)*5+.5).cumsum()+100).round(2)
acc = np.round(np.random.random()*.9+.1, 1)
print ("YOUR SIGNAL", s)
print ("THE ACCURACY YOUR SYNTHETIC PREDICTIONS MUST ACHIEVE: ", acc)
# -
my_synthetic_prediction = [ ... ]
# **submit your answer**
student.submit_task(globals(), task_id="task_02");
# ## Task 3: ML Metric vs Business Metric
#
# now, your are given a signal (length=21) and you will have to create
#
# - an array of 9 rows x 20 columns with synthetic predictions so that the first row (row number zero in python) has accuracy of 10%, the second has 20%, etc.
# - a list of 9 numbers containing the PNL of using the synthetic predictions on the above array as input for a trading strategy.
#
# for instance, for this signal:
#
# [101.33, 96.75, 98.2 , 95.3 , 97.96, 98.75, 92.46, 82.2 , 78.61, 80. ,
# 88.78, 98.72, 103.22, 113.65, 103.89, 107.36, 114.6 , 103.9 , 108.71, 104.2 , 107.8 ]
#
# you will have to create the following variables:
#
# pset = np.array([[1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0],
# [1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0],
# [1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0],
# [1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1],
# [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0],
# [0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0],
# [0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1],
# [0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1],
# [0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1]])
#
#
# pnl = np.array([-121.5, -69.44, -62.90, -46.72, -4.08, -19.04, 23.5, 41.0, 77.02])
#
# **NOTE**: Specify your PNL rounded to **TWO** decimal places
s = ((np.random.normal(size=21)*5+.5).cumsum()+100).round(2)
s
# +
# a 9x20 numpy array
pset =
# 9 elements numpy array or list
pnl =
# -
# **submit your answer**
student.submit_task(globals(), task_id="task_03");
# ### understand accuracy vs. PNL
#
# - what is the minimum accuracy from which a model might be profitable?
# - and if the commision changes?
# +
accuracies = np.linspace(.1,.9,9)
plt.plot(accuracies, pnl)
plt.axhline(0, color="black", lw=2)
plt.title("ML metric vs. Busines metric")
plt.grid(); plt.xlabel("model accuracy"); plt.ylabel("PNL")
# -
| content/LAB 05.01 - MEASURING PREDICTIVITY IMPACT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/shin-sforzando/etude-GNN/blob/master/GNN02.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="JReLBMyYY3Zi" colab_type="text"
# # Graph Neural Network - 02
#
# ## 概要
#
# 第2回では近年のGNNに関する学術的な傾向把握と、グラフ構造を簡便に扱えるPythonライブラリの調査。
# + [markdown] id="YfwqM0xKZAq4" colab_type="text"
# ## 学術界でのGNNの扱い
#
# GNN研究者は特にICLR(International Conference on Learning Representations)、次点でICML(International Conference on Machine Learning))への論文投稿を目標にしているようだ。
# NeurIPS (Neural Information Processing Systems; 旧NIPS)も毎年GNNに関するセッションはあるが、GNNに関する論文数は現状、ICLRが一番多い。
#
# 2019年のICLRでAcceptされた論文のうち、10%がGNN関連。
#
# ### [What graph neural networks cannot learn: depth vs width (<NAME>)](https://openreview.net/attachment?id=B1l2bp4YwS&name=original_pdf)
#
# スイス連邦工科大のAndreasがICLR2019で示唆したのは、現在のGNNで扱える程度のネットワークの大きさ(厳密には `幅 x 深さ` )ではほとんど実用的な学習が行えないだろうという予測。
# この論文をreferして、様々な追検証が行われ、後のNeurIPSなどで議論が交わされた。
# 「不可能の証明」系の論文なので、過去の著名な学説をおおよそカバーした内容で最初に読むのにおすすめ。
#
# ### [HOPPITY: LEARNING GRAPH TRANSFORMATIONS TO DETECT AND FIX BUGS IN PROGRAMS](https://openreview.net/forum?id=SJeqs6EFvB)
#
# GNNの実用例として、プログラムのバグ検出に挑戦した例。
# Javascriptの抽象構文木をグラフとして、GitHub上の290,715件のコミットを学習し、36,361個のプログラムから9,490件のバグを発見した。
# これはMatrix-centricなNeural Networkよりも遥かに優れているらしい。
#
# ### [thunlp / GNNPapers](https://github.com/thunlp/GNNPapers)
#
# GitHub上で、GNNについて読むべき論文をまとめているサイトを発見。
# ざっと450本(現在も更新され続けてる)。
# 少しずつ読んでいきます…
# + [markdown] id="r92J7tQ5ZHcw" colab_type="text"
# ## Pythonでグラフ構造を扱う(調査編)
#
# GNNに行く前に、まずはPythonでグラフ構造を自由に扱えるようになりたい。
#
# ### 分析系
#
# - [NetworkX](https://networkx.github.io/)
# - [Gephi](https://gephi.org/)
# - [graph-tool](https://graph-tool.skewed.de/)
# - [igraph](https://igraph.org/)
#
# ### 可視化系
#
# - [Matplotlib](https://matplotlib.org/) + [seaborn](https://seaborn.pydata.org/)
# - [Plotly](https://plotly.com/)
# - [Graphviz](https://graphviz.org/)
#
# ### GNN特化系
#
# - [PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/index.html)
# - [Deep Graph Library](https://www.dgl.ai/)
# + [markdown] id="f4cNvDCa1ymB" colab_type="text"
# ## 次回予告
#
# - Pythonでグラフ構造を可視化
| GNN02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # RNN classifier in keras
# #### Load dependencies
import keras
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from keras.layers import Embedding # new!
from keras.layers import SpatialDropout1D, SimpleRNN
from keras.callbacks import ModelCheckpoint # new!
import os # new!
from sklearn.metrics import roc_auc_score, roc_curve # new!
import matplotlib.pyplot as plt # new!
# %matplotlib inline
# +
# output directory name:
output_dir = 'model_output/rnn'
# training:
epochs = 10
batch_size = 128
# vector-space embedding:
n_dim = 64
n_unique_words = 10000
max_review_length = 100
pad_type = trunc_type = 'pre'
drop_embed = 0.2
# neural network architecture:
n_rnn = 256
droput_rnn = 0.2
# -
# #### Load data
# For a given data set:
#
# * the Keras text utilities [here](https://keras.io/preprocessing/text/) quickly preprocess natural language and convert it into an index
# * the `keras.preprocessing.text.Tokenizer` class may do everything you need in one line:
# * tokenize into words or characters
# * `num_words`: maximum unique tokens
# * filter out punctuation
# * lower case
# * convert words to an integer index
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words)
# #### Preprocess data
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_train[:6]
for i in range(6):
print len(x_train[i])
# #### Design neural network architecture
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(SpatialDropout1D(drop_embed))
# model.add(Conv1D(n_conv, k_conv, activation='relu'))
# model.add(Conv1D(n_conv, k_conv, activation='relu'))
# model.add(GlobalMaxPooling1D())
# model.add(Dense(n_dense, activation='relu'))
# model.add(Dropout(dropout))
model.add(SimpleRNN(n_rnn, dropout=droput_rnn))
model.add(Dense(1, activation='sigmoid'))
model.summary()
n_dim, n_unique_words, n_dim * n_unique_words
max_review_length, n_dim, n_dim * max_review_length
# #### Configure Model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
# #### Evaluate
model.load_weights(output_dir+"/weights.03.hdf5") # zero-indexed
y_hat = model.predict_proba(x_valid)
len(y_hat)
y_hat[0]
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
pct_auc = roc_auc_score(y_valid, y_hat)*100.0
"{:0.2f}".format(pct_auc)
| Deep Learning/NLP Tutorials/.ipynb_checkpoints/vanilla_lstm_in_keras-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Python Visualization for Exploration of Data
# By Rahool
#
# In this lesson we will investigate methods for the exploration of data using visualization techniques based on <NAME>'s python tutorial in Seattle. We will use several Python packages to create the visualizations, matplotlib, Pandas plotting, and seaborn.
# For these lessons we will be working with a data set containing the prices and characteristics of a number of automobiles. The ultimate goal is to build a model for predicting the price of a car from its characteristics.
# About this Jupyter Notebook
# This notebook contains material to help you learn how to explore data visually.
# The data set can be downloaded from OpenPV: https://openpv.nrel.gov/search
#
# This notebook was constructed using the Anconda 3.5 Python distribution. If you are not running version Anaconda 3.5 or higher, we suggest you update your Anaconda distribution now. You can download the Python 3 Anaconda distribution for your operating system from the
# To run this notebook you need the Seaborn graphics packages. If you have not done so, you will need to install Seaborn as it is not in the Anaconda distribution as of now. From a command prompt on your computer type the following command. If no errors occur, you will have installed Seaborn.
# pip install seaborn
# or
# conda install seaborn
# You can find more about installing seaborn can be seen on the Installing and getting started page.
# Full OpenPV Dataset by NREL.gov can be found at https://maps-api.nrel.gov/open_pv/installs/download_all
import pandas as pd
solar=pd.read_csv('openpv_all.csv')
# +
solar.head()
# -
solar=solar[solar['size_kw']>10]
solar=solar[solar['size_kw']<1000000]
solar=solar[solar['cost_per_watt']>0]
solar=solar[solar['cost_per_watt']<1000000]
solar.head()
solar=solar[solar['size_kw']>50]
def solar_size_2(df, plot_cols):
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
for col in plot_cols:
fig = plt.figure(figsize=(10, 10))
ax = fig.gca()
temp1 = df.ix[df['install_type'] == 'residential']
temp2 = df.ix[df['install_type'] == 'commercial']
if temp1.shape[0] > 0:
temp1.plot(kind = 'scatter', x = col, y = 'cost_per_watt' ,
ax = ax, color = 'DarkBlue', s= .02 * solar['size_kw']**1.3,
alpha = 0.3)
if temp2.shape[0] > 0:
temp2.plot(kind = 'scatter', x = col, y = 'cost_per_watt' ,
ax = ax, color = 'Red', s= .02 * solar['size_kw']**1.3,
alpha = 0.3)
ax.set_title('Scatter Plot of Solar Price vs. Annual PV Production >50kW')
red_patch = mpatches.Patch(color='Red', label='Commercial')
blue_patch = mpatches.Patch(color='DarkBlue', label='Residential')
plt.legend(handles=[red_patch, blue_patch])
plt.xlim(0,1e5)
plt.ylim(0,15)
plt.xlabel('Annual Energy Production (W)')
plt.ylabel('Cost per Watt ($/W)')
plt.savefig('Solar_Price_vs_Production_50.png')
return 'Done'
solar_size_2(solar, ['reported_annual_energy_prod'])
import seaborn as sns
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10)) # define plot area
ax = fig.gca() # define axis
sns.set_style("whitegrid")
sns.kdeplot(solar[['size_kw', 'cost_per_watt']], ax = ax, cmap="Blues_d")
ax.set_title('KDE plot of Solar Size (kW) and Cost') # Give the plot a main title
ax.set_xlabel('Solar Size (kW)') # Set text for the x axis
ax.set_ylabel('Cost ($/W)')# Set text for y axis
plt.xlim(0,2000)
plt.ylim(-2.5,12)
solar1=solar[solar['install_type'].str.contains("residential|commercial", na = False)]
solar1=solar1[solar1['tracking_type'].str.contains("Single-Axis|Dual-Axis", na = False)]
fig = plt.figure(figsize=(5,5)) # define plot area
ax = fig.gca() # define axis
sns.set_style("whitegrid")
sns.violinplot(x = 'install_type', y = 'size_kw', data = solar1, hue= 'tracking_type',split = True)
ax.set_title('Violin plots of Solar Projects') # Give the plot a main title
ax.set_xlabel('Install Type') # Set text for the x axis
ax.set_ylabel('Solar Project size (kW)')# Set text for y axis
plt.ylim(-500,1500)
| OpenPV_DataVisualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
from finitediff.grid import adapted_grid, plot_convergence
from finitediff.grid.tests._common import g, g2
# %matplotlib inline
plot_convergence('grid_additions', [(32,), (32, 0, 0, 0), (32, 0, 0, 0, 0, 0, 0)], g,
extremum_refinement='max')
plot_convergence('grid_additions', [(32,), (16, 16), (8, 8, 8, 8)], g,
extremum_refinement=(np.argmax, 2, lambda y, i: True))
plot_convergence('grid_additions', [(32,), (32, 0), (32, 0, 0)], g,
extremum_refinement=(np.argmax, 4, lambda y, i: True))
plot_convergence('extremum_refinement', [(np.argmax, n, lambda y, i: True) for n in (0, 1, 2, 4, 8)], g,
grid_additions=(8, 8, 8, 8))
def predicate(y, i):
pred = np.all([v*1.2 < y[i] for j, v in enumerate(y) if j != i])
print(pred)
return pred
extremum_refiners = [(np.argmax, n, predicate) for n in (0, 1, 2, 4, 8)]
plot_convergence('extremum_refinement', extremum_refiners, g, grid_additions=(8, 0, 0, 0))
plot_convergence('extremum_refinement', extremum_refiners, g, grid_additions=(16, 0, 0, 0))
plot_convergence('extremum_refinement', extremum_refiners, g, grid_additions=(32, 0, 0, 0))
# +
def predicate2(y, i):
ok = True
if i > 0:
ok = ok and y[i - 1] < y[i]*0.7
if i < y.size - 1:
ok = ok and y[i + 1] < y[i]*0.7
print(ok)
return ok
extremum_refiners2 = [(np.argmax, n, predicate2) for n in (0, 1, 2, 4, 8)]
plot_convergence('extremum_refinement', extremum_refiners2, g, grid_additions=(16, 0, 0, 0))
| examples/adapted_grid_peak.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
from RLA.easy_plot.plot_func import plot_res_func
task='demo_task'
location = 'log/' + task
regs = [
'2022/03/01/21-[12]*'
]
_ = plot_res_func(prefix_dir=location, regs=regs , param_keys=['learning_rate'],
value_keys=['perf/mse'],
y_bound=0.1,
# replace_legend_keys=["X"], bound_line=[[perf_map[task], 'grey', '--', 'BC']],
# resample=1024, smooth_step=10, legend_outside=False, title='', use_buf=False,
verbose=False,
xlabel='epochs',
ylabel='reward ratio',
shaded_range=False,
pretty=True,
save_name='simple_res.pdf')
| example/simplest_code/plot_res.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import pandas as pd
import csv
import os, os.path
import sys
# to move files from one directory to another
import shutil
# -
perry_dir = "/Users/hn/Documents/01_research_data/NASA/Perry_and_Co/"
plots_dir = "/Users/hn/Documents/01_research_data/NASA/snapshots/TS/06_snapshot_flat_PNG/"
# +
# choices_set_1 = pd.read_excel(io=param_dir + "set1_PerryandCo.xlsx", sheet_name=0)
# response_set_1 = pd.read_excel(io=param_dir + "Perry_and_Co_Responses_set_1.xlsx", sheet_name=0)
# +
choices_set_1_xl = pd.ExcelFile(perry_dir + "set1_PerryandCo.xlsx")
choises_set_1_sheet_names = choices_set_1_xl.sheet_names # see all sheet names
response_set_1_xl = pd.ExcelFile(perry_dir + "Perry_and_Co_Responses_set_1.xlsx")
response_set_1_sheet_names = response_set_1_xl.sheet_names # see all sheet names
# -
print (choises_set_1_sheet_names)
print (response_set_1_sheet_names)
a_choice_sheet = choices_set_1_xl.parse(choises_set_1_sheet_names[0])
a_response_sheet = response_set_1_xl.parse(response_set_1_sheet_names[0])
print (list(a_choice_sheet.columns))
# # List Crop Types in Set 1
# +
crop_types = []
for a_choice_sheet in choises_set_1_sheet_names:
# read a damn sheet
a_choice_sheet = choices_set_1_xl.parse(a_choice_sheet)
unique_crop_types = a_choice_sheet['CropTyp'].unique()
# add them to the damn list
crop_types += list(unique_crop_types)
# -
print (crop_types)
print ()
print ("There are [{cro_type_count}] crop types.".format(cro_type_count=len(crop_types)))
# ## Count number of questions
# +
question_count = 0
for a_choice_sheet in choises_set_1_sheet_names:
# read a damn sheet
a_choice_sheet = choices_set_1_xl.parse(a_choice_sheet)
# add them to the damn list
question_count += a_choice_sheet.shape[0]
print('There are [{ques_count}] questions.'.format(ques_count=question_count))
# -
unwanted_opinions = ["<EMAIL>"]
a_response_sheet
# +
## Define the damn output dataframe
vote_df = pd.DataFrame(columns=['Form', 'Question', 'opinion_count', 'ID',
"Perry", "Andrew", "Tim", "Kirti"],
index=range(question_count))
vote_df.head(1)
curr_row = 0
extended_choices = pd.DataFrame()
###### populate the output datafrme
for response_sheet_name in response_set_1_sheet_names:
# pick up the numeric part of the sheet names from google forms sheets
# this is the form number as well.
sheet_numeric_part = [s for s in response_sheet_name.split()[-1] if s.isdigit()]
sheet_numeric_part = "".join(sheet_numeric_part)
# form sheet names of choices excel sheets
choise_sheet_name = "extended_" + sheet_numeric_part
a_choice_sheet = choices_set_1_xl.parse(choise_sheet_name)
a_response_sheet = response_set_1_xl.parse(response_sheet_name)
extended_choices = pd.concat([extended_choices, a_choice_sheet])
#
# drop unwanted opinions, i.e. keep experts opinions
#
# a_response_sheet = a_response_sheet[~(a_response_sheet["Email Address"].isin(unwanted_opinions))].copy()
for a_col_name in a_response_sheet.columns:
if "QUESTION" in a_col_name:
question_number = a_col_name.split()[1]
vote_df.loc[curr_row, "Form"] = sheet_numeric_part
vote_df.loc[curr_row, "Question"] = question_number
vote_df.loc[curr_row, "opinion_count"] = len(a_response_sheet[a_col_name].unique())
vote_df.loc[curr_row, "ID"] = a_choice_sheet.loc[int(question_number)-1, "ID"]
vote_df.loc[curr_row, "Perry"] = a_response_sheet[a_response_sheet["Email Address"] == \
"<EMAIL>"][a_col_name].values[0]
vote_df.loc[curr_row, "Andrew"] = a_response_sheet[a_response_sheet["Email Address"] == \
"<EMAIL>"][a_col_name].values[0]
vote_df.loc[curr_row, "Tim"] = a_response_sheet[a_response_sheet["Email Address"] == \
"<EMAIL>"][a_col_name].values[0]
if "<EMAIL>" in list(a_response_sheet['Email Address']):
vote_df.loc[curr_row, "Kirti"] = a_response_sheet[a_response_sheet["Email Address"] == \
"<EMAIL>"][a_col_name].values[0]
curr_row += 1
out_name = perry_dir + "set1_all_votes.csv"
vote_df.to_csv(out_name, index = False)
# -
vote_df.Kirti.unique()
# +
## Define the damn output dataframe
vote_df = pd.DataFrame(columns=['Form', 'Question', 'opinion_count', 'ID', "Perry", "Andrew", "Tim"],
index=range(question_count))
vote_df.head(1)
curr_row = 0
extended_choices = pd.DataFrame()
###### populate the output datafrme
for response_sheet_name in response_set_1_sheet_names:
# pick up the numeric part of the sheet names from google forms sheets
# this is the form number as well.
sheet_numeric_part = [s for s in response_sheet_name.split()[-1] if s.isdigit()]
sheet_numeric_part = "".join(sheet_numeric_part)
# form sheet names of choices excel sheets
choise_sheet_name = "extended_" + sheet_numeric_part
a_choice_sheet = choices_set_1_xl.parse(choise_sheet_name)
a_response_sheet = response_set_1_xl.parse(response_sheet_name)
extended_choices = pd.concat([extended_choices, a_choice_sheet])
#
# drop unwanted opinions, i.e. keep experts opinions
#
a_response_sheet = a_response_sheet[~(a_response_sheet["Email Address"].isin(unwanted_opinions))].copy()
for a_col_name in a_response_sheet.columns:
if "QUESTION" in a_col_name:
question_number = a_col_name.split()[1]
vote_df.loc[curr_row, "Form"] = sheet_numeric_part
vote_df.loc[curr_row, "Question"] = question_number
vote_df.loc[curr_row, "opinion_count"] = len(a_response_sheet[a_col_name].unique())
vote_df.loc[curr_row, "ID"] = a_choice_sheet.loc[int(question_number)-1, "ID"]
vote_df.loc[curr_row, "Perry"] = a_response_sheet[a_response_sheet["Email Address"] == \
"<EMAIL>"][a_col_name].values[0]
vote_df.loc[curr_row, "Andrew"] = a_response_sheet[a_response_sheet["Email Address"] == \
"<EMAIL>"][a_col_name].values[0]
vote_df.loc[curr_row, "Tim"] = a_response_sheet[a_response_sheet["Email Address"] == \
"<EMAIL>"][a_col_name].values[0]
curr_row += 1
# -
extended_output = pd.merge(vote_df, extended_choices, on=['ID'], how='left')
extended_output.head(2)
# check of the question numbers are correct.
sum(extended_output['Question'].astype(int) - extended_output['Question_in_set'])
# +
# Total number of disagreements
disagreement_table = extended_output[extended_output.opinion_count != 1].copy()
disagreement_table.shape
# -
disagreement_table = disagreement_table[disagreement_table.CropTyp != "grass hay"].copy()
disagreement_table.shape
# +
disagreement_table.reset_index(inplace=True, drop=True)
disagreement_table_grouped = disagreement_table.groupby(['CropTyp', 'opinion_count']).count()
disagreement_table_grouped.reset_index(inplace=True, drop=True)
disagreement_table_grouped.sort_values(by=["opinion_count", 'ID', "CropTyp"],
ascending = [False, False, True],
inplace=True)
print (list(disagreement_table_grouped[disagreement_table_grouped.opinion_count==3].CropTyp.unique()))
# -
disagreement_table_grouped.head(6)
disagreement_table_grouped.shape
# ## Group by crop type and count number of disagreements
# +
extended_output_grouped = extended_output.groupby(['CropTyp', 'opinion_count']).count()
extended_output_grouped.reset_index(inplace=True, drop=True)
extended_output_grouped.sort_values(by=['opinion_count', "CropTyp"],
ascending = [False, True],
inplace=True)
extended_output_grouped.head(2)
# -
extended_output_grouped.shape
print (list(extended_output_grouped[extended_output_grouped.opinion_count==3].CropTyp.unique()))
# +
extended_output_grouped = extended_output.groupby(['CropTyp', 'opinion_count']).count()
extended_output_grouped.reset_index(inplace=True, drop=True)
extended_output_grouped.sort_values(by=['ID', "opinion_count", "CropTyp"],
ascending = [False, True, True],
inplace=True)
print (list(extended_output_grouped[extended_output_grouped.opinion_count==3].CropTyp.unique()))
# -
extended_output_grouped
# ### Export disagreement table of set 1
# +
output_dir = perry_dir
disagreement_table.sort_values(by=["ID", 'opinion_count', "CropTyp"],
ascending = [False, False, True],
inplace=True)
out_name = output_dir + "disagreement_table_set_1_sortCropOpinion.csv"
disagreement_table.to_csv(out_name, index = False)
disagreement_table.sort_values(by=["ID", 'opinion_count', "CropTyp"],
ascending = [False, False, True],
inplace=True)
out_name = output_dir + "disagreement_table_set_1_sortOpinionCrop.csv"
disagreement_table.to_csv(out_name, index = False)
##########
########## all stats
##########
extended_output.sort_values(by=['opinion_count', "CropTyp"],
ascending = [False, True],
inplace=True)
out_name = output_dir + "set_1_experts_stats_extended_sortOpinionCrop.csv"
extended_output.to_csv(out_name, index = False)
extended_output.sort_values(by=["CropTyp", "opinion_count"],
ascending = [True, False],
inplace=True)
out_name = output_dir + "set_1_experts_stats_extended_sortCropOpinion.csv"
extended_output.to_csv(out_name, index = False)
# -
# ### Create a long PDF of fields where disagreements occur
import subprocess
os.getcwd()
# +
# with open("sometexfile.tex","w") as file:
# file.write("\\documentclass{article}\n")
# file.write("\\begin{document}\n")
# file.write("Hello Palo Alto!\n")
# file.write("\\end{document}\n")
# +
# import subprocess, os
# with open("sometexfile.tex","w") as file:
# file.write("\\documentclass{article}\n")
# file.write("\\begin{document}\n")
# file.write("Hello Palo Alto!\n")
# file.write("\\end{document}\n")
# x = subprocess.call("pdflatex sometexfile.tex")
# if x != 0:
# print("Exit-code not 0, check result!")
# else:
# os.system("start sometexfile.pdf")
# -
disagreement_table.sort_values(by=["CropTyp"],
ascending = [True],
inplace=True)
disagreement_table.reset_index(inplace=True, drop=True)
disagreement_table.head(2)
# +
disagreement_table.sort_values(by=['CropTyp'],
ascending = [True],
inplace=True)
disagreement_table.reset_index(inplace=True, drop=True)
os.chdir('/Users/hn/Documents/01_research_data/NASA/Perry_and_Co/PDF_set_1_disagreement/')
with open("PDF_set_1_disagreement.tex","w") as file:
file.write("\\documentclass{article}\n")
# file.write("\\usepackage[utf8]{inputenc}\n")
# file.write("\\usepackage[scaled=0.85]{beramono}\n")
# file.write("\\pdfoutput=24\n")
# file.write("\\usepackage{framed, color}\n")
# file.write("\\usepackage{etaremune}\n")
file.write("\\usepackage[osf,sc]{mathpazo}\n")
file.write("\\usepackage{color}\n")
file.write("\\usepackage{mathtools,hyperref}\n")
file.write("\\hypersetup{\n")
file.write(" colorlinks=true,\n")
file.write(" linkcolor=cyan,\n")
file.write(" filecolor=cyan,\n")
file.write(" urlcolor=mgreen,\n")
file.write(" citecolor=cyan}\n")
# file.write("\\definecolor{mgreen}{RGB}{25,147,100}\n")
# file.write("\\definecolor{shadecolor}{rgb}{1,.8,.1}\n")
# file.write("\\definecolor{shadecolor2}{RGB}{245,237,0}\n")
# file.write("\\definecolor{orange}{RGB}{255,137,20}\n")
# file.write("\\definecolor{orange}{RGB}{245,37,100}\n")
# file.write("\\usepackage[english]{babel}\n")
# file.write("\\usepackage{babel,blindtext}\n")
# file.write("\\usepackage{fullpage}\n")
# file.write("\\usepackage{amsfonts}\n")
# file.write("\\usepackage{lscape}\n")
# file.write("\\usepackage{bbm}\n")
# file.write("\\usepackage{todonotes}\n")
# file.write("\\usepackage{cite}\n")
# file.write("\\usepackage{verbatim}\n")
# file.write("\\usepackage{bm}\n")
file.write("\\usepackage[margin=1in]{geometry}\n")
file.write("\\usepackage[T1]{fontenc}\n")
# file.write("\\usepackage{authblk}\n")
file.write("\\usepackage{caption}\n")
file.write("\\captionsetup{justification= raggedright, singlelinecheck = false}\n")
# file.write("\\newcommand{\\blue}{\\color{blue}}\n")
# file.write("\\newcommand*{\\affaddr}[1]{#1}\n")
# file.write("\\newcommand*{\\affmark}[1][*]{\\textsuperscript{#1}}\n")
file.write("\\title{\\bf Set 1 - Disagreements}\n")
file.write("\\author{}\n")
file.write("\\date{}\n")
file.write("\\begin{document}\n")
file.write("\\maketitle\n")
file.write("\\section{Disagreements}\n")
file.write("\n")
file.write("\n")
for a_row in range(disagreement_table.shape[0]):
curr_row = disagreement_table.loc[a_row]
TS_file_name = curr_row["NDVI_TS_Name"]
file.write("\\begin{figure*}[ht]\n")
file.write("\\centering\n")
file.write("\\includegraphics[width=1\\textwidth]{/Users/hn/Documents/" + \
"01_research_data/NASA/snapshots/TS/06_snapshot_flat_PNG/" + \
TS_file_name + "}\n")
file.write("\\caption[]{" + \
"\\textbf{\\color{red}{" + curr_row["CropTyp"] + "}}" + \
", ID: " + curr_row["ID"].replace("_", "\_") + \
", Form: " + curr_row["Form"] + \
", Question: " + curr_row["Question"] + "." + \
"\\\\\\hspace{\\textwidth} Perry: " + curr_row["Perry"] + \
"\\\\\\hspace{\\textwidth} Andrew: " + curr_row["Andrew"] + \
"\\\\\\hspace{\\textwidth} Tim: " + curr_row["Tim"] + \
"}\n")
file.write("\\label{fig:figure" + str(a_row) + "}\n")
file.write("\\end{figure*}\n")
file.write("\\clearpage\n")
file.write("\n")
file.write("\n")
file.write("\n")
file.write("\\end{document}\n")
# -
| NASA/Python_codes/Perry_and_Co/set_1_response_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# +
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.utils.data as Data
import torch.nn.functional as F
import torchvision
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
import time
import progressbar
import importlib
import random
# torch.manual_seed(1) # reproducible
plt.style.use('default')
# %matplotlib inline
import pdb
import Models_MNIST as mds
# Hyper Parameters
EPOCH = 5
BATCH_SIZE = 256
DOWNLOAD_MNIST = False
m1 = 64
m2 = 128
m3 = 512
cudaopt = True
EPS = 1e-4
# Mnist digits dataset
train_data = torchvision.datasets.MNIST(
root='../data',
train=True, # this is training data
transform=torchvision.transforms.ToTensor(), # Converts a PIL.Image or numpy.ndarray to
# torch.FloatTensor of shape (C x H x W) and normalize in the range [0.0, 1.0]
download=True, # download it if you don't have it
)
# LIMITING TRAINING DATA
Ntrain = int(60e3)
train_set = np.random.permutation(60000)[0:Ntrain]
train_data.train_data = train_data.train_data[torch.LongTensor(train_set),:,:]
train_data.train_labels = train_data.train_labels[torch.LongTensor(train_set)]
test_data = torchvision.datasets.MNIST(
root='../data',
train=False, # this is testing data
transform=torchvision.transforms.ToTensor(), # Converts a PIL.Image or numpy.ndarray to
# torch.FloatTensor of shape (C x H x W) and normalize in the range [0.0, 1.0]
download=True, # download it if you don't have it
)
# Data Loader for easy mini-batch return in training, the image batch shape will be (50, 1, 28, 28)
train_loader = Data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
test_loader = Data.DataLoader(dataset=test_data, batch_size=BATCH_SIZE, shuffle=True)
# -
x = np.linspace(1,EPOCH,EPOCH)
Rhos = 1/(1+np.exp(-(x- EPOCH*6/9 )*.2))
plt.plot(Rhos)
# ### Model Baseline
# +
Loss_test_0 = np.zeros((EPOCH,))
Acc_test_0 = np.zeros((EPOCH,))
Acc_train_0 = np.zeros((EPOCH,))
print('\n\t\t\t\t\tTraining Baseline\n')
model_0 = mds.ML_ISTA_NET(m1,m2,m3)
if cudaopt:
model_0.cuda()
optimizer = torch.optim.Adam(model_0.parameters(), lr = 0.0001, eps = EPS)
bar = progressbar.ProgressBar()
for epoch in range(EPOCH):
bar.update((epoch+1)/EPOCH*100)
# train 1 epoch
model_0.train()
train_correct = 0
for step, (x, y) in enumerate(train_loader):
b_x = Variable(x) # batch x, shape (batch, 28*28)
b_y = Variable(y) # batch label
if cudaopt:
b_y, b_x = b_y.cuda(), b_x.cuda()
encoded, scores = model_0(b_x)
train_pred = scores.data.max(1, keepdim=True)[1]
train_correct += train_pred.eq(b_y.data.view_as(train_pred)).long().cpu().sum()
loss = F.nll_loss(scores, b_y) # negative log likelyhood
optimizer.zero_grad() # clear gradients for this training step
loss.backward() # backpropagation, compute gradients
optimizer.step() # apply gradients
Acc_train_0[epoch] = 100 * float(train_correct) /float(len(train_loader.dataset))
# testing
model_0.eval()
correct = 0
test_loss = 0
for step, (x, y) in enumerate(test_loader):
b_x = Variable(x) # batch x, shape (batch, 28*28)
b_y = Variable(y) # batch label
if cudaopt:
b_y, b_x = b_y.cuda(), b_x.cuda()
gamma, scores = model_0(b_x)
test_loss += F.nll_loss(scores, b_y, size_average=False).data[0]
pred = scores.data.max(1, keepdim=True)[1]
correct += pred.eq(b_y.data.view_as(pred)).long().cpu().sum()
test_loss /= len(test_loader.dataset)
Loss_test_0[epoch] = test_loss
Acc_test_0[epoch] = 100 * float(correct) /float(len(test_loader.dataset))
torch.save(model_0.state_dict(), 'cnn_model.pt')
# -
# ### ML-JISTA TESTING
# +
# importlib.reload(mds)
# T = 0
# RHO = float(Rhos[0])
# dataiter = iter(train_loader)
# x, labels = dataiter.next()
# model = mds.ML_JISTA_NET(m1,m2,m3)
# x = Variable(x)# batch x, shape (batch, 28*28)
# labels = Variable(labels)
# # print(x.shape)
# # print(labels.shape)
# temp = np.empty(labels.shape[0])
# # print(temp.shape)
# encoded, scores, sorted_labels = model.joint_train(x, labels, T, RHO)
# # print(scores.shape)
# # print(sorted_labels.shape)
# # print(type(scores))
# # print(type(sorted_labels))
# loss = F.nll_loss(scores, sorted_labels)
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# X1 = torch.rand(4,6,1,1)
# X1_dims = list(X1.shape)
# X1_mat = X1.view(-1, X1_dims[1])
# st_factors = 1-2/(torch.sum(X1_mat**2, dim=0))
# st_factors_mat = torch.diag(st_factors)
# X2_mat = F.relu(torch.t(torch.mm(st_factors_mat, torch.t(X1_mat))))
# X2 = X2_mat.view(X1_dims[0], X1_dims[1], X1_dims[2], X1_dims[3])
# print(X1_mat)
# print(X2_mat)
# -
# ### ML-JISTA
# +
importlib.reload(mds)
Loss_test_jista_r = np.zeros((EPOCH,))
Acc_test_jista_r = np.zeros((EPOCH,))
Acc_train_jista_r = np.zeros((EPOCH,))
print('\n\t\t\t\t\tTraining ML-JISTA \n')
T = 0 # number of unfoldings/iterations of ml-ista
model_jnn = mds.ML_JISTA_NET(m1,m2,m3)
if cudaopt:
model_jnn.cuda()
optimizer = torch.optim.Adam(model_jnn.parameters(), lr = 0.0001, eps = EPS)
bar = progressbar.ProgressBar()
for epoch in range(EPOCH):
# print("Epoch: " + str(int(epoch)))
bar.update((epoch+1)/EPOCH*100)
# train 1 epoch
model_jnn.train()
RHO = float(Rhos[epoch])
train_correct = 0
for step, (x, y) in enumerate(train_loader):
b_x = Variable(x) # batch x, shape (batch, 28*28)
b_y = Variable(y) # batch label
if cudaopt:
b_y, b_x = b_y.cuda(), b_x.cuda()
encoded, scores, sorted_labels = model_jnn.joint_train(b_x, b_y, T, RHO)
sorted_labels = sorted_labels.type(torch.cuda.LongTensor)
train_pred = scores.data.max(1, keepdim=True)[1]
train_correct += train_pred.eq(sorted_labels.data.view_as(train_pred)).long().cpu().sum()
loss = F.nll_loss(scores, sorted_labels) # negative log likelyhood
optimizer.zero_grad() # clear gradients for this training step
loss.backward() # backpropagation, compute gradients
optimizer.step() # apply gradients
Acc_train_jista_r[epoch] = 100 * float(train_correct) /float(len(train_loader.dataset))
# testing
model_jnn.eval()
correct = 0
test_loss = 0
for step, (x, y) in enumerate(test_loader):
b_x = Variable(x) # batch x, shape (batch, 28*28)
b_y = Variable(y) # batch label
if cudaopt:
b_y, b_x = b_y.cuda(), b_x.cuda()
gamma, scores = model_jnn.forward(b_x,T,RHO)
test_loss += F.nll_loss(scores, b_y, size_average=False).data[0]
pred = scores.data.max(1, keepdim=True)[1]
correct += pred.eq(b_y.data.view_as(pred)).long().cpu().sum()
test_loss /= len(test_loader.dataset)
Loss_test_jista_r[epoch] = test_loss
Acc_test_jista_r[epoch] = 100 * float(correct) /float(len(test_loader.dataset))
# print("Performance at epoch " + str(int(epoch)) + ": " + str(Acc_test_ista_r[epoch]))
torch.save(model_jnn.state_dict(), 'mljista_model.pt')
# -
# ### ML-ISTA
# +
# Loss_test_ista_r = np.zeros((EPOCH,))
# Acc_test_ista_r = np.zeros((EPOCH,))
# print('\n\t\t\t\t\tTraining ML-ISTA \n')
# T = 0 # number of unfoldings/iterations of ml-ista
# model = mds.ML_ISTA_NET(m1,m2,m3)
# if cudaopt:
# model.cuda()
# optimizer = torch.optim.Adam(model.parameters(), lr = 0.0001, eps = EPS)
# bar = progressbar.ProgressBar()
# for epoch in range(EPOCH):
# # print("Epoch: " + str(int(epoch)))
# bar.update((epoch+1)/EPOCH*100)
# # train 1 epoch
# model.train()
# RHO = float(Rhos[epoch])
# for step, (x, y) in enumerate(train_loader):
# b_x = Variable(x) # batch x, shape (batch, 28*28)
# b_y = Variable(y) # batch label
# if cudaopt:
# b_y, b_x = b_y.cuda(), b_x.cuda()
# encoded, scores = model(b_x,T,RHO)
# # print(scores.shape)
# # print(b_y.shape)
# loss = F.nll_loss(scores, b_y) # negative log likelyhood
# optimizer.zero_grad() # clear gradients for this training step
# loss.backward() # backpropagation, compute gradients
# optimizer.step() # apply gradients
# # testing
# model.eval()
# correct = 0
# test_loss = 0
# for step, (x, y) in enumerate(test_loader):
# b_x = Variable(x) # batch x, shape (batch, 28*28)
# b_y = Variable(y) # batch label
# if cudaopt:
# b_y, b_x = b_y.cuda(), b_x.cuda()
# gamma, scores = model(b_x,T,RHO)
# test_loss += F.nll_loss(scores, b_y, size_average=False).data[0]
# pred = scores.data.max(1, keepdim=True)[1]
# correct += pred.eq(b_y.data.view_as(pred)).long().cpu().sum()
# test_loss /= len(test_loader.dataset)
# Loss_test_ista_r[epoch] = test_loss
# Acc_test_ista_r[epoch] = 100 * float(correct) /float(len(test_loader.dataset))
# torch.save(model.state_dict(), 'mlista_model.pt')
# + [markdown] slideshow={"slide_type": "-"}
# ### ML-FISTA
# + slideshow={"slide_type": "-"}
# Loss_test_fista_r = np.zeros((EPOCH,))
# Acc_test_fista_r = np.zeros((EPOCH,))
# print('\n\t\t\t\t\tTraining ML-FISTA \n')
# model = mds.ML_FISTA_NET(m1,m2,m3)
# if cudaopt:
# model.cuda()
# optimizer = torch.optim.Adam(model.parameters(), lr = 0.0001, eps = EPS)
# bar = progressbar.ProgressBar()
# for epoch in range(EPOCH):
# bar.update((epoch+1)/EPOCH*100)
# # train 1 epoch
# model.train()
# RHO = float(Rhos[epoch])
# for step, (x, y) in enumerate(train_loader):
# b_x = Variable(x) # batch x, shape (batch, 28*28)
# b_y = Variable(y) # batch label
# if cudaopt:
# b_y, b_x = b_y.cuda(), b_x.cuda()
# encoded, scores = model(b_x,T,RHO)
# loss = F.nll_loss(scores, b_y) # negative log likelyhood
# optimizer.zero_grad() # clear gradients for this training step
# loss.backward() # backpropagation, compute gradients
# optimizer.step() # apply gradients
# # testing
# model.eval()
# correct = 0
# test_loss = 0
# for step, (x, y) in enumerate(test_loader):
# b_x = Variable(x) # batch x, shape (batch, 28*28)
# b_y = Variable(y) # batch label
# if cudaopt:
# b_y, b_x = b_y.cuda(), b_x.cuda()
# gamma, scores = model(b_x,T,RHO)
# test_loss += F.nll_loss(scores, b_y, size_average=False).data[0]
# pred = scores.data.max(1, keepdim=True)[1]
# correct += pred.eq(b_y.data.view_as(pred)).long().cpu().sum()
# test_loss /= len(test_loader.dataset)
# Loss_test_fista_r[epoch] = test_loss
# Acc_test_fista_r[epoch] = 100 * float(correct) /float(len(test_loader.dataset))
# -
# ## Plot train accuracy
# +
fig = plt.figure(figsize=(8,6))
plt.style.use('default')
plt.plot(Acc_train_0, linewidth = 2,label='baseline')
# plt.plot(Acc_test_ista_r, linewidth = 2,label = 'ML-ISTA')
plt.plot(Acc_train_jista_r, linewidth = 2,label = 'ML-JISTA')
# plt.plot(Acc_test_fista_r, linewidth = 2,label = 'ML-FISTA')
plt.grid('on')
plt.title('Train Accuracy - 0 Unfoldings')
plt.legend()
plt.axis([0, EPOCH-1, 0, 100])
plt.show()
# -
# ## Plot test accuracy
# + slideshow={"slide_type": "-"}
fig = plt.figure(figsize=(8,6))
plt.style.use('default')
plt.plot(Acc_test_0, linewidth = 2,label='baseline')
# plt.plot(Acc_test_ista_r, linewidth = 2,label = 'ML-ISTA')
plt.plot(Acc_test_jista_r, linewidth = 2,label = 'ML-JISTA')
# plt.plot(Acc_test_fista_r, linewidth = 2,label = 'ML-FISTA')
plt.grid('on')
plt.title('Test Accuracy - 0 Unfoldings')
plt.legend()
plt.axis([0, EPOCH-1, 75, 100])
plt.show()
# -
# ## Visualise global filters of baseline
# +
cols = 5
rows = 5
indices = random.sample(range(m3), cols*rows)
dict1 = model_0.W3
atom1_dim = dict1.shape[3]
print(dict1.shape)
dict2 = F.conv_transpose2d(dict1, model_0.W2, stride=model_0.strd1, dilation=1)
atom2_dim = dict2.shape[3]
print(dict2.shape)
dict3 = F.conv_transpose2d(dict2, model_0.W1, stride=model_0.strd2, dilation=1)
atom3_dim = dict3.shape[3]
print(dict3.shape)
idx = 1
plt.figure(figsize=(10,10))
for j in range(rows):
for i in range(cols):
plt.subplot(cols,rows,idx)
plt.imshow(np.reshape(dict3.cpu().data.numpy()[idx-1], (atom3_dim, atom3_dim)), cmap='gray')
plt.axis('off')
idx+=1
plt.show()
# -
# ## Visualise global filters of JISTA
# +
cols = 5
rows = 5
indices = random.sample(range(m3), cols*rows)
dict1 = model_jnn.W3
atom1_dim = dict1.shape[3]
print(dict1.shape)
dict2 = F.conv_transpose2d(dict1, model_jnn.W2, stride=model_jnn.strd1, dilation=1)
atom2_dim = dict2.shape[3]
print(dict2.shape)
dict3 = F.conv_transpose2d(dict2, model_jnn.W1, stride=model_jnn.strd2, dilation=1)
atom3_dim = dict3.shape[3]
print(dict3.shape)
idx = 1
plt.figure(figsize=(10,10))
for j in range(rows):
for i in range(cols):
plt.subplot(cols,rows,idx)
plt.imshow(np.reshape(dict3.cpu().data.numpy()[idx-1], (atom3_dim, atom3_dim)), cmap='gray')
plt.axis('off')
idx+=1
plt.show()
| joint/dev/ML_ISTA/Jere/Run_Models_MNIST.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ErnestDannielTiston/OOP-1-2/blob/main/Prelim_Exam.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="M_j7CpI0VJ4J" outputId="74b95d34-ddd2-424f-c9bd-ebdcce1dd8f0"
class Student:
def __init__(self,Name,Student_No,Age,School,Course):
self.Name = Name
self.Student_No = Student_No
self.Age = Age
self.School = School
self.Course = Course
def Info(self):
print("\n","Greetings I am",self.Name,"and my student number is", self.Student_No,"\n","I just turned",self.Age,"in the 15th of March",
"\n", "I am currently enrolled in", self.School,"taking", self.Course)
Myself = Student("<NAME>","202106651","18", "Cavite State University-Indang Campus", "Bachelor of Science in Computer Engineering")
Myself.Info()
| Prelim_Exam.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_zakharov:
# -
# ## Zakharov
#
# The Zakharov function has no local minima except the global one. It is shown here in its two-dimensional form.
# **Definition**
# \begin{align}
# \begin{split}
# f(x) &=& \sum\limits_{i=1}^n {x_i^2} + \bigg( \frac{1}{2} \sum\limits_{i=1}^n {ix_i} \bigg)^2 + \bigg( \frac{1}{2} \sum\limits_{i=1}^n {ix_i} \bigg)^4, \\[2mm]
# && -10 \leq x_i \leq 10 \quad i=1,\ldots,n
# \end{split}
# \end{align}
# **Optimum**
# $$f(x^*) = 0 \; \text{at} \; x^* = (0,\ldots,0) $$
# **Contour**
# + code="usage_problem.py" section="zakharov"
import numpy as np
from pymoo.factory import get_problem, get_visualization
problem = get_problem("zakharov", n_var=2)
get_visualization("fitness-landscape", problem, angle=(45, 45), _type="surface").show()
get_visualization("fitness-landscape", problem, _type="contour", contour_levels = 200, colorbar=True).show()
# -
| doc/source/problems/single/zakharov.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests, json, time, math
from datetime import datetime
from multiprocessing import Pool
from sqlalchemy import *
from bs4 import BeautifulSoup
import pandas as pd
from tqdm.auto import tqdm
# #### Good to have
def split_list(l, n):
for i in range(0, len(l), n):
yield l[i:i + n]
for i in tqdm(range(10)):
time.sleep(.3)
# #### Get user inventory
# ##### Option A: For Loop
# +
path_user_id = 'data/steam_user_id.txt'
with open(path_user_id, 'rb') as f:
lst_user_id = f.readlines()
lst_user_id[:5]
# -
'\n76561198158086086 \n\n\t'.strip()
for user_id in lst_user_id[:5]:
base_url = 'http://api.steampowered.com/IPlayerService/GetOwnedGames/v0001/'
params = {
'key' : 'D0C62157A8941F12A687382B6D635449',
'steamid' : user_id.strip(),
'format' : 'json'
}
r = requests.get(base_url, params = params, headers = {})
user_inventory = r.json().get('response').get('games')
time.sleep(.5)
print(user_id, '\n', user_inventory)
# ##### Option B: Multiprocessing
# +
# what is multiprocessing?
# Multiprocessing vs threading, queue
# -
path_user_id = 'data/steam_user_id.txt'
with open(path_user_id, 'r') as f:
lst_user_id = f.readlines()[:50]
def worker(lst_user_id_temp):
dic_temp = {}
for user_id in tqdm(lst_user_id_temp, leave=False):
base_url = 'http://api.steampowered.com/IPlayerService/GetOwnedGames/v0001/'
params = {
'key' : 'D0C62157A8941F12A687382B6D635449',
'steamid' : user_id.strip(),
'format' : 'json' }
r = requests.get(base_url, params = params)
user_inventory = r.json().get('response').get('games')
dic_temp.update({user_id.strip():user_inventory})
time.sleep(.5)
return dic_temp
# +
p = Pool(2)
dic_master = {}
for i in tqdm(list(split_list(lst_user_id,10))):
lst_temp_dic = p.map(worker, split_list(i,5))
for j in lst_temp_dic:
dic_master.update(j)
time.sleep(5)
# -
with open('data/crawled_user_inventory.txt', 'w') as f:
for user_id, user_inventory in list(dic_master.items()):
f.write(json.dumps({str(user_id):user_inventory}))
f.write('\n')
# ### Web Crawler II
# #### 1) rate limit
# #### 2) Headers, cookies
# #### 3) Multiprocessing / Threading
# #### 4) Selenium
r = requests.get('https://www.youtube.com/watch?v=h31myLyc_qk')
soup = BeautifulSoup(r.text, 'lxml')
soup.find_all('yt-formatted-string', {'class':'style-scope ytd-video-primary-info-renderer'})
from selenium import webdriver
# https://chromedriver.chromium.org/
driver = webdriver.Chrome('/Users/alanliu/chromedriver')
driver.get('https://www.youtube.com/watch?v=h31myLyc_qk')
soup = BeautifulSoup(driver.page_source, 'lxml')
soup.find('yt-formatted-string', {'class':'style-scope ytd-video-primary-info-renderer'}).string
# #### get app info
# get all available app id
url = 'https://api.steampowered.com/ISteamApps/GetAppList/v2/'
r = requests.get(url)
dic_app_list = r.json()
lst_app_id = [i.get('appid') for i in dic_app_list.get('applist').get('apps')]
len(lst_app_id)
current_count = 0
path_app_detail_sample = 'app_detail_sample.txt'
with open(path_app_detail_sample, 'w') as f:
for app_id in tqdm(lst_app_id[:5]):
url_app_detail = ('http://store.steampowered.com/api/appdetails?appids=%s') % (app_id)
for i in range(3):
try:
r = requests.get(url_app_detail)
result = r.json()
break
except Exception as e:
print(e)
time.sleep(5)
f.write(json.dumps(result))
f.write('\n')
if current_count > 0 and current_count % 200 == 0:
time.sleep(300)
else:
time.sleep(.5)
# +
path_app_detail = 'data/app_detail.txt'
with open(path_app_detail, 'r') as f:
dic_steam_app = {
'initial_price':{},
'name':{},
'score':{},
'windows':{},
'mac':{},
'linux':{},
'type':{},
'release_date':{},
'recommendation':{},
'header_image':{}
}
lst_raw_string = f.readlines()[:200]
for raw_string in tqdm(lst_raw_string):
try:
app_data = list(json.loads(raw_string).values())[0]
if app_data.get('success'):
app_data = app_data.get('data')
steam_id = app_data.get('steam_appid')
initial_price = app_data.get('price_overview',{}).get('initial')
if app_data.get('is_free') == True:
initial_price = 0
app_name = app_data.get('name')
critic_score = app_data.get('metacritic', {}).get('score')
app_type = app_data.get('type')
for (platform, is_supported) in app_data.get('platforms',{}).items():
if is_supported == True:
dic_steam_app[platform].update({steam_id:1})
else:
dic_steam_app[platform].update({steam_id:0})
if app_data.get('release_date',{}).get('coming_soon') == False:
release_date = app_data.get('release_date',{}).get('date')
if not release_date == '':
try:
release_date = datetime.strptime(release_date, '%b %d, %Y')
except Exception as e:
try:
release_date = datetime.strptime(release_date, '%d %b, %Y')
except:
release_date = None
recommendation = app_data.get('recommendations',{}).get('total')
header_image = app_data.get('header_image')
dic_steam_app['initial_price'].update({steam_id:initial_price})
dic_steam_app['name'].update({steam_id:app_name})
dic_steam_app['score'].update({steam_id:critic_score})
dic_steam_app['type'].update({steam_id:app_type})
dic_steam_app['release_date'].update({steam_id:release_date})
dic_steam_app['recommendation'].update({steam_id:recommendation})
dic_steam_app['header_image'].update({steam_id:header_image})
time.sleep(.1)
except:
pass
# -
# #### Work with MySQL in Python
df_app_info = pd.DataFrame(dic_steam_app)
df_app_info.index.name = 'app_id'
df_app_info.reset_index(inplace=True)
df_app_info.head()
user = ''
password = ''
host = '127.0.0.1'
db_name = 'steam'
engine = create_engine(f'mysql+pymysql://{user}:{password}@{host}/{db_name}?charset=utf8mb4')
df_app_info.to_sql('tbl_app_info', engine, if_exists='replace',index=False)
engine.execute(
'''
select * from tbl_app_info limit 10
''').fetchall()
pd.read_sql_query('''
select * from tbl_app_info limit 10
''', engine)
| NLP/03_01_week_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
plt.rcParams["font.family"] = "Times New Roman"
plt.rcParams["font.size"] = 16
import seaborn as sns
sns.set_style("white")
import warnings
warnings.filterwarnings("ignore")
# %load_ext autoreload
# %autoreload 2
# My packages
from source import parse_mxml as pm
from source import log_representation as lr
from source import plots as plts
from source import drift_detection as dd
from source import drift_localization as dl
from source import offline_streaming_clustering as off_sc
from sklearn.cluster import KMeans
import random
random.seed(42)
import os
import glob
import gc
gc.enable()
# +
def insensitive_glob(pattern):
def either(c):
return '[%s%s]' % (c.lower(), c.upper()) if c.isalpha() else c
return glob.glob(''.join(map(either, pattern)))
def if_any(string, lista):
for l in lista:
if l in string:
return True
return False
# -
logs = insensitive_glob("../process_mining_datasets/*/*k.MXML")
logs = [x for x in logs if "2.5" not in x]
# ### Read and Prep log file
#Example
logs[1]
log_read = pm.all_prep(logs[1])
tokens = lr.get_traces_as_tokens(log_read)
y_true = list(range(int(len(tokens)/10), len(tokens), int(len(tokens)/10)))
# ### Vector space representations
activity_binary = lr.get_binary_representation(tokens)
transitions_binary = lr.get_binary_transitions_representation(tokens)
activity_frequency = lr.get_frequency_representation(tokens)
# ### Trace Clustering - Transitions Binary
run_df = off_sc.run_offline_clustering_window(
KMeans(n_clusters=3, random_state=42),
125,
transitions_binary,
sliding_window=False,
sliding_step=1
)
# ##### Features from the evolution of trace clustering
run_df['avg_dist_between_centroids'].plot(figsize=(16,4), c='red')
plts.plot_drift_vertical_lines(len(activity_binary), label="True drift")
plt.legend();
run_df['avg_dist_intra_cluster'].plot(figsize=(16,4), c='red')
plts.plot_drift_vertical_lines(len(activity_binary), label="True drift")
plt.legend();
# ### Trace Clustering - Activity Binary
# +
clustering_window_size = 125
run_df = off_sc.run_offline_clustering_window(
KMeans(n_clusters=3, random_state=42),
clustering_window_size,
activity_binary,
sliding_window=False,
sliding_step=1
)
# -
# ##### Features from the evolution of trace clustering
run_df['avg_dist_between_centroids'].plot(figsize=(16,4))
plts.plot_drift_vertical_lines(len(activity_binary), label="True drift")
plt.legend();
run_df['Silhouette'].plot(figsize=(16,4))
plts.plot_drift_vertical_lines(len(activity_binary), label="True drift")
plt.legend();
# ### Drift Detection
drifts, info = dd.detect_concept_drift(
run_df,
'avg_dist_between_centroids',
rolling_window=2,
std_tolerance=2.5,
min_tol=0.03
)
dd.get_metrics(drifts, y_true, window_size=clustering_window_size)
plts.plot_deteccao_drift(
run_df,
'avg_dist_between_centroids',
drifts,
y_true,
info['means'],
info['lowers'],
info['uppers'],
save_png=""
)
# ### Drift Localization
dl.localize_drift(
run_df.centroids.loc[500],
run_df.centroids.loc[625],
activity_binary.columns
)
# +
# Result of drift localization in the ground truth drifts
dl.localize_all_drifts(
run_df,
[x + clustering_window_size for x in y_true],
clustering_window_size,
activity_binary.columns
)
# +
# Result of drift localization in all predicted drifts
dl.localize_all_drifts(
run_df,
drifts,
clustering_window_size,
activity_binary.columns
)
| Drift Detection and Localization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.011762, "end_time": "2021-03-07T11:21:15.956829", "exception": false, "start_time": "2021-03-07T11:21:15.945067", "status": "completed"} tags=[]
# ## Summary -> <br>
# In this notebook, we'll learn how to perform Siamese Classification. <br>
#
# Siamese Classification is a One-Shot Learning primarily used in Facial Recognition, Signature Verfication. <br>
#
# Data Preparation for Siamese Network Learning is usually a challenge to deal with. <br>
# Apart from that, comes the network architecture, which has two inputs feeding to a common network architecture. <br>
# <br>
# Let's begin step by step.
# + [markdown] papermill={"duration": 0.010427, "end_time": "2021-03-07T11:21:15.978113", "exception": false, "start_time": "2021-03-07T11:21:15.967686", "status": "completed"} tags=[]
# #### Necessary Imports
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 5.901573, "end_time": "2021-03-07T11:21:21.890466", "exception": false, "start_time": "2021-03-07T11:21:15.988893", "status": "completed"} tags=[]
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Flatten, Dense, Dropout, Lambda
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.python.keras.utils.vis_utils import plot_model
from tensorflow.keras import backend as K
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image, ImageFont, ImageDraw
import random
# + papermill={"duration": 0.418118, "end_time": "2021-03-07T11:21:22.319888", "exception": false, "start_time": "2021-03-07T11:21:21.901770", "status": "completed"} tags=[]
from keras.layers import Input, Conv2D, Lambda, merge, Dense, Flatten, MaxPooling2D, Activation, Dropout
from keras.models import Model, Sequential
from keras.regularizers import l2
from keras import backend as K
from keras.optimizers import Adam
from skimage.io import imshow
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
# + [markdown] papermill={"duration": 0.010673, "end_time": "2021-03-07T11:21:22.341874", "exception": false, "start_time": "2021-03-07T11:21:22.331201", "status": "completed"} tags=[]
# ## Loading the Dataset
# + [markdown] papermill={"duration": 0.010593, "end_time": "2021-03-07T11:21:22.363294", "exception": false, "start_time": "2021-03-07T11:21:22.352701", "status": "completed"} tags=[]
# Using a already processed provided [here](https://www.kaggle.com/arpandhatt/package-data), we'll apply the preprocessing needed for training a Siamese Network Model.
# + papermill={"duration": 7.758146, "end_time": "2021-03-07T11:21:30.132197", "exception": false, "start_time": "2021-03-07T11:21:22.374051", "status": "completed"} tags=[]
npz = np.load('../input/package-data/input_data.npz')
X_train = npz['X_train']
Y_train = npz['Y_train']
del npz
print(f"We have {Y_train.shape[0] - 1000} examples to work with")
# + papermill={"duration": 0.017412, "end_time": "2021-03-07T11:21:30.166898", "exception": false, "start_time": "2021-03-07T11:21:30.149486", "status": "completed"} tags=[]
# + [markdown] papermill={"duration": 0.017175, "end_time": "2021-03-07T11:21:30.203147", "exception": false, "start_time": "2021-03-07T11:21:30.185972", "status": "completed"} tags=[]
# ### Developing the Siamese Network Model
# + papermill={"duration": 3.541566, "end_time": "2021-03-07T11:21:33.762314", "exception": false, "start_time": "2021-03-07T11:21:30.220748", "status": "completed"} tags=[]
left_input = Input((75,75,3))
right_input = Input((75,75,3))
convnet = Sequential([
Conv2D(5,3,input_shape = (75,75,3)),
Activation('relu'),
MaxPooling2D(),
Conv2D(5,3),
Activation('relu'),
MaxPooling2D(),
Conv2D(7,2),
Activation('relu'),
MaxPooling2D(),
Conv2D(7,2),
Activation('relu'),
Flatten(),
Dense(18),
Activation('sigmoid')
])
encoded_l = convnet(left_input)
encoded_r = convnet(right_input)
L1_layer = Lambda(lambda tensor : K.abs(tensor[0] - tensor[1]))
L1_distance = L1_layer([encoded_l, encoded_r])
prediction = Dense(1, activation = 'sigmoid')(L1_distance)
siamese_net = Model(inputs = [left_input, right_input], outputs = prediction)
optimizer = Adam(0.001, decay = 2.5e-4)
siamese_net.compile(loss = 'binary_crossentropy', optimizer = optimizer, metrics = ['accuracy'])
# + papermill={"duration": 0.617956, "end_time": "2021-03-07T11:21:34.392768", "exception": false, "start_time": "2021-03-07T11:21:33.774812", "status": "completed"} tags=[]
plot_model(siamese_net)
# + [markdown] papermill={"duration": 0.012162, "end_time": "2021-03-07T11:21:34.418040", "exception": false, "start_time": "2021-03-07T11:21:34.405878", "status": "completed"} tags=[]
# #### Preprocessing the Dataset
# + [markdown] papermill={"duration": 0.012299, "end_time": "2021-03-07T11:21:34.442657", "exception": false, "start_time": "2021-03-07T11:21:34.430358", "status": "completed"} tags=[]
# We're taking 1000 samples into consideration for initial model training.
# + papermill={"duration": 0.0263, "end_time": "2021-03-07T11:21:34.481443", "exception": false, "start_time": "2021-03-07T11:21:34.455143", "status": "completed"} tags=[]
image_list = np.split(X_train[:1000], 1000)
label_list = np.split(Y_train[:1000], 1000)
# + [markdown] papermill={"duration": 0.012066, "end_time": "2021-03-07T11:21:34.506034", "exception": false, "start_time": "2021-03-07T11:21:34.493968", "status": "completed"} tags=[]
# Preparing the test dataset
# + papermill={"duration": 0.248652, "end_time": "2021-03-07T11:21:34.767000", "exception": false, "start_time": "2021-03-07T11:21:34.518348", "status": "completed"} tags=[]
iceimage = X_train[101]
test_left = []
test_right = []
test_targets = []
for i in range(Y_train.shape[0] - 1000):
test_left.append(iceimage)
test_right.append(X_train[i + 1000])
test_targets.append(Y_train[i + 1000])
test_left = np.squeeze(np.array(test_left))
test_right = np.squeeze(np.array(test_right))
test_targets = np.squeeze(np.array(test_targets))
# + [markdown] papermill={"duration": 0.012339, "end_time": "2021-03-07T11:21:34.792571", "exception": false, "start_time": "2021-03-07T11:21:34.780232", "status": "completed"} tags=[]
# For each image, 10 pairs are created of which 6 are same & 4 are different.
# + _kg_hide-output=true papermill={"duration": 4.036598, "end_time": "2021-03-07T11:21:38.841831", "exception": false, "start_time": "2021-03-07T11:21:34.805233", "status": "completed"} tags=[]
"""
Train Data Creation
"""
left_input = []
right_input = []
targets = []
# define the number of same pairs to be created per image
same = 6
# define the number of different pairs to be created per image
diff = 4
for i in range(len(image_list)):
# obtain the label of the ith image
label_i = int(label_list[i])
#print(label_i)
# get same pairs
print("-"* 10 ); print("Same")
# randomly select 'same' number of items for the images with the 'same' label as that of ith image.
print(random.sample(list(np.where(np.array(label_list) == label_i)[0]), same))
for j in random.sample(list(np.where(np.array(label_list) == label_i)[0]), same):
left_input.append(image_list[i])
right_input.append(image_list[j])
targets.append(1.)
# get diff pairs
print('='*10); print("Different")
# randomly select 'same' number of items for the images with the 'same' label as that of ith image.
print(random.sample(list(np.where(np.array(label_list) != label_i)[0]), diff))
for j in random.sample(list(np.where(np.array(label_list) != label_i)[0]), diff):
left_input.append(image_list[i])
right_input.append(image_list[j])
targets.append(0.)
# + papermill={"duration": 0.737559, "end_time": "2021-03-07T11:21:39.602664", "exception": false, "start_time": "2021-03-07T11:21:38.865105", "status": "completed"} tags=[]
left_input = np.squeeze(np.array(left_input))
right_input = np.squeeze(np.array(right_input))
targets = np.squeeze(np.array(targets))
# + papermill={"duration": 0.022024, "end_time": "2021-03-07T11:21:39.648074", "exception": false, "start_time": "2021-03-07T11:21:39.626050", "status": "completed"} tags=[]
# + papermill={"duration": 0.021399, "end_time": "2021-03-07T11:21:39.691453", "exception": false, "start_time": "2021-03-07T11:21:39.670054", "status": "completed"} tags=[]
# + [markdown] papermill={"duration": 0.021522, "end_time": "2021-03-07T11:21:39.735086", "exception": false, "start_time": "2021-03-07T11:21:39.713564", "status": "completed"} tags=[]
# ## Begin Model Training
# + papermill={"duration": 107.011074, "end_time": "2021-03-07T11:23:26.767843", "exception": false, "start_time": "2021-03-07T11:21:39.756769", "status": "completed"} tags=[]
siamese_net.summary()
siamese_net.fit([left_input, right_input], targets, batch_size = 16, epochs = 30, verbose = 1,
validation_data = ([test_left, test_right], test_targets))
# + papermill={"duration": 0.429255, "end_time": "2021-03-07T11:23:27.620144", "exception": false, "start_time": "2021-03-07T11:23:27.190889", "status": "completed"} tags=[]
| siamese-network-one-shot-learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## organising the data
import pandas as pd
import numpy as np
# creating list variable
name = ["Ben", "Martin", "Andy", "Paul", "Graham", "Carina", "Karina", "Doug", "Mark", "Zoe"]
husband = ["1973-06-21", "1970-07-16", "1949-10-08", "1969-05-24"]
wife = ["1984-11-12", "1973-08-02", "1948-11-11", "1983-07-23"]
# calculating difference in age
agegap = pd.to_datetime(husband) - pd.to_datetime(wife)
agegap
# creating coding variables/factors
job = [1]*5 + [2]*5
job
# creating numeric variable
friends = [5,2,0,4,1,10,12,15,12,17]
income = [20000,40000,35000,22000,50000,5000,100,3000,10000,10]
alcohol = [10,15,20,5,30,25,20,16,17,18]
neurotic = [10,17,14,13,21,7,13,9,14,13]
# creating final dataframe
lecturer_data = pd.DataFrame({'friends':friends,'income':income,'alcohol':alcohol,'neurotic':neurotic, 'job':job })
lecturer_data
lecturer_data['job'] = lecturer_data['job'].replace({1:'Lecturer', 2:'Student'})
lecturer_data
# Missing Data, use None
neurotic = [10,17,np.nan,13,21,7,13,9,14,np.nan]
neurotic
# when data is missing
k = np.nanmean(neurotic) # nanmean() function can be used to calculate the mean of array ignoring the NaN value
k
| Python/statistics_with_Python/03_Python_Enviroment/Markdown_notebook/03_organisingData.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from pymongo import MongoClient
import re, string, nltk, csv
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt
print("Connecting to MongoDB ...")
client = MongoClient('localhost:27017')
db = client['comments']
# -
rawComments = db['rawComments'].find()
def translate_numbers(word):
word = word.replace('2', 'a')
word = word.replace('3', 'a')
word = word.replace('5', "kh")
word = word.replace('7', 'h')
word = word.replace('8', "gh")
word = word.replace('9', "k")
return word
def remove_redundant_letters(word):
return re.sub(r'(.)\1+', r'\1', word)
def cleanComment(comment):
tokens = comment.split()
#ignoring case by converting the words to lowercase letters
tokens = [word.lower() for word in tokens]
# translate arabic phonetic numbers used in tunisian dialect (for example: '7' --> 'h', '5' --> "kh")
tokens = [translate_numbers(w) for w in tokens]
#remove punctuation
table = str.maketrans('', '', string.punctuation)
tokens = [w.translate(table) for w in tokens]
#remove redundant letters (for example: "mahleeeeeh" --> "mahleh")
tokens = [remove_redundant_letters(w) for w in tokens]
#remove short words of length <=2 because in general they are insignificant and will slow down the process
tokens = [word for word in tokens if len(word) > 2]
cleancomment = " ".join(tokens)
return cleancomment
def checkSimilarity(word1, word2):
return nltk.edit_distance(word1, word2) < 2
def sentimentScore(words, dictionary):
scoreComment = tokenCount = 0
for word in words:
for token in dictionary:
if checkSimilarity(word, token[0]):
if token[1] != "":
scoreComment = scoreComment + int(token[1])
tokenCount = tokenCount + 1
break
if tokenCount != 0:
scoreComment = scoreComment / tokenCount
return scoreComment
# +
dictionary = []
with open('C:/Users/INFOTEC/Desktop/PI/cleanDictionary.csv', 'r', newline='',encoding="utf8") as dictionaryFile:
dictionaryReader = csv.reader(dictionaryFile, delimiter=',')
i = 0
for row in dictionaryReader:
if (row[1] == 0):
continue
dictionary.append([row[0], row[1]])
print("Cleaning comments")
# +
for comment in rawComments:
existant = db['cleanComments'].find({"id": comment["id"]}).count()
if existant:
continue
cleancomment = cleanComment(comment["review"])
words = cleancomment.split()
score = sentimentScore(words, dictionary)
db.cleanComments.insert({
"_id": comment["_id"],
"id": comment["id"],
"review": cleancomment,
"score": score
})
# -
print(rawComments)
wordcloud = WordCloud(width = 800, height = 800,
background_color ='black',
min_font_size = 10).generate(cleancomment)
plt.figure()
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
wordcloud.to_file('C:/Users/INFOTEC/Desktop/PI/world.png')
| Data Processing/NLP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
from tensorflow import keras
# +
import glob
import json
import pandas as pd
import os
import gzip
import re
from nltk.stem import WordNetLemmatizer
from nltk import pos_tag
from nltk.corpus import stopwords
import numpy as np
import pandas as pd
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
#Calculate accuracy
# -
def read_data(directory):
dfs = []
for label in ['real', 'fake']:
for file in glob.glob(directory + os.path.sep + label + os.path.sep + '*gz'):
print('reading %s' % file)
df = pd.DataFrame((json.loads(line) for line in gzip.open(file)))
df['label'] = label
dfs.append(df)
df=pd.concat(dfs)[['publish_date', 'source', 'text', 'title', 'tweets', 'label']]
list_text = [i for i in list(df.text) if i != '']
return df[df.text.isin(list_text)]
directory = r'C:\Users\lenovo\Desktop\IIT\training_data_2'
df = read_data(directory)
# +
def get_text(list):
stopword=set(stopwords.words('english'))
list_new=[]
for l in list:
l=re.sub(r"[^\w']",' ',l).lower()
l1=[tokennizer(w) for w in l.split() if len(tokennizer(w))>2]
l=' '.join(l1)
l1=[tokennizer(w) for w in l.split() if len(tokennizer(w))>2 and tokennizer(w) not in stopword]
l=' '.join(lemmatize(l1))
list_new.append(l)
return list_new
def tokennizer(s):
s = re.sub(r'http\S+', '', s)
s = re.sub(r'[0-9_\s]+', '', s)
s = re.sub(r"[^'\w]+", '', s)
s = re.compile(r"(?<=[a-zA-Z])'re").sub(' are', s)
s = re.compile(r"(?<=[a-zA-Z])'m").sub(' am', s)
s = re.compile(r"(?<=[a-zA-Z])'ve").sub(' have', s)
s = re.compile(r"(it|he|she|that|this|there|here|what|where|when|who|why|which)('s)").sub(r"\1 is", s)
s = re.sub(r"'s", "", s)
s = re.sub(r"can't", 'can not', s)
s = re.compile(r"(?<=[a-zA-Z])n't").sub(' not', s)
s = re.compile(r"(?<=[a-zA-Z])'ll").sub(' will', s)
s = re.compile(r"(?<=[a-zA-Z])'d").sub(' would', s)
return s
def lemmatize(l):
wnl = WordNetLemmatizer()
for word, tag in pos_tag(l):
if tag.startswith('NN'):
yield wnl.lemmatize(word, pos='n')
elif tag.startswith('VB'):
yield wnl.lemmatize(word, pos='v')
elif tag.startswith('JJ'):
yield wnl.lemmatize(word, pos='a')
elif tag.startswith('R'):
yield wnl.lemmatize(word, pos='r')
else:
yield word
# -
text = get_text(list(df.text))
vec1 = TfidfVectorizer(min_df=2, max_df=1., ngram_range=(1, 1),stop_words= 'english')
X = vec1.fit_transform(text)
y = np.array(df.label)
for i in range(len(y)):
if y[i] == 'real':
y[i] = 0
else:
y[i] = 1
np.set_printoptions(threshold = 10000)
X = X.todense()
X = np.array(X)
X.shape
from tensorflow.python.keras.layers import Dropout, Flatten
dropout_rate = .5
model = keras.Sequential()
model.add(keras.layers.Dense(16,input_shape=(9969,)))
model.add(keras.layers.Dense(16, activation='relu'))
model.add(Dropout(rate=dropout_rate))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
np.random.seed(116)
np.random.shuffle(X)
np.random.seed(116)
np.random.shuffle(y)
# +
x_val = X[:300]
partial_x_train = X[300:500]
y_val = y[:300]
partial_y_train = y[300:500]
testX = X[500:]
testy = y[500:]
# -
y_val
history = model.fit(partial_x_train,
partial_y_train,
epochs=120,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
# +
results = model.evaluate(testX, testy)
print(results)
# -
history_dict = history.history
history_dict.keys()
# +
import matplotlib.pyplot as plt
acc = history_dict['acc']
val_acc = history_dict['val_acc']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# +
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
| notebooks/W3L3_Zhicheng-L_TensorFlow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_amazonei_tensorflow_p36
# language: python
# name: conda_amazonei_tensorflow_p36
# ---
# # SageMaker와 RoboMaker로 분산 DeepRacer RL 훈련
#
# ---
# ## 소개
#
#
# 이 노트북에서는 Amazon SageMaker RL 및 AWS RoboMaker의 3D 운전 시뮬레이터를 사용하여 강화 학습을 사용하여 완전 자율 1/18 스케일 경주 용 자동차를 훈련시킵니다. [AWS RoboMaker](https://console.aws.amazon.com/robomaker/home#welcome) 는 개발자가 로봇 공학 애플리케이션을 쉽게 개발, 테스트 및 배포할 수 있는 서비스입니다.
#
# 이 노트북은 [AWS DeepRacer](https://console.aws.amazon.com/deepracer/home#welcome)의 탈옥 경험을 제공하여 훈련/시뮬레이션 프로세스 및 RL 알고리즘 튜닝을 보다 강력하게 제어합니다.
#
# 
#
#
# ---
# ## 어떻게 동작하나요?
#
# 
#
# 강화 학습 에이전트(즉, 자율 주행차)는 보상 기댓값(expected rward)을 극대화하기 위해 주어진 상태(state)에서 행동(action)를 취함으로써 환경과의 상호 작용(예: 트랙)을 통해 운전하는 법을 배웁니다. 에이전트는 반복되는 에피소드를 통해 시행 착오(trial-and-error)를 통해 훈련에서 최적의 행동 계획을 학습합니다.
#
# 위 그림은 SageMaker 및 **롤아웃**을 수행하는 두 개의 RoboMaker 시뮬레이션 환경에서 분산 RL 훈련의 예시를 보여줍니다. 현재 모델 또는 정책(policy)을 사용하여 고정된 수의 에피소드를 실행합니다. 롤아웃은 에이전트 경험(상태 전이 튜플)을 수집하고 이 데이터를 SageMaker와 공유하여 훈련합니다. SageMaker는 모델 정책을 업데이트한 후, 다음 롤아웃 시퀀스를 실행하는 데 사용됩니다. 이 훈련 루프는 모델이 수렴할 때까지, 즉 자동차가 운전하는 법을 배우고 트랙을 벗어나는 것을 멈출 때까지 계속됩니다. 보다 엄밀하게, 다음과 같은 관점에서 문제를 정의할 수 있습니다.
#
# 1. **목표(Objective)**: 트랙 중앙에 가까이 머물면서 자율적으로 운전하는 법을 배웁니다.
# 2. **환경(Environment)**: AWS RoboMaker에서 호스팅되는 3D 구동 시뮬레이터
# 3. **상태(State)**: 위 그림과 같이 자동차 헤드 카메라로 캡처한 주행 POV 이미지
# 4. **행동(Action)**: 다른 각도에서 6개의 이산(discrete) 스티어링 휠 위치 (구성 가능)
# 5. **보상(Reward)**: 중심선(center line)에 가깝게 머무르는 것에 대한 긍정적인 보상; 오프 트랙에 대한 높은 페널티. 보상 함수는 구성 가능하며 더 복잡하게 만들 수도 있습니다. (예: steering 페널티 추가 가능)
# ## 전제 조건(Pre-requisites)
# ### 시뮬레이션 응용 프로그램을 변경하려는 경우에만 아래 명령을 사용하세요 (RoboMaker 코드 변경).
# +
# #
# # Run these commands if you want to modify the simapp
# #
# # Clean the build directory if present
# # !python3 sim_app_bundler.py --clean
# # Download Robomaker simApp from the deepracer public s3 bucket
# simulation_application_bundle_location = "s3://deepracer-managed-resources-us-east-1/deepracer-simapp.tar.gz"
# # !aws s3 cp {simulation_application_bundle_location} ./
# # Untar the simapp bundle
# # !python3 sim_app_bundler.py --untar ./deepracer-simapp.tar.gz
# # Now modify the simapp from build directory and run this command.
# # Most of the simapp files can be found here (Robomaker changes)
# # bundle/opt/install/sagemaker_rl_agent/lib/python3.5/site-packages/
# # bundle/opt/install/deepracer_simulation_environment/share/deepracer_simulation_environment/
# # bundle/opt/install/deepracer_simulation_environment/lib/deepracer_simulation_environment/
# # # Copying the notebook src/markov changes to the simapp (For sagemaker container)
# # !rsync -av ./src/markov/ ./build/simapp/bundle/opt/install/sagemaker_rl_agent/lib/python3.5/site-packages/markov
# # !python3 sim_app_bundler.py --tar
# -
# ### 라이브러리 임포트
# 시작하기 위해, 필요한 Python 라이브러리를 가져와서 권한 및 구성을 위한 몇 가지 전제 조건으로 환경을 설정합니다.
#
# 로컬 시스템 또는 SageMaker 노트북 인스턴스에서 이 노트북을 실행할 수 있습니다. 이 두 시나리오에서 다음 코드 셀들을 실행하여 SageMaker에서 훈련 작업을 시작하고 RoboMaker에서 시뮬레이션 작업을 시작할 수 있습니다.
import boto3
import sagemaker
import sys
import os
import re
import numpy as np
import subprocess
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from docker_utils import build_and_push_docker_image
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from time import gmtime, strftime
import time
from IPython.display import Markdown
from markdown_helper import *
# ### 기본 파라메터 초기화
# +
# Select the instance type
instance_type = "ml.c4.2xlarge"
#instance_type = "ml.p2.xlarge"
#instance_type = "ml.c5.4xlarge"
# Starting SageMaker session
sage_session = sagemaker.session.Session()
# Create unique job name.
job_name_prefix = 'deepracer-notebook'
# Duration of job in seconds (1 hours)
job_duration_in_seconds = 3600
# AWS Region
aws_region = sage_session.boto_region_name
if aws_region not in ["us-west-2", "us-east-1", "eu-west-1"]:
raise Exception("This notebook uses RoboMaker which is available only in US East (N. Virginia),"
"US West (Oregon) and EU (Ireland). Please switch to one of these regions.")
# -
# ### S3 버킷 설정
#
# 체크포인트(checkpoint) 및 메타데이터에 사용하려는 S3 버킷에 대한 연결 및 인증을 설정합니다.
# +
# S3 bucket
s3_bucket = sage_session.default_bucket()
# SDK appends the job name and output folder
s3_output_path = 's3://{}/'.format(s3_bucket)
#Ensure that the S3 prefix contains the keyword 'sagemaker'
s3_prefix = job_name_prefix + "-sagemaker-" + strftime("%y%m%d-%H%M%S", gmtime())
# Get the AWS account id of this account
sts = boto3.client("sts")
account_id = sts.get_caller_identity()['Account']
print("Using s3 bucket {}".format(s3_bucket))
print("Model checkpoints and other metadata will be stored at: \ns3://{}/{}".format(s3_bucket, s3_prefix))
# -
# ### IAM 역할 생성
#
# SageMaker 노트북 `role = sagemaker.get_execution_role()`을 실행할 때 실행 역할(execution role)을 얻거나 로컬 시스템에서 실행할 때 utils 메소드 `role = get_execution_role()`을 사용하여 실행 역할을 작성하세요.
# +
try:
sagemaker_role = sagemaker.get_execution_role()
except:
sagemaker_role = get_execution_role('sagemaker')
print("Using Sagemaker IAM role arn: \n{}".format(sagemaker_role))
# -
# > 시뮬레이터는 AWS RoboMaker 서비스를 기반으로 하므로, 이 노트북은 `SageMaker 로컬 모드`에서 실행할 수 없습니다.
# ### 이 노트북에서 AWS RoboMaker를 호출하기 위한 권한 설정
#
# 이 노트북이 AWS RoboMaker 작업을 실행할 수 있게 하려면, 이 노트북의 기본 실행 역할에 하나의 신뢰 관계(trust relationshop)를 추가해야 합니다.
display(Markdown(generate_help_for_robomaker_trust_relationship(sagemaker_role)))
# ### SageMaker에서 S3 버킷으로의 권한 설정
#
# SageMaker는 Redis IP 주소 모델을 S3 버킷에 씁니다. 따라서, 버킷에 대한 `PutObject` 권한이 필요합니다. 이 권한으로 사용중인 sagemaker 역할(role)을 확인하세요.
display(Markdown(generate_s3_write_permission_for_sagemaker_role(sagemaker_role)))
#
# ### SageMaker가 KinesisVideoStream을 생성할 수 있는 권한 설정
#
# SageMaker 노트북은 kinesis 비디오 스트리머를 생성해야 합니다. kinesis 비디오 스트리머에서 자동차가 에피소드를 생성하는 것을 관찰할 수 있습니다.
display(Markdown(generate_kinesis_create_permission_for_sagemaker_role(sagemaker_role)))
# ### 도커 이미지 빌드 및 푸시
#
# ./Dockerfile 파일은 docker에 설치된 모든 패키지를 포함합니다. 기본 sagemaker 컨테이너를 사용하는 대신 이 도커 컨테이너를 사용할 것입니다.
# %%time
from copy_to_sagemaker_container import get_sagemaker_docker, copy_to_sagemaker_container, get_custom_image_name
cpu_or_gpu = 'gpu' if instance_type.startswith('ml.p') else 'cpu'
repository_short_name = "sagemaker-docker-%s" % cpu_or_gpu
custom_image_name = get_custom_image_name(repository_short_name)
try:
print("Copying files from your notebook to existing sagemaker container")
sagemaker_docker_id = get_sagemaker_docker(repository_short_name)
copy_to_sagemaker_container(sagemaker_docker_id, repository_short_name)
except Exception as e:
print("Creating sagemaker container")
docker_build_args = {
'CPU_OR_GPU': cpu_or_gpu,
'AWS_REGION': boto3.Session().region_name,
}
custom_image_name = build_and_push_docker_image(repository_short_name, build_args=docker_build_args)
print("Using ECR image %s" % custom_image_name)
# ### 도커 이미지 제거
#
# Docker를 완전히 제거하거나 SageMaker 인스턴스의 공간을 정리하려는 경우에만 제거하세요.
# +
# # !docker rm -f $(docker ps -a -q);
# # !docker rmi -f $(docker images -q);
# -
# ### VPC 구성
#
# SageMaker와 RoboMaker는 네트워크를 통해 서로 통신해야 하므로, 이러한 서비스는 모두 VPC 모드에서 실행해야 합니다. 작업 시작 스크립트에 서브넷과 보안 그룹을 제공하면 됩니다.
# deepracer-vpc 스택이 생성되었는지 확인하고 존재하는 경우 이를 사용합니다. (AWS Deepracer 콘솔을 한 번 이상 사용하여 모델을 생성하는 경우에 존재 합니다). 그렇지 않으면 기본 VPC 스택을 사용합니다.
# +
ec2 = boto3.client('ec2')
#
# Check if the user has Deepracer-VPC and use that if its present. This will have all permission.
# This VPC will be created when you have used the Deepracer console and created one model atleast
# If this is not present. Use the default VPC connnection
#
deepracer_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups']\
if group['GroupName'].startswith("aws-deepracer-")]
# deepracer_security_groups = False
if(deepracer_security_groups):
print("Using the DeepRacer VPC stacks. This will be created if you run one training job from console.")
deepracer_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] \
if "Tags" in vpc for val in vpc['Tags'] \
if val['Value'] == 'deepracer-vpc'][0]
deepracer_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == deepracer_vpc]
else:
print("Using the default VPC stacks")
deepracer_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] if vpc["IsDefault"] == True][0]
deepracer_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups'] \
if 'VpcId' in group and group["GroupName"] == "default" and group["VpcId"] == deepracer_vpc]
deepracer_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == deepracer_vpc and subnet['DefaultForAz']==True]
print("Using VPC:", deepracer_vpc)
print("Using security group:", deepracer_security_groups)
print("Using subnets:", deepracer_subnets)
# -
# ### 라우트 테이블 생성
#
# VPC 모드에서 실행 중인 SageMaker 작업은 S3 자원에 액세스할 수 없습니다. 따라서, SageMaker 컨테이너에서 S3에 액세스할 수 있도록 VPC S3 엔드포인트를 생성해야 합니다. VPC 모드에 대한 자세한 내용을 보려면, [이 링크](https://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html)를 방문하세요.
# +
#TODO: Explain to customer what CREATE_ROUTE_TABLE is doing
CREATE_ROUTE_TABLE = True
def create_vpc_endpoint_table():
print("Creating ")
try:
route_tables = [route_table["RouteTableId"] for route_table in ec2.describe_route_tables()['RouteTables']\
if route_table['VpcId'] == deepracer_vpc]
except Exception as e:
if "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(sagemaker_role)))
else:
display(Markdown(create_s3_endpoint_manually(aws_region, deepracer_vpc)))
raise e
print("Trying to attach S3 endpoints to the following route tables:", route_tables)
if not route_tables:
raise Exception(("No route tables were found. Please follow the VPC S3 endpoint creation "
"guide by clicking the above link."))
try:
ec2.create_vpc_endpoint(DryRun=False,
VpcEndpointType="Gateway",
VpcId=deepracer_vpc,
ServiceName="com.amazonaws.{}.s3".format(aws_region),
RouteTableIds=route_tables)
print("S3 endpoint created successfully!")
except Exception as e:
if "RouteAlreadyExists" in str(e):
print("S3 endpoint already exists.")
elif "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
raise e
else:
display(Markdown(create_s3_endpoint_manually(aws_region, deepracer_vpc)))
raise e
if CREATE_ROUTE_TABLE:
create_vpc_endpoint_table()
# -
# ## 환경 설정
#
# 환경은 "deepracer_racetrack_env.py"라는 파이썬 파일에 정의되어 있으며, 파일은 `src/markov/environments/`에서 찾을 수 있습니다. 이 파일은 Gazebo 기반 RoboMakersimulator의 gym 인터페이스를 구현합니다. 이것은 SageMaker와 RoboMaker에서 사용하는 공통 환경 파일입니다. 환경 변수-`NODE_TYPE`은 코드가 실행 중인 노드를 정의합니다. 따라서 `rospy` 의존성이 있는 표현식은 RoboMaker에서만 실행됩니다.
#
# `src/markov/rewards/`에서 `reward_function`을 수정하여 다양한 보상 함수를 실험할 수 있습니다. 또한, `src/markov/actions/`.json 파일을 수정하여 행동 공간(action space)과 조향 각도(steering angles)를 변경할 수 있습니다
#
# ### RL 알고리즘에 대한 사전 설정 구성
#
# RL 훈련 작업을 구성하는 파라미터는 `src/markov/presets/`에 정의되어 있습니다. 사전 설정 파일을 사용하여 에이전트 파라메터들을 정의하여 특정 에이전트 알고리즘을 선택할 수 있습니다. 이 예에서는 Clipped PPO를 사용하는 것이 좋습니다. 이 파일을 편집하여 학습률(learning_rate), 신경망 구조, 배치 크기(batch_size), 감가율(discount factor) 등과 같은 알고리즘 매개 변수를 수정할 수 있습니다.
# +
# Uncomment the pygmentize code lines to see the code
# Reward function
# #!pygmentize src/markov/rewards/default.py
# Action space
# #!pygmentize src/markov/actions/single_speed_stereo_shallow.json
# Preset File
# #!pygmentize src/markov/presets/default.py
# #!pygmentize src/markov/presets/preset_attention_layer.py
# -
# ### SageMaker 및 RoboMaker가 선택할 수 있도록 사용자 정의 파일을 S3 버킷으로 복사
# +
s3_location = "s3://%s/%s" % (s3_bucket, s3_prefix)
print(s3_location)
# Clean up the previously uploaded files
# !aws s3 rm --recursive {s3_location}
# !aws s3 cp src/markov/rewards/default.py {s3_location}/customer_reward_function.py
# !aws s3 cp src/markov/actions/default.json {s3_location}/model/model_metadata.json
# #!aws s3 cp src/markov/presets/default.py {s3_location}/presets/preset.py
# #!aws s3 cp src/markov/presets/preset_attention_layer.py {s3_location}/presets/preset.py
# -
# ### Python SDK / 스크립트 모드를 사용하여 RL 모델 훈련
#
# 다음으로, 훈련 진행 상황을 모니터링하기 위해, cloudwatch 로그에서 캡처할 알고리즘 지표들을 아래와 같이 정의합니다. 이들은 알고리즘에 특화된 파라메터들이며 알고리즘마다 다를 수 있습니다. 이 예에서는 [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html)를 사용합니다.
metric_definitions = [
# Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1
{'Name': 'reward-training',
'Regex': '^Training>.*Total reward=(.*?),'},
# Policy training> Surrogate loss=-0.32664725184440613, KL divergence=7.255815035023261e-06, Entropy=2.83156156539917, training epoch=0, learning_rate=0.00025
{'Name': 'ppo-surrogate-loss',
'Regex': '^Policy training>.*Surrogate loss=(.*?),'},
{'Name': 'ppo-entropy',
'Regex': '^Policy training>.*Entropy=(.*?),'},
# Testing> Name=main_level/agent, Worker=0, Episode=19, Total reward=1359.12, Steps=20015, Training iteration=2
{'Name': 'reward-testing',
'Regex': '^Testing>.*Total reward=(.*?),'},
]
# RL 작업을 훈련하기 위해 RLEstimator를 사용합니다.
#
# 1. 환경, 사전 설정 및 훈련 코드가 업로드되는 소스 디렉토리를 지정합니다.
# 2. 엔트리포인트를 훈련 코드로 지정합니다.
# 3. RL 툴킷 및 프레임워크 선택을 지정합니다. RL 컨테이너의 ECR 경로로 자동 확인 가능합니다.
# 4. 모델 체크포인트 및 메타데이터를 저장하기 위해 인스턴스 수, 인스턴스 유형, 작업 이름, s3_bucket 및 s3_prefix와 같은 학습 파라메터들을 정의합니다. **현재 1개의 훈련 인스턴스 만 지원됩니다.**
# 5. 이 예에서는 RLCOACH_PRESET을 "deepracer"로 설정합니다.
# 6. 로그에서 캡처하려는 지표를 정의합니다. CloudWatch 및 SageMaker Notebook에서도 시각화할 수 있습니다.
# +
estimator = RLEstimator(entry_point="training_worker.py",
source_dir='src',
image_name=custom_image_name,
dependencies=["common/"],
role=sagemaker_role,
train_instance_type=instance_type,
train_instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
metric_definitions=metric_definitions,
train_max_run=job_duration_in_seconds,
hyperparameters={
"s3_bucket": s3_bucket,
"s3_prefix": s3_prefix,
"aws_region": aws_region,
"model_metadata_s3_key": "%s/model/model_metadata.json" % s3_prefix,
"reward_function_s3_source": "%s/customer_reward_function.py" % s3_prefix,
"batch_size": "64",
"num_epochs": "10",
"stack_size": "1",
"lr": "0.0003",
"exploration_type": "Categorical",
"e_greedy_value": "1",
"epsilon_steps": "10000",
"beta_entropy": "0.01",
"discount_factor": "0.999",
"loss_type": "Huber",
"num_episodes_between_training": "20"
},
subnets=deepracer_subnets,
security_group_ids=deepracer_security_groups,
)
estimator.fit(wait=False)
job_name = estimator.latest_training_job.job_name
print("Training job: %s" % job_name)
# -
training_job_arn = estimator.latest_training_job.describe()['TrainingJobArn']
# ### Kinesis 비디오 스트림 생성
# +
kvs_stream_name = "dr-kvs-{}".format(job_name)
# !aws --region {aws_region} kinesisvideo create-stream --stream-name {kvs_stream_name} --media-type video/h264 --data-retention-in-hours 24
print ("Created kinesis video stream {}".format(kvs_stream_name))
# -
# ### RoboMaker 작업 시작
robomaker = boto3.client("robomaker")
# ### Simulation 어플리케이션 생성
robomaker_s3_key = 'robomaker/simulation_ws.tar.gz'
robomaker_source = {'s3Bucket': s3_bucket,
's3Key': robomaker_s3_key,
'architecture': "X86_64"}
simulation_software_suite={'name': 'Gazebo',
'version': '7'}
robot_software_suite={'name': 'ROS',
'version': 'Kinetic'}
rendering_engine={'name': 'OGRE',
'version': '1.x'}
# RoboMaker 서비스에서 제공하는 DeepRacer 번들을 다운로드하고 S3 버킷에 업로드하여 RoboMaker 시뮬레이션 애플리케이션을 작성하세요.
if not os.path.exists('./build/output.tar.gz'):
print("Using the latest simapp from public s3 bucket")
# Download Robomaker simApp for the deepracer public s3 bucket
simulation_application_bundle_location = "s3://deepracer-managed-resources-us-east-1/deepracer-simapp.tar.gz"
# !aws s3 cp {simulation_application_bundle_location} ./
# Remove if the Robomaker sim-app is present in s3 bucket
# !aws s3 rm s3://{s3_bucket}/{robomaker_s3_key}
# Uploading the Robomaker SimApp to your S3 bucket
# !aws s3 cp ./deepracer-simapp.tar.gz s3://{s3_bucket}/{robomaker_s3_key}
# Cleanup the locally downloaded version of SimApp
# !rm deepracer-simapp.tar.gz
else:
print("Using the simapp from build directory")
# !aws s3 cp ./build/output.tar.gz s3://{s3_bucket}/{robomaker_s3_key}
# +
app_name = "deepracer-notebook-application" + strftime("%y%m%d-%H%M%S", gmtime())
print(app_name)
try:
response = robomaker.create_simulation_application(name=app_name,
sources=[robomaker_source],
simulationSoftwareSuite=simulation_software_suite,
robotSoftwareSuite=robot_software_suite,
renderingEngine=rendering_engine)
simulation_app_arn = response["arn"]
print("Created a new simulation app with ARN:", simulation_app_arn)
except Exception as e:
if "AccessDeniedException" in str(e):
display(Markdown(generate_help_for_robomaker_all_permissions(role)))
raise e
else:
raise e
# -
# ### RoboMaker에서 시뮬레이션 작업 시작
#
# 환경을 시뮬레이션하고 훈련을 위해, 이 데이터를 SageMaker와 공유하는 [AWS RoboMaker](https://console.aws.amazon.com/robomaker/home#welcome) 시뮬레이션 작업을 생성합니다.
# +
s3_yaml_name="training_params.yaml"
world_name = "reInvent2019_track"
# !touch {s3_yaml_name}
# !echo "WORLD_NAME: \"{world_name}\"" | tee -a {s3_yaml_name}
# !echo "SAGEMAKER_SHARED_S3_BUCKET: \"{s3_bucket}\"" | tee -a {s3_yaml_name}
# !echo "SAGEMAKER_SHARED_S3_PREFIX: \"{s3_prefix}\"" | tee -a {s3_yaml_name}
# !echo "TRAINING_JOB_ARN: \"{training_job_arn}\"" | tee -a {s3_yaml_name}
# !echo "METRICS_S3_BUCKET: \"{s3_bucket}\"" | tee -a {s3_yaml_name}
# !echo "METRICS_S3_OBJECT_KEY: \"{s3_prefix}/training_metrics.json\"" | tee -a {s3_yaml_name}
# !echo "AWS_REGION: \"{aws_region}\"" | tee -a {s3_yaml_name}
# !echo "TARGET_REWARD_SCORE: \"None\"" | tee -a {s3_yaml_name}
# !echo "NUMBER_OF_EPISODES: \"0\"" | tee -a {s3_yaml_name}
# !echo "ROBOMAKER_SIMULATION_JOB_ACCOUNT_ID: \"{account_id}\"" | tee -a {s3_yaml_name}
# !echo "JOB_TYPE: \"TRAINING\"" | tee -a {s3_yaml_name}
# !echo "CHANGE_START_POSITION: \"true\"" | tee -a {s3_yaml_name}
# !echo "ALTERNATE_DRIVING_DIRECTION: \"false\"" | tee -a {s3_yaml_name}
# !echo "KINESIS_VIDEO_STREAM_NAME: \"{kvs_stream_name}\"" | tee -a {s3_yaml_name}
# !echo "REWARD_FILE_S3_KEY: \"{s3_prefix}/customer_reward_function.py\"" | tee -a {s3_yaml_name}
# !echo "MODEL_METADATA_FILE_S3_KEY: \"{s3_prefix}/model/model_metadata.json\"" | tee -a {s3_yaml_name}
# !echo "NUMBER_OF_OBSTACLES: \"0\"" | tee -a {s3_yaml_name}
# !echo "MIN_DISTANCE_BETWEEN_OBSTACLES: \"2.0\"" | tee -a {s3_yaml_name}
# !echo "RANDOMIZE_OBSTACLE_LOCATIONS: \"false\"" | tee -a {s3_yaml_name}
# !echo "PSEUDO_RANDOMIZE_OBSTACLE_LOCATIONS: \"false\"" | tee -a {s3_yaml_name}
# !echo "NUMBER_OF_PSEUDO_RANDOM_PLACEMENTS: \"2\"" | tee -a {s3_yaml_name}
# !echo "IS_OBSTACLE_BOT_CAR: \"false\"" | tee -a {s3_yaml_name}
# !echo "IS_LANE_CHANGE: \"false\"" | tee -a {s3_yaml_name}
# !echo "LOWER_LANE_CHANGE_TIME: \"3.0\"" | tee -a {s3_yaml_name}
# !echo "UPPER_LANE_CHANGE_TIME: \"5.0\"" | tee -a {s3_yaml_name}
# !echo "LANE_CHANGE_DISTANCE: \"1.0\"" | tee -a {s3_yaml_name}
# !echo "NUMBER_OF_BOT_CARS: \"6\"" | tee -a {s3_yaml_name}
# !echo "MIN_DISTANCE_BETWEEN_BOT_CARS: \"2.0\"" | tee -a {s3_yaml_name}
# !echo "RANDOMIZE_BOT_CAR_LOCATIONS: \"false\"" | tee -a {s3_yaml_name}
# !echo "BOT_CAR_SPEED: \"0.2\"" | tee -a {s3_yaml_name}
# !echo "CAR_COLOR: \"LightBlue\"" | tee -a {s3_yaml_name}
# !echo "FIRST_PERSON_VIEW: \"false\"" | tee -a {s3_yaml_name}
# !echo "CHANGE_START_POSITION: \"true\"" | tee -a {s3_yaml_name}
print("Upload yaml settings to S3")
# !aws s3 cp ./training_params.yaml {s3_location}/training_params.yaml
# !rm training_params.yaml
# +
num_simulation_workers = 1
envriron_vars = {
"S3_YAML_NAME": s3_yaml_name,
"SAGEMAKER_SHARED_S3_PREFIX": s3_prefix,
"SAGEMAKER_SHARED_S3_BUCKET": s3_bucket,
"WORLD_NAME": world_name,
"KINESIS_VIDEO_STREAM_NAME": kvs_stream_name,
"APP_REGION": aws_region,
"MODEL_METADATA_FILE_S3_KEY": "%s/model/model_metadata.json" % s3_prefix
}
simulation_application = {"application":simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation_environment",
"launchFile": "distributed_training.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": deepracer_subnets,
"securityGroups": deepracer_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
client_request_token = strftime("%Y-%m-%d-%H-%M-%S", gmtime())
response = robomaker.create_simulation_job(iamRole=sagemaker_role,
clientRequestToken=client_request_token,
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig
)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for response in responses:
print("Job ARN", response["arn"])
# -
# ### RoboMaker에서 시뮬레이션 시각화
# RoboMaker 콘솔을 방문하여 시뮬레이션을 시각화하거나 다음 셀을 실행하여 하이퍼링크를 생성할 수 있습니다.
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
# ### 임시 폴더 생성
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
# ### 훈련 작업의 지표 plot
# +
# %matplotlib inline
import pandas as pd
import json
training_metrics_file = "training_metrics.json"
training_metrics_path = "{}/{}".format(s3_bucket, training_metrics_file)
wait_for_s3_object(s3_bucket, training_metrics_path, tmp_dir)
json_file = "{}/{}".format(tmp_dir, training_metrics_file)
with open(json_file) as fp:
data = json.load(fp)
df = pd.DataFrame(data['metrics'])
x_axis = 'episode'
y_axis = 'reward_score'
plt = df.plot(x=x_axis,y=y_axis, figsize=(12,5), legend=True, style='b-')
plt.set_ylabel(y_axis);
plt.set_xlabel(x_axis);
# -
# ### RoboMaker 및 SageMaker 훈련 작업 삭제
#
# RoboMaker 및 SageMaker 작업을 종료하려면 아래 셀을 실행하세요
# +
# # Cancelling robomaker job
# for job_arn in job_arns:
# robomaker.cancel_simulation_job(job=job_arn)
# # Stopping sagemaker training job
# sage_session.sagemaker_client.stop_training_job(TrainingJobName=estimator._current_job_name)
# -
# ### 평가 - ReInvent 트랙
# +
s3_yaml_name="evaluation_params.yaml"
world_name = "reInvent2019_track"
# !touch {s3_yaml_name}
# echo "WORLD_NAME: \"{world_name}\"" | tee {s3_yaml_name}
# echo "MODEL_S3_BUCKET: \"{s3_bucket}\"" | tee -a {s3_yaml_name}
# echo "MODEL_S3_PREFIX: \"{s3_prefix}\"" | tee -a {s3_yaml_name}
# echo "AWS_REGION: \"{aws_region}\"" | tee -a {s3_yaml_name}
# echo "METRICS_S3_BUCKET: \"{s3_bucket}\"" | tee -a {s3_yaml_name}
# echo "METRICS_S3_OBJECT_KEY: \"{s3_prefix}/evaluation_metrics.json\"" | tee -a {s3_yaml_name}
# echo "NUMBER_OF_TRIALS: \"10\"" | tee -a {s3_yaml_name}
# echo "ROBOMAKER_SIMULATION_JOB_ACCOUNT_ID: \"{account_id}\"" | tee -a {s3_yaml_name}
# echo "JOB_TYPE: \"EVALUATION\"" | tee -a {s3_yaml_name}
# echo "NUMBER_OF_OBSTACLES: \"0\"" | tee -a {s3_yaml_name}
# echo "MIN_DISTANCE_BETWEEN_OBSTACLES: \"2.0\"" | tee -a {s3_yaml_name}
# echo "RANDOMIZE_OBSTACLE_LOCATIONS: \"false\"" | tee -a {s3_yaml_name}
# echo "PSEUDO_RANDOMIZE_OBSTACLE_LOCATIONS: \"false\"" | tee -a {s3_yaml_name}
# echo "NUMBER_OF_PSEUDO_RANDOM_PLACEMENTS: \"2\"" | tee -a {s3_yaml_name}
# echo "IS_OBSTACLE_BOT_CAR: \"false\"" | tee -a {s3_yaml_name}
# echo "IS_LANE_CHANGE: \"false\"" | tee -a {s3_yaml_name}
# echo "LOWER_LANE_CHANGE_TIME: \"3.0\"" | tee -a {s3_yaml_name}
# echo "UPPER_LANE_CHANGE_TIME: \"5.0\"" | tee -a {s3_yaml_name}
# echo "LANE_CHANGE_DISTANCE: \"1.0\"" | tee -a {s3_yaml_name}
# echo "NUMBER_OF_BOT_CARS: \"6\"" | tee -a {s3_yaml_name}
# echo "MIN_DISTANCE_BETWEEN_BOT_CARS: \"2.0\"" | tee -a {s3_yaml_name}
# echo "RANDOMIZE_BOT_CAR_LOCATIONS: \"true\"" | tee -a {s3_yaml_name}
# echo "BOT_CAR_SPEED: \"0.2\"" | tee -a {s3_yaml_name}
# echo "CAR_COLOR: \"LightBlue\"" | tee -a {s3_yaml_name}
# !echo "CAR_COLOR: \"LightBlue\"" | tee -a {s3_yaml_name}
# !echo "FIRST_PERSON_VIEW: \"false\"" | tee -a {s3_yaml_name}
# !echo "CHANGE_START_POSITION: \"true\"" | tee -a {s3_yaml_name}
print("Upload yaml settings to S3")
# !aws s3 cp ./evaluation_params.yaml {s3_location}/evaluation_params.json
# +
sys.path.append("./src")
num_simulation_workers = 1
envriron_vars = {
"S3_YAML_NAME": s3_yaml_name,
"MODEL_S3_PREFIX": s3_prefix,
"MODEL_S3_BUCKET": s3_bucket,
"WORLD_NAME": world_name,
"KINESIS_VIDEO_STREAM_NAME": kvs_stream_name,
"APP_REGION": aws_region,
"MODEL_METADATA_FILE_S3_KEY": "%s/model/model_metadata.json" % s3_prefix
}
simulation_application = {
"application":simulation_app_arn,
"launchConfig": {
"packageName": "deepracer_simulation_environment",
"launchFile": "evaluation.launch",
"environmentVariables": envriron_vars
}
}
vpcConfig = {"subnets": deepracer_subnets,
"securityGroups": deepracer_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(clientRequestToken=strftime("%<PASSWORD>", gmtime()),
outputLocation={
"s3Bucket": s3_bucket,
"s3Prefix": s3_prefix
},
maxJobDurationInSeconds=job_duration_in_seconds,
iamRole=sagemaker_role,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig)
responses.append(response)
# print("Created the following jobs:")
for response in responses:
print("Job ARN", response["arn"])
# -
# ### 임시 폴더 생성
# +
evaluation_metrics_file = "evaluation_metrics.json"
evaluation_metrics_path = "{}/{}".format(s3_bucket, evaluation_metrics_file)
wait_for_s3_object(s3_bucket, evaluation_metrics_path, tmp_dir)
json_file = "{}/{}".format(tmp_dir, evaluation_metrics_file)
with open(json_file) as fp:
data = json.load(fp)
df = pd.DataFrame(data['metrics'])
# Converting milliseconds to seconds
df['elapsed_time'] = df['elapsed_time_in_milliseconds']/1000
df = df[['trial', 'completion_percentage', 'elapsed_time']]
display(df)
# -
# ### 시뮬레이션 어플리케이션 리소스 정리
# +
# robomaker.delete_simulation_application(application=simulation_app_arn)
# -
# ### S3 버킷 삭제 (원하는 경우 아래 코드 셀의 awscli 명령들을 주석 해제하세요.)
# +
## Uncomment if you only want to clean the s3 bucket
# sagemaker_s3_folder = "s3://{}/{}".format(s3_bucket, s3_prefix)
# # !aws s3 rm --recursive {sagemaker_s3_folder}
# robomaker_s3_folder = "s3://{}/{}".format(s3_bucket, job_name)
# # !aws s3 rm --recursive {robomaker_s3_folder}
# robomaker_sim_app = "s3://{}/{}".format(s3_bucket, 'robomaker')
# # !aws s3 rm --recursive {robomaker_sim_app}
# model_output = "s3://{}/{}".format(s3_bucket, s3_bucket)
# # !aws s3 rm --recursive {model_output}
| rl_deepracer_robomaker_coach_gazebo/deepracer_rl.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NAIVE BAYES
# Naive Bayes adalah algoritma yang relatif mudah dalam menentukan klasifikasi. Algoritmanya menggunakan probabilitas matematika yang diobservasi pada tiap-tiap fitur, dan kemudian dianalisis dengan persamaan Teorema Bayes. Jadi, Naive Bayes tidak memerlukan training model seperti Classifier lainnya, karena Naive Bayes murni hanya perhitungan. Hasil klasifikasi akan ditentukan berdasarkan peluang tertinggi
# Misal kita mempunyai dataset yang berisi tentang kemungkinan kita akan bermain golf terhadap cuaca.
# 
# Kemudian kita convert dataset tersebut berdasarkan frekuensi.
# 
# Formula Posterior Probability
# 
# <br> $P(A)$ = Prior Probability, peluang hipotesis yang kita percaya akurat untuk dimodelkan
# <br> $P(B|A)$ = Likelihood, besaran untuk menentukan seberapa bagus kualitas model dalam memprediksi data
# <br> $P(B)$ = normalization, konstanta untuk menormalisasi sehingga nilai P(A|B) berkisar antara 0 sampai 1.
# <br> $P(A|B)$ = Posterior Probability, menunjukan seberapa besar probabilitas hipotesis kita berdasarkan bukti dari data yang telah disediakan
#
# Berikut adalah Likelihood table, yaitu $P(B|A)$
# ##### 
# Contoh soal:
# Apakah anak-anak lebih suka bermain jika cuaca cerah(sunny)?
#
# $P(Yes | Sunny) = P( Sunny | Yes) * P(Yes) / P (Sunny)$<br>
# $P(Yes | Sunny) = 3/9 * 9/14 / 5/14$<br>
# $P(Yes | Sunny) = 0.60$<br>
#
# dalam hasil diatas, peluang tersebut adalah 0.6, yang artinya anak-anak memang lebih suka bermain jika cuaca cerah.
# Dalam modelling, Classifier Naive Bayes akan memprediksi input-input berdasarkan data yang telah disediakan, dan hasil klasifikasinya akan ditentukan dengan peluang yang terbesar.
# Kelebihan
# 1. sangat cepat dan mudah dalam memprediksi kategori dalam dataset
# 2. performansi sangat bagus jika fitur-fitur mempunyai sifat categorical
# 3. berfungsi baik pada data yang minimalis
#
# Kekurangan
# 1. Zero frequency, yang terjadi jika ada label pada test data yang tidak diobservasi pada training data
# 2. kualitas masih tertinggal dibanding metode-metode klasifikasi yang lain
# 3. Independence Assumption, yang artinya Naive Bayes berasumsi bahwa satu fitur terhadap fitur yang lainnya tidak mempunyai korelasi, atau bersifat independen
#
# # CODING SECTION
#
# # Use Case
#
# Sebuah perusahaan mobil berinisial AVZ ingin mendongkrak penjualan mereka untuk tahun 2019. Sebagai Data Scientist, kita diberikan dataset laporan hasil iklan mobil yang berisi dengan fitur ID, umur, gaji, dan record pembelian. Diharapkan kita mampu memberikan insight untuk meningkatkan kemungkinan customer akan membeli mobil tersebut atau tidak.
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import warnings
warnings.filterwarnings("ignore")
# Importing dataset
df = pd.read_csv('data_iklan.csv')
df.head(10)
# +
# memisahkan dataset ke training dan test data
X = df.iloc[:, [2, 3]].values
y = df.iloc[:, 4].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# -
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# +
# Fitting Naive Bayes to the Training set
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# -
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
cm
accuracy_score(y_test, y_pred)
# Visualising the Training set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Naive Bayes (Training set)')
plt.xlabel('Umur')
plt.ylabel('Gaji')
plt.legend()
plt.show()
# Visualising the Test set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Naive Bayes (Test set)')
plt.xlabel('Umur')
plt.ylabel('Gaji')
plt.legend()
plt.show()
# # KESIMPULAN
# Naive Bayes mempunyai performance sangat bagus jika fitur-fitur yang bersifat kategori. Dalam membuat model klasifikasi, Naive Bayes sering digunakan pada dataset yang kecil, atau hanya bertujuan untuk memberi gambaran untuk modelling dengan lain yang lain. Namun, Naive Bayes sangat powerful dalam NLP karena proses komputasi yang relatif singkat dan tidak kalah akurat dari teknik lain.
| 3 Classification/3.4 Naive Bayes/Naive_Bayes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# ## Imports
import os
import warnings
import pickle
import pysam
import numpy as np
from dragonn.interpret import *
from dragonn.vis import *
from dragonn.utils import *
from keras.models import load_model
from concise.metrics import tpr, tnr, fpr, fnr, precision, f1
from keras.models import load_model
from kerasAC.metrics import recall, specificity, fpr, fnr, precision, f1
from kerasAC.custom_losses import ambig_binary_crossentropy, ambig_mean_squared_error
# ## Get Model (Regression)
# +
model_prefix="/srv/scratch/annashch/bias_correction/"
reg_uncorrected=model_prefix+"uncorrected/regression/K562/DNASE.K562.regressionlabels.7"
reg_gc_1kb=model_prefix+"gc_covariate/regression/K562/DNASE.K562.regressionlabels.withgc.7"
reg_gc_200bp=model_prefix+"gc_covariate_200bp_ave/regression/K562/DNASE.K562.regressionlabels.withgc.7"
reg_gc_110bp_window=model_prefix+"gc_covariate_110bpwindow/regression/K562/DNASE.K562.regressionlabels.withgc.7"
class_uncorrected=model_prefix+"uncorrected/classification/K562/DNASE.K562.classificationlabels.7"
class_gc_1kb=model_prefix+"gc_covariate/classification/K562/DNASE.K562.classificationlabels.withgc.7"
class_gc_200bp=model_prefix+"gc_covariate_200bp_ave/classification/K562/DNASE.K562.classificationlabels.withgc.7"
class_gc_110bp_window=model_prefix+"gc_covariate_110bpwindow/classification/K562/DNASE.K562.classificationlabels.withgc.7"
##load the model
custom_objects={"recall":recall,
"sensitivity":recall,
"specificity":specificity,
"fpr":fpr,
"fnr":fnr,
"precision":precision,
"f1":f1,
"ambig_binary_crossentropy":ambig_binary_crossentropy,
"ambig_mean_squared_error":ambig_mean_squared_error}
model_r_uncorrected=load_model(reg_uncorrected,custom_objects=custom_objects)
model_c_uncorrected=load_model(class_uncorrected,custom_objects=custom_objects)
model_r_gc_1kb=load_model(reg_gc_1kb,custom_objects=custom_objects)
model_c_gc_1kb=load_model(class_gc_1kb,custom_objects=custom_objects)
model_r_200bp=load_model(reg_gc_200bp,custom_objects=custom_objects)
model_c_200bp=load_model(class_gc_200bp,custom_objects=custom_objects)
model_r_110win=load_model(reg_gc_110bp_window,custom_objects=custom_objects)
model_c_110win=load_model(class_gc_110bp_window,custom_objects=custom_objects)
# -
# ## Get one-hot seq
import pandas as pd
def get_bed_centered_at_summit(fname,flank):
data=pd.read_csv(fname,header=None,sep='\t')
data['chrom']=data[0]
data['center']=data[1]+data[9]
data['start']=data['center']-flank
data['end']=data['center']+flank
subset=data[['chrom','start','end']]
inputs=[]
for index,row in subset.iterrows():
inputs.append((row['chrom'],row['start'],row['end']))
return inputs
import pysam
def get_seq_from_bed(regions,ref):
ref=pysam.FastaFile(ref)
seqs=[]
for region in regions:
seqs.append(ref.fetch(region[0],region[1],region[2]))
return seqs
flank=500
regions=get_bed_centered_at_summit("caprin_dnase_intersection.hg38.bed",flank)
ref_fasta="/users/annashch/GRCh38_no_alt_analysis_set_GCA_000001405.15.fasta"
seqs=get_seq_from_bed(regions,ref_fasta)
from dragonn.utils import one_hot_encode
seqs_onehot=one_hot_encode(seqs)
seqs_onehot.shape
# ## Get GC content
import tiledb
#from tiledb
gc_array="/mnt/data/annotations/gc_content_tracks/tiledb/gc_hg38_110bp_flank.chr11"
gc_array=tiledb.DenseArray(gc_array,'r')[:]['bigwig_track']
gc_110bp=[]
for entry in regions:
start_pos=entry[1]
end_pos=entry[2]
gc_content=gc_array[start_pos:end_pos][400:600].mean()
gc_110bp.append(gc_content)
gc_110bp=np.expand_dims(np.asarray(gc_110bp),axis=1)
def get_gc_content(seq):
seq=seq.upper()
return (seq.count('G')+seq.count('C'))/len(seq)
# +
gc_1kb=[get_gc_content(seq) for seq in seqs]
gc_200bp=[get_gc_content(seq[400:600]) for seq in seqs]
gc_1kb=np.expand_dims(np.asarray(gc_1kb),axis=1)
gc_200bp=np.expand_dims(np.asarray(gc_200bp),axis=1)
# -
# ## Get the gradxinput functions
# +
from keras import backend as K
def get_grad_func(cur_model,target_layer_idx):
return K.function(cur_model.inputs,K.gradients(cur_model.layers[target_layer_idx].output,cur_model.inputs))
# +
grad_func_model_r_uncorrected=get_grad_func(model_r_uncorrected,-1)
grad_func_model_c_uncorrected=get_grad_func(model_c_uncorrected,-2)
grad_func_model_r_gc_1kb=get_grad_func(model_r_gc_1kb,-1)
grad_func_model_c_gc_1kb=get_grad_func(model_c_gc_1kb,-2)
grad_func_model_r_gc_200bp=get_grad_func(model_r_200bp,-1)
grad_func_model_c_gc_200bp=get_grad_func(model_c_200bp,-2)
grad_func_model_r_gc_110win=get_grad_func(model_r_110win,-1)
grad_func_model_c_gc_110win=get_grad_func(model_c_110win,-2)
# -
grads_r_uncorrected=grad_func_model_r_uncorrected([seqs_onehot])[0]
grads_c_uncorrected=grad_func_model_c_uncorrected([seqs_onehot])[0]
grads_r_1kb=grad_func_model_r_gc_1kb([seqs_onehot,gc_1kb])[0]
grads_c_1kb=grad_func_model_c_gc_1kb([seqs_onehot,gc_1kb])[0]
grads_r_200bp=grad_func_model_r_gc_200bp([seqs_onehot,gc_200bp])[0]
grads_c_200bp=grad_func_model_c_gc_200bp([seqs_onehot,gc_200bp])[0]
grads_r_110win=grad_func_model_r_gc_110win([seqs_onehot,gc_110bp])[0]
grads_c_110win=grad_func_model_c_gc_110win([seqs_onehot,gc_110bp])[0]
all_grads=[grads_r_uncorrected,
grads_c_uncorrected,
grads_r_1kb,
grads_c_1kb,
grads_r_200bp,
grads_c_200bp,
grads_r_110win,
grads_c_110win]
subtitles=['reg. uncorrected',
'class. uncorrected',
'reg. 1kb mean gc',
'class. 1kb mean gc',
'reg. 200bp mean gc',
'class. 200bp mean gc',
'reg. 110 bp smoothed',
'class. 110 bp smoothed']
def plot_seq_importance(subtitles, grads, x, xlim=None, ylim=None, figsize=(25, 6),title="",snp_pos=0,axes=None):
"""Plot sequence importance score
Args:
grads: either deeplift or gradientxinput score matrix
x: one-hot encoded DNA sequence
xlim: restrict the plotted xrange
figsize: matplotlib figure size
"""
f,axes=plt.subplots(len(grads),dpi=80,figsize=figsize)
grads=[i.squeeze() for i in grads]
x=x.squeeze()
seq_len = x.shape[0]
vals_to_plot=[i*x for i in grads]
if xlim is None:
xlim = (0, seq_len)
if ylim is None:
ymin=min([np.amin(i) for i in vals_to_plot])
ymax=max([np.amax(i) for i in vals_to_plot])
ylim=(ymin,ymax)
for i in range(len(grads)):
axes[i]=plot_bases_on_ax(vals_to_plot[i],axes[i],show_ticks=True)
axes[i].set_xlim(xlim)
axes[i].set_ylim(ylim)
axes[i].set_title(subtitles[i],fontsize=20)
axes[i].axvline(x=snp_pos, color='k', linestyle='--')
plt.xticks(list(range(xlim[0], xlim[1], 5)))
plt.subplots_adjust(hspace=0.4)
plt.savefig(title+'.png',dpi=300)
plt.show()
for i in range(len(regions)):
region=regions[i]
cur_title='DNAse:'+':'.join([str(j) for j in region])
plot_seq_importance(subtitles,
grads[i],
seqs_onehot[i],
title=cur_title,
xlim=(400,600))
print(i)
| interpretation/.ipynb_checkpoints/interpret_caprin_gc_corrected_with_and_without_autoscale_plot-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <script async src="https://www.googletagmanager.com/gtag/js?id=UA-59152712-8"></script>
# <script>
# window.dataLayer = window.dataLayer || [];
# function gtag(){dataLayer.push(arguments);}
# gtag('js', new Date());
#
# gtag('config', 'UA-59152712-8');
# </script>
#
# # The Spinning Effective One-Body Hamiltonian
#
# ## Author: <NAME>
# ### Formatting improvements courtesy <NAME>
#
# ## This module documents the reduced spinning effective one-body Hamiltonian as numerically implemented in LALSuite's SEOBNRv3 gravitational waveform approximant.
#
#
# **Notebook Status:** <font color='green'><b> Validated </b></font>
#
# **Validation Notes:** This module has been validated against the LALSuite [SEOBNRv3/SEOBNRv3_opt code]( https://git.ligo.org/lscsoft/lalsuite.) that was reviewed and approved for LIGO parameter estimation by the LIGO Scientific Collaboration. That is, the value $H_{\rm real}$ output from this notebook agrees to roundoff error with the value of $H_{\rm real}$ computed by the LALSuite function XLALSimIMRSpinPrecEOBHamiltonian().
#
# ### NRPy+ Source Code for this module: [SEOBNR_v3_Hamiltonian.py](../edit/SEOBNR/SEOBNR_v3_Hamiltonian.py)
#
# <a id='intro'></a>
#
# ## Introduction
# $$\label{intro}$$
#
# ### The Physical System of Interest
#
# Consider two black holes with masses $m_{1}$, $m_{2}$ and spins ${\bf S}_{1}$, ${\bf S}_{2}$ in a binary system. The spinning effective one-body ("SEOB") Hamiltonian $H_{\rm real}$ (defined in [this cell](#hreal)) describes the dynamics of this system; we will define $H_{\rm real}$ as in [Barausse and Buonanno (2010)](https://arxiv.org/abs/0912.3517) Section VE. There, $H_{\rm real}$ is canonically transformed and mapped to an effective Hamiltonian $H_{\rm eff}$ (defined in [this cell](#heff)) describing the motion of a test particle of mass $\mu$ (defined in [this cell](#mu)) and spin ${\bf S}^{*}$ (defined in [this cell](#sstar)) moving in a defomred Kerr background. Here we seek to break up $H_{\rm real}$ and document the terms in such a way that the resulting Python code can be used to numerically evaluate $H_{\rm real}$.
#
# We write $H_{\rm real}$ in terms of Cartesian quasi-isotropic coordinates $x$, $y$, and $z$ (see [Barausse and Buonanno (2010)](https://arxiv.org/abs/0912.3517) Section III). The spatial coordinates $r$, $\theta$, and $\phi$ referenced throughout are [Boyer-Lindquist coordinates](https://en.wikipedia.org/wiki/Boyer%E2%80%93Lindquist_coordinates) (see [Barausse and Buonanno (2010)](https://arxiv.org/abs/0912.3517) Section IV).
#
# Please note that throughout this notebook we adpot the following conventions:
#
# 1. $c = 1$ where $c$ is the speed of light in a vacuum,
# 1. spacial tensor indicies are denoted by lowercase Latin letters,
# 1. repeated indices indicate Einstein summation notation, and
# 1. we normalize $M=1$ in all terms except for $\eta$ and $\mu$ for agreement with LALSuite. Nonetheless, $M$ appears in other text cells for comparison with the cited literature.
#
# Running this notebook to completion will generate a file called Hreal_on_bottom.py. This file contains the Python function compute_Hreal(), which takes as input m1, m2 (each in solar masses), the value of the [Euler-Mascheroni constant](https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant), a tortoise coordinate, and values for all twelve dynamic variables (3 components of the separation vector, three components of the momentum vector, and three spin components of each compact object). Note that the spin components should be dimensionless.
#
# ### Citations
# Throughout this module, we will refer to
# * [<NAME> (2010)](https://arxiv.org/abs/0912.3517) as BB2010,
# * [<NAME> (2011)](https://arxiv.org/abs/1107.2904) as BB2011,
# * [Steinhoff, Hinderer, Buonanno, et al (2016)](https://arxiv.org/abs/1608.01907) as SH2016,
# * [<NAME>, Buchman, et. al. 2010](https://arxiv.org/abs/0912.3466v2) as P2010,
# * [Taracchini, Buonanno, Pan, et al (2014)](https://arxiv.org/abs/1311.2544) as T2014,
# * [Taracchini, <NAME>, et al (2012)](https://arxiv.org/abs/1202.0790) as T2012, and
# * [<NAME> Schaefer (2000)](https://arxiv.org/abs/gr-qc/0005034) as D2000.
# <a id='toc'></a>
#
# # Table of Contents
# $$\label{toc}$$
#
# This notebook is organized as follows:
#
# 1. [Step 0](#outputcreation): Creating the output directory for SEOBNR
# 1. [Step 1](#hreal): The Real Hamiltonian $H_{\rm real}$
# 1. [Step 2](#heff): The Effective Hamiltonian $H_{\rm eff}$
# 1. [Step 3](#heff_terms): Terms of $H_{\rm eff}$
# 1. [Step 3.a](#hs): Leading Order Spin Effects $H_{\rm S}$
# 1. [Step 3.b](#hns): The Nonspinning Hamiltonian $H_{\rm NS}$
# 1. [Step 3.c](#hd): The Quadrupole Deformation $H_{\rm D}$
# 1. [Step 4](#hso): The Spin-Orbit Term $H_{\rm SO}$
# 1. [Step 4.a](#hsoterm1): $H_{\rm SO}$ Term 1
# 1. [Step 4.b](#hsoterm2coeff): $H_{\rm SO}$ Term 2 Coefficient
# 1. [Step 4.c](#hsoterm2): $H_{\rm SO}$ Term 2
# 1. [Step 4.c.i](#hsoterm2a): $H_{\rm SO}$ Term 2a
# 1. [Step 4.c.ii](#hsoterm2b): $H_{\rm SO}$ Term 2b
# 1. [Step 4.c.iii](#hsoterm2c): $H_{\rm SO}$ Term 2c
# 1. [Step 5](#hss): The Spin-Spin Term $H_{\rm SS}$
# 1. [Step 5.a](#hssterm1): $H_{\rm SS}$ Term 1
# 1. [Step 5.b](#hssterm2coeff): $H_{\rm SS}$ Term 2 coefficient
# 1. [Step 5.c](#hssterm2): $H_{\rm SS}$ Term 2
# 1. [Step 5.d](#hssterm3coeff): $H_{\rm SS}$ Term 3 coefficient
# 1. [Step 5.e](#hssterm3): $H_{\rm SS}$ Term 3
# 1. [Step 6](#hnsterms): The $H_{\rm NS}$ Terms
# 1. [Step 6.a](#betapsum): $\beta p$ Sum
# 1. [Step 6.b](#alpha): $\alpha$
# 1. [Step 6.c](#hnsradicand): $H_{\rm NS}$ Radicand
# 1. [Step 6.c.i](#gammappsum): $\gamma p$ Sum
# 1. [Step 6.c.ii](#q4): ${\cal Q}_{4}$
# 1. [Step 7](#hdterms): The $H_{\rm D}$ Terms
# 1. [Step 7.a](#hdcoeff): $H_{\rm D}$ Coefficient
# 1. [Step 7.b](#hdsum): $H_{\rm D}$ Sum
# 1. [Step 7.b.i](#hdsumterm1): $H_{\rm D}$ Sum Term 1
# 1. [Step 7.b.ii](#hdsumterm2): $H_{\rm D}$ Sum Term 2
# 1. [Step 8](#dotproducts): Common Dot Products
# 1. [Step 8.a](#sdotxi): ${\bf S} \cdot \boldsymbol{\xi}$
# 1. [Step 8.b](#sdotv): ${\bf S} \cdot {\bf v}$
# 1. [Step 8.c](#sdotn): ${\bf S} \cdot {\bf n}$
# 1. [Step 8.d](#sdotskerrhat): ${\bf S} \cdot \hat{\bf S}_{\rm Kerr}$
# 1. [Step 8.e](#sstardotn): ${\bf S}^{*} \cdot {\bf n}$
# 1. [Step 9](#hreal_spin_combos): $H_{\rm real}$ Spin Combination ${\bf S}^{*}$
# 1. [Step 9a](#sstar): ${\bf S}^{*}$
# 1. [Step 9b](#deltasigmastar): $\Delta_{\sigma^{*}}$
# 1. [Step 9c](#sigmastarcoeff): $\sigma^{*}$ Coefficient
# 1. [Step 9c i](#sigmastarcoeffterm1): $\sigma^{*}$ Coefficient Term 1
# 1. [Step 9c ii](#sigmastarcoeffterm2): $\sigma^{*}$ Coefficient Term 2
# 1. [Step 9d](#sigmacoeff): $\sigma$ Coefficient
# 1. [Step 9d i](#sigmacoeffterm1): $\sigma$ Coefficient Term 1
# 1. [Step 9d ii](#sigmacoeffterm2): $\sigma$ Coefficient Term 2
# 1. [Step 9d iii](#sigmacoeffterm3): $\sigma$ Coefficient Term 3
# 1. [Step 10](#metpotderivs): Derivatives of the Metric Potential
# 1. [Step 10.a](#omegar): $\omega_{r}$
# 1. [Step 10.b](#nur): $\nu_{r}$
# 1. [Step 10.c](#mur): $\mu_{r}$
# 1. [Step 10.d](#omegacostheta): $\omega_{\cos\theta}$
# 1. [Step 10.e](#nucostheta): $\nu_{\cos\theta}$
# 1. [Step 10.f](#mucostheta): $\mu_{\cos\theta}$
# 1. [Step 10.g](#lambdatprm): $\Lambda_{t}^{\prime}$
# 1. [Step 10.h](#omegatildeprm): $\tilde{\omega}_{\rm fd}^{\prime}$
# 1. [Step 11](#metpots): The Deformed and Rescaled Metric Potentials
# 1. [Step 11.a](#omega): $\omega$
# 1. [Step 11.b](#exp2nu): $e^{2 \nu}$
# 1. [Step 11.c](#btilde): $\tilde{B}$
# 1. [Step 11.d](#brtilde): $\tilde{B}_{r}$
# 1. [Step 11.e](#exp2mu): $e^{2 \tilde{\mu}}$
# 1. [Step 11.f](#jtilde): $\tilde{J}$
# 1. [Step 11.g](#q): $Q$
# 1. [Step 11.g.i](#drsipn2): $\frac{ \Delta_{r} }{ \Sigma } \left( \hat{\bf p} \cdot {\bf n} \right)^{2}$
# 1. [Step 11.g.ii](#qcoeff1): Q Coefficient 1
# 1. [Step 11.g.iii](#qcoeff2): Q Coefficient 2
# 1. [Step 12](#tort): Tortoise terms
# 1. [Step 12.a](#pphi): $p_{\phi}$
# 1. [Step 12.b](#pdotvr): $\hat{\bf p} \cdot {\bf v} r$
# 1. [Step 12.c](#pdotn): $\hat{\bf p} \cdot {\bf n}$
# 1. [Step 12.d](#pdotxir): $\hat{\bf p} \cdot \boldsymbol{\xi} r$
# 1. [Step 12.e](#hatp): $\hat{\bf p}$
# 1. [Step 12.f](#prt): prT
# 1. [Step 12.g](#csi2): csi2
# 1. [Step 12.h](#csi1): csi1
# 1. [Step 12.i](#csi): csi
# 1. [Step 13](#metric): Metric Terms
# 1. [Step 13.a](#lambdat): $\Lambda_{t}$
# 1. [Step 13.b](#deltar): $\Delta_{r}$
# 1. [Step 13.c](#deltat): $\Delta_{t}$
# 1. [Step 13.d](#deltatprm): $\Delta_{t}^{\prime}$
# 1. [Step 13.e](#deltau): $\Delta_{u}$
# 1. [Step 13.e.i](#deltaubar): $\bar{\Delta}_{u}$
# 1. [Step 13.e.ii](#deltaucalib): $\Delta_{u}$ Calibration Term
# 1. [Step 13.e.iii](#calib_coeffs): Calibration Coefficients
# 1. [Step 13.e.iv](#k): $K$
# 1. [Step 13.f](#omegatilde): $\tilde{\omega}_{\rm fd}$
# 1. [Step 13.g](#dinv): $D^{-1}$
# 1. [Step 14](#coord): Terms Dependent on Coordinates
# 1. [Step 14.a](#usigma): $\Sigma$
# 1. [Step 14.b](#w2): $\varpi^{2}$
# 1. [Step 14.d](#sin2theta): $\sin^{2}\theta$
# 1. [Step 14.e](#costheta): $\cos\theta$
# 1. [Step 15](#vectors): Important Vectors
# 1. [Step 15.a](#v): ${\bf v}$
# 1. [Step 15.b](#xi): $\boldsymbol{\xi}$
# 1. [Step 15.c](#e3): ${\bf e}_{3}$
# 1. [Step 15.d](#n): ${\bf n}$
# 1. [Step 16](#spin_combos): Spin Combinations $\boldsymbol{\sigma}$, $\boldsymbol{\sigma}^{*}$, and ${\bf S}_{\rm Kerr}$
# 1. [Step 16.a](#a): $a$
# 1. [Step 16.b](#skerrhat): $\hat{\bf S}_{\rm Kerr}$
# 1. [Step 16.c](#skerrmag): $\left\lvert {\bf S}_{\rm Kerr} \right\rvert$
# 1. [Step 16.d](#skerr): ${\bf S}_{\rm Kerr}$
# 1. [Step 16.e](#sigma): $\boldsymbol{\sigma}$
# 1. [Step 16.f](#sigmastar): $\boldsymbol{\sigma}^{*}$
# 1. [Step 17](#fundquant): Fundamental Quantities
# 1. [Step 17.a](#u): $u$
# 1. [Step 17.b](#r): $r$
# 1. [Step 17.c](#eta): $\eta$
# 1. [Step 17.d](#mu): $\mu$
# 1. [Step 17.e](#m): $M$
# 1. [Step 18](#validation): Validation
# 1. [Step 19](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
# <a id='outputcreation'></a>
#
# # Step 0: Creating the output directory for SEOBNR \[Back to [top](#toc)\]
# $$\label{outputcreation}$$
#
# First we create the output directory for SEOBNR (if it does not already exist):
# +
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
# Create C code output directory:
Ccodesdir = "SEOBNR"
# Then create an output directory in case it does not exist
cmd.mkdir(Ccodesdir)
# -
# <a id='hreal'></a>
#
# # Step 1: The real Hamiltonian $H_{\textrm{real}}$ \[Back to [top](#toc)\]
# $$\label{hreal}$$
#
# The SEOB Hamiltonian $H_{\rm real}$ is given by [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.69):
#
# \begin{equation*}
# H_{\rm real} = M \sqrt{ 1 + 2 \eta \left( \frac{ H_{\rm eff} }{ \mu } - 1 \right) }.
# \end{equation*}
#
# Here $H_{\rm eff}$ (defined in [this cell](#heff)) is an *effective* Hamiltonian (see [this cell](#intro)) and $M$ (defined in [this cell](#m)), $\mu$ (defined in [this cell](#mu)), and $\eta$ (defined in [this cell](#eta)) are constants determined by $m_{1}$ and $m_{2}$.
# %%writefile $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Hreal = sp.sqrt(1 + 2*eta*(Heff - 1))
# <a id='heff'></a>
#
# # Step 2: The Effective Hamiltonian $H_{\rm eff}$ \[Back to [top](#toc)\]
# $$\label{heff}$$
#
# The effective Hamiltonian $H_{\rm eff}$ is given by [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.70):
#
# \begin{equation*}
# H_{\rm eff} = H_{\rm S} + \underbrace{ \beta^{i} p_{i} + \alpha \sqrt{ \mu^{2} + \gamma^{ij} p_{i} p_{j} + {\cal Q}_{4} } }_{ H_{\rm NS} } - \underbrace{ \frac{ \mu }{ 2 M r^{3} } \left( \delta^{ij} - 3 n^{i} n^{j} \right) S^{*}_{i} S^{*}_{j} }_{ H_{\rm D} }.
# \end{equation*}
#
# Here $H_{\rm S}$ (considered further in [this cell](#hs)) denotes leading order effects of spin-spin and spin-orbit coupling, $H_{\rm NS}$ (considered further in [this cell](#hns)) is the Hamiltonian for a nonspinning test particle, and $H_{\rm D}$ (considered further in [this cell](#hd)) describes quadrupole deformation of the coupling of the particle's spin with itself to leading order. [T2014](https://arxiv.org/abs/1311.2544) adds to $H_{\rm eff}$ a 3PN spin-spin term given by
#
# \begin{equation*}
# \frac{d_{\rm SS} \eta }{ r^{4} } \left( {\bf S}_{1}^{2} + {\bf S}_{2}^{2} \right)
# \end{equation*}
#
# where $d_{\rm SS}$ is an adjustable parameter determined by fitting to numerical relativity results. We take $u \equiv \frac{1}{r}$ (as described in [this cell](#u)) and define $\eta$ in [this cell](#eta).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Heff = Hs + Hns - Hd + dSS*eta*u*u*u*u*(S1x*S1x + S1y*S1y + S1z*S1z + S2x*S2x + S2y*S2y + S2z*S2z)
dSS = 8.127 - 154.2*eta + 830.8*eta*eta
# -
# <a id='heff_terms'></a>
#
# # Step 3: Terms of $H_{\rm eff}$ \[Back to [top](#toc)\]
# $$\label{heff_terms}$$
#
# In this step, we break down each of the terms $H_{\rm S}$ (defined in [this cell](#hs)), $H_{\rm NS}$ (defined in [this cell](#hns)), and $H_{\rm D}$ (defined in [this cell](#hd)) in $H_{\rm eff}$ (defined in [this cell](#heff)).
# <a id='hs'></a>
#
# ## Step 3.a: Leading Order Spin Effects $H_{\rm S}$ \[Back to [top](#toc)\]
# $$\label{hs}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.17),
#
# \begin{equation*}
# H_{\rm S} = H_{\rm SO} + H_{\rm SS}
# \end{equation*}
#
# where $H_{\rm SO}$ (defined in [this cell](#hso)) includes spin-orbit terms and $H_{\rm SS}$ (defined in [this cell](#hss)) includes spin-spin terms.
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Hs = Hso + Hss
# -
# <a id='hns'></a>
#
# ## Step 3.b: The Nonspinning Hamiltonian $H_{\rm NS}$ \[Back to [top](#toc)\]
# $$\label{hns}$$
#
# We defined $H_{\rm NS}$ in [this cell](#heff) as
#
# \begin{equation*}
# H_{\rm NS} = \underbrace{ \beta^{i} p_{i} }_{ \beta\ p\ \rm sum } + \alpha \sqrt{ \smash[b]{ \underbrace{ \mu^{2} + \gamma^{ij} p_{i} p_{j} + {\cal Q}_{4} }_{ H_{\rm NS}\ \rm radicand } } }.
# \end{equation*}
#
# We compute $\beta\ p$ sum in [this cell](#betapsum), $\alpha$ in [this cell](#alpha), and $H_{\rm NS}$ radicand in [this cell](#hnsradicand).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Hns = betapsum + alpha*sp.sqrt(Hnsradicand)
# -
# <a id='hd'></a>
#
# ## Step 3.c: The Quadrupole Deformation $H_{\rm D}$ \[Back to [top](#toc)\]
# $$\label{hd}$$
#
# We defined $H_{\rm D}$ in [this cell](#heff) as:
#
# \begin{equation*}
# H_{\rm D} = \underbrace{ \frac{ \mu }{ 2 M r^{3} } }_{H_{\rm D}\ {\rm coefficient}} \underbrace{ \left( \delta^{ij} - 3 n^{i} n^{j} \right) S^{*}_{i} S^{*}_{j} }_{H_{\rm D}\ {\rm sum}}
# \end{equation*}
#
# We compute $H_{\rm D}$ coefficient in [this cell](#hdcoeff) and $H_{\rm D}$ sum in [this cell](#hdsum).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Hd = Hdcoeff*Hdsum
# -
# <a id='hso'></a>
#
# # Step 4: The Spin-Orbit Term $H_{\rm SO}$ \[Back to [top](#toc)\]
# $$\label{hso}$$
#
# We will write [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.18) as:
#
# \begin{align*}
# H_{\rm SO} = H_{\rm SO}\ {\rm Term\ 1} + H_{\rm SO}\ {\rm Term\ 2\ coefficient} * H_{\rm SO}\ {\rm Term\ 2}.
# \end{align*}
#
# We define and consider $H_{\rm SO}$ Term 1 in [this cell](#hsoterm1), $H_{\rm SO}$ Term 2 coefficient in [this cell](#hsoterm2coeff), and $H_{\rm SO}$ Term 2 in [this cell](#hsoterm2).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Hso = HsoTerm1 + HsoTerm2coeff*HsoTerm2
# -
# <a id='hsoterm1'></a>
#
# ## Step 4.a: $H_{\rm SO}$ Term 1 \[Back to [top](#toc)\]
# $$\label{hsoterm1}$$
#
# Combining our notation $H_{\rm SO}$ (defined in [this cell](#hso)) with [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.18), we have
#
# \begin{equation*}
# H_{\rm SO}\ {\rm Term\ 1} = \frac{ e^{2 \nu - \tilde{\mu} } \left( e^{\tilde{\mu} + \nu} - \tilde{B} \right) \left( \hat{\bf p} \cdot \boldsymbol{\xi} r \right) \left( {\bf S} \cdot \hat{\bf S}_{\rm Kerr} \right) }{ \tilde{B}^{2} \sqrt{Q} \xi^{2} }.
# \end{equation*}
#
# We will write
#
# \begin{equation*}
# H_{\rm SO}\ {\rm Term\ 1} = \frac{ e^{2 \nu} \left( e^{\tilde{\mu}} e^{\nu} - \tilde{B} \right) \left( \hat{\bf p} \cdot \boldsymbol{\xi} r \right) \left( {\bf S} \cdot \hat{\bf S}_{\rm Kerr} \right) }{ e^{ \tilde{\mu} } \tilde{B}^{2} \sqrt{Q} \xi^{2} }.
# \end{equation*}
#
# We define $e^{\tilde{\mu}}$ in [this cell](#exp2mu), $e^{\nu}$ in [this cell](#exp2nu), $\tilde{B}$ in [this cell](#btilde), $\hat{\bf p} \cdot \boldsymbol{\xi} r$ in [this cell](#pdotxir), ${\bf S} \cdot \hat{\bf S}_{\rm Kerr}$ in [this cell](#sdotskerrhat), $Q$ in [this cell](#q), and $\boldsymbol{\xi}^{2}$ in [this cell](#sin2theta).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
HsoTerm1 = exp2nu*(expmu*expnu - Btilde)*pdotxir*SdotSkerrhat/(expmu*Btilde*Btilde*sp.sqrt(Q)*xisq)
# -
# <a id='hsoterm2coeff'></a>
#
# ## Step 4.b: $H_{\rm SO}$ Term 2 Coefficient \[Back to [top](#toc)\]
# $$\label{hsoterm2coeff}$$
#
# Combining our notation $H_{\rm SO}$ (defined in [this cell](#hso)) with [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.18), we have
#
# \begin{equation*}
# H_{\rm SO}\ {\rm Term\ 2\ coefficient} = \frac{ e^{\nu - 2 \tilde{\mu}} }{ \tilde{B}^{2} \left( \sqrt{Q} + 1 \right) \sqrt{Q} \xi^{2} }
# \end{equation*}
#
# which we write in the form
#
# \begin{equation*}
# H_{\rm SO}\ {\rm Term\ 2\ coefficient} = \frac{ e^{\nu} }{ e^{2 \tilde{\mu}} \tilde{B}^{2} \left( Q + \sqrt{Q} \right) \xi^{2} }.
# \end{equation*}
#
# We define and consider $e^{\nu}$ in [this cell](#exp2nu), $e^{\tilde{\mu}}$ in [this cell](#exp2mu), $\tilde{B}$ in [this cell](#btilde), $Q$ in [this cell](#q), and $\xi^{2}$ in [this cell](#sin2theta).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
HsoTerm2coeff = expnu/(exp2mu*Btilde*Btilde*(Q + sp.sqrt(Q))*xisq)
# -
# <a id='hsoterm2'></a>
#
# ## Step 4.c: $H_{\rm SO}$ Term 2 \[Back to [top](#toc)\]
# $$\label{hsoterm2}$$
#
# Combining our notation $H_{\rm SO}$ (defined in [this cell](#hso)) with [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.18), we have
#
# \begin{align*}
# H_{\rm SO}\ {\rm Term\ 2} &= \underbrace{ \left( {\bf S} \cdot \boldsymbol{\xi} \right) \tilde{J} \left[ \mu_r \left( \hat{\bf p} \cdot {\bf v} r \right) \left( \sqrt{Q} + 1 \right) - \mu_{\cos \theta} \left( \hat{\bf p} \cdot {\bf n} \right) \xi^{2} -\sqrt{Q} \left( \nu_r \left( \hat{\bf p} \cdot {\bf v} r \right) + \left( \mu_{\cos \theta} - \nu_{\cos \theta} \right) \left( \hat{\bf p} \cdot {\bf n} \right) \xi^{2} \right) \right] \tilde{B}^{2} }_{H_{\rm SO}\ {\rm Term\ 2a}} \\
# &\ \ \ \ \ + \underbrace{ e^{\tilde{\mu} + \nu} \left( \hat{\bf p} \cdot \boldsymbol{\xi} r \right) \left( 2 \sqrt{Q} + 1 \right) \left[ \tilde{J} \nu_r \left( {\bf S} \cdot {\bf v} \right) - \nu_{\cos \theta} \left( {\bf S} \cdot {\bf n} \right) \xi^{2} \right] \tilde{B} }_{H_{\rm SO}\ {\rm Term\ 2b}} - \underbrace{ \tilde{J} \tilde{B}_{r} e^{\tilde{\mu} + \nu} \left( \hat{\bf p} \cdot \boldsymbol{\xi} r \right) \left( \sqrt{Q} + 1 \right) \left( {\bf S} \cdot {\bf v} \right) }_{H_{\rm SO}\ {\rm Term\ 2c}}
# \end{align*}
#
# We compute $H_{\rm SO}$ Term 2a in [this cell](#hsoterm2a), $H_{\rm SO}$ Term 2b in [this cell](#hsoterm2b), and $H_{\rm SO}$ Term 2c in [this cell](#hsoterm2c).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
HsoTerm2 = HsoTerm2a + HsoTerm2b - HsoTerm2c
# -
# <a id='hsoterm2a'></a>
#
# ### Step 4.c.i: $H_{\rm SO}$ Term 2a \[Back to [top](#toc)\]
# $$\label{hsoterm2a}$$
#
# We defined $H_{\rm S0}$ Term 2a in [this cell](#hsoterm2) as
#
# \begin{equation*}
# H_{\rm SO}\ {\rm Term\ 2a} = \left( {\bf S} \cdot \boldsymbol{\xi} \right) \tilde{J} \left[ \mu_r \left( \hat{\bf p} \cdot {\bf v} r \right) \left( \sqrt{Q} + 1 \right) - \mu_{\cos \theta} \left( \hat{\bf p} \cdot {\bf n} \right) \xi^{2} -\sqrt{Q} \left( \nu_r \left( \hat{\bf p} \cdot {\bf v} r \right) + \left( \mu_{\cos \theta} - \nu_{\cos \theta} \right) \left( \hat{\bf p} \cdot {\bf n} \right) \xi^{2} \right) \right] \tilde{B}^{2}.
# \end{equation*}
#
# We define ${\bf S} \cdot \boldsymbol{\xi}$ in [this cell](#sdotxi), $\tilde{J}$ in [this cell](#jtilde), $\mu_{r}$ in [this cell](#mur), $\hat{\bf p} \cdot {\bf v} r$ in [this cell](#pdotvr), $Q$ in [this cell](#q), $\mu_{\cos \theta}$ in [this cell](#mucostheta), $\hat{\bf p} \cdot {\bf n}$ in [this cell](#pdotn), $\xi^{2}$ in [this cell](#sin2theta), $\nu_{r}$ in [this cell](#nur), $\nu_{\cos\theta}$ in [this cell](#nucostheta), and $\tilde{B}$ in [this cell](#btilde).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
HsoTerm2a = Sdotxi*Jtilde*(mur*pdotvr*(sp.sqrt(Q) + 1) - mucostheta*pdotn*xisq
- sp.sqrt(Q)*(nur*pdotvr + (mucostheta - nucostheta)*pdotn*xisq))*Btilde*Btilde
# -
# <a id='hsoterm2b'></a>
#
# ### Step 4.c.ii: $H_{\rm SO}$ Term 2b \[Back to [top](#toc)\]
# $$\label{hsoterm2b}$$
#
# We defined $H_{\rm S0}$ Term 2b in [this cell](#hsoterm2) as
#
# \begin{equation*}
# H_{\rm SO}\ {\rm Term\ 2b} = e^{\tilde{\mu} + \nu} \left( \hat{\bf p} \cdot \boldsymbol{\xi} r \right) \left( 2 \sqrt{Q} + 1 \right) \left[ \tilde{J} \nu_r \left( {\bf S} \cdot {\bf v} \right) - \nu_{\cos \theta} \left( {\bf S} \cdot {\bf n} \right) \xi^{2} \right] \tilde{B}.
# \end{equation*}
#
# We define $e^{\tilde{\mu}}$ in [this cell](#exp2mu), $e^{\nu}$ in [this cell](#exp2nu), $\hat{\bf p} \cdot \xi r$ in [this cell](#pdotxir), $Q$ in [this cell](#q), $\tilde{J}$ in [this cell](#jtilde), $\nu_{r}$ in [this cell](#nur), ${\bf S} \cdot {\bf v}$ in [this cell](#sdotv), $\nu_{\cos\theta}$ in [this cell](#nucostheta), ${\bf S} \cdot {\bf n}$ in [this cell](#sdotn), $\xi^{2}$ in [this cell](#sin2theta), and $\tilde{B}$ in [this cell](#btilde).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
HsoTerm2b = expmu*expnu*pdotxir*(2*sp.sqrt(Q) + 1)*(Jtilde*nur*Sdotv - nucostheta*Sdotn*xisq)*Btilde
# -
# <a id='hsoterm2c'></a>
#
# ### Step 4.c.iii: $H_{\rm SO}$ Term 2c \[Back to [top](#toc)\]
# $$\label{hsoterm2c}$$
#
# We defined $H_{\rm S0}$ Term 2c in [this cell](#hsoterm2) as
#
# \begin{equation*}
# H_{\rm SO}\ {\rm Term\ 2c} = \tilde{J} \tilde{B}_{r} e^{\tilde{\mu} + \nu} \left( \hat{\bf p} \cdot \boldsymbol{\xi} r \right) \left( \sqrt{Q} + 1 \right) \left( {\bf S} \cdot {\bf v} \right)
# \end{equation*}
#
# We define $\tilde{J}$ in [this cell](#jtilde), $\tilde{B}_{r}$ in [this cell](#brtilde), $e^{\tilde{\mu}}$ in [this cell](#exp2mu), $e^{\nu}$ in [this cell](#exp2nu), $\hat{\bf p} \cdot \xi r$ in [this cell](#pdotxir), $Q$ in [this cell](#q), and ${\bf S} \cdot {\bf v}$ in [this cell](#sdotv).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
HsoTerm2c = Jtilde*Brtilde*expmu*expnu*pdotxir*(sp.sqrt(Q) + 1)*Sdotv
# -
# <a id='hss'></a>
#
# # Step 5: The Spin-Spin Term $H_{\rm SS}$ \[Back to [top](#toc)\]
# $$\label{hss}$$
#
# We will write [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.19) as
#
# \begin{equation*}
# H_{\rm SS} = H_{\rm SS}\ {\rm Term\ 1} + H_{\rm SS}\ {\rm Term\ 2\ coefficient} * H_{\rm SS}\ {\rm Term\ 2} + H_{\rm SS}\ {\rm Term\ 3\ coefficient} * H_{\rm SS}\ {\rm Term\ 3}.
# \end{equation*}
#
# We define $H_{\rm SS}$ Term 1 in [this cell](#hssterm1), $H_{\rm SS}$ Term 2 coefficient in [this cell](#hssterm2coeff), $H_{\rm SS}$ Term 2 in [this cell](#hssterm2), $H_{\rm SS}$ Term 3 coefficient in [this cell](#hssterm3coeff), and $H_{\rm SS}$ Term 3 in [this cell](#hssterm3).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Hss = HssTerm1 + HssTerm2coeff*HssTerm2 + HssTerm3coeff*HssTerm3
# -
# <a id='hssterm1'></a>
#
# ## Step 5.a: $H_{\rm SS}$ Term 1 \[Back to [top](#toc)\]
# $$\label{hssterm1}$$
#
# Combining [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.19) with our definition of $H_{\rm SS}$ Term 1 in [this cell](#hss), we have
#
# \begin{equation*}
# H_{\rm SS}\ {\rm Term\ 1} = \omega \left( {\bf S} \cdot \hat{\bf S}_{\rm Kerr} \right).
# \end{equation*}
#
# We define $\omega$ in [this cell](#omega) and ${\bf S} \cdot \hat{\bf S}_{\rm Kerr}$ in [this cell](#sdotskerrhat).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
HssTerm1 = omega*SdotSkerrhat
# -
# <a id='hssterm2coeff'></a>
#
# ## Step 5.b: $H_{\rm SS}$ Term 2 Coefficient \[Back to [top](#toc)\]
# $$\label{hssterm2coeff}$$
#
# Combining [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.19) with ore definition of $H_{\rm SS}$ Term 2 coefficient in [this cell](#hss), we have
#
# \begin{equation*}
# H_{\rm SS}\ {\rm Term\ 2\ coefficient} = \frac{ e^{-3 \tilde{\mu} -\nu} \tilde{J} \omega_{r} }{ 2 \tilde{B} \left( \sqrt{Q} + 1 \right) \sqrt{Q} \xi^{2} }
# \end{equation*}
#
# which we write as
#
# \begin{equation*}
# H_{\rm SS}\ {\rm Term\ 2\ coefficient} = \frac{ \tilde{J} \omega_{r} }{ 2 e^{2 \tilde{\mu}} e^{\tilde{\mu}} e^{\nu} \tilde{B} \left( Q + \sqrt{Q} \right) \xi^{2} }.
# \end{equation*}
#
# We define $\tilde{J}$ in [this cell](#jtilde), $\omega_{r}$ in [this cell](#omegar), $e^{\tilde{\mu}}$ in [this cell](#exp2mu), $e^{\nu}$ in [this cell](#exp2nu), $\tilde{B}$ in [this cell](#btilde), $Q$ in [this cell](#q), and $\xi^{2}$ in [this cell](#sin2theta).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
HssTerm2coeff = Jtilde*omegar/(2*exp2mu*expmu*expnu*Btilde*(Q + sp.sqrt(Q))*xisq)
# -
# <a id='hssterm2'></a>
#
# ## Step 5.c: $H_{\rm SS}$ Term 2 \[Back to [top](#toc)\]
# $$\label{hssterm2}$$
#
# Combining [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.19) with our definition of $H_{\rm SS}$ Term 2 in [this cell](#hss), we have
#
# \begin{equation*}
# H_{\rm SS}\ {\rm Term\ 2} = -e^{\tilde{\mu} + \nu} \left( {\bf \hat{p}} \cdot {\bf v} r \right) \left( {\bf \hat{p}} \cdot {\bf \xi} r \right) \left( {\bf S} \cdot {\bf \xi} \right)
# \tilde{B} + e^{2 \left( \tilde{\mu} + \nu \right)} \left( {\bf \hat{p}} \cdot {\bf \xi} r \right)^2 \left( {\bf S}
# \cdot {\bf v} \right) + e^{2 \tilde{\mu}} \left( 1 + \sqrt{Q} \right) \sqrt{Q} \left( {\bf S} \cdot {\bf v} \right)\xi^2 \tilde{B}^{2} + \tilde{J} \left( {\bf \hat{p}} \cdot {\bf n} \right) \left[ \left( {\bf \hat{p}} \cdot {\bf v} r \right)
# \left( {\bf S} \cdot {\bf n}\right) - \tilde{J} \left( {\bf \hat{p}} \cdot {\bf n} \right) \left( {\bf S} \cdot {\bf v} \right)\right] \xi^{2} \tilde{B}^{2}
# \end{equation*}
#
# which we write as
#
# \begin{align*}
# H_{\rm SS}\ {\rm Term\ 2} &= e^{\tilde{\mu}} \left( {\bf \hat{p}} \cdot {\bf \xi} r \right) \left[ e^{\tilde{\mu}} e^{2 \nu} \left( {\bf \hat{p}} \cdot {\bf \xi} r \right) \left( {\bf S} \cdot {\bf v} \right) - e^{\nu} \left( {\bf \hat{p}} \cdot {\bf v} r \right) \left( {\bf S} \cdot {\bf \xi} \right)
# \tilde{B} \right] \\
# &\ \ \ \ \ + \xi^2 \tilde{B}^{2} \left\{ e^{2 \tilde{\mu}} \left( \sqrt{Q} + Q \right) \left( {\bf S} \cdot {\bf v} \right) + \tilde{J} \left( {\bf \hat{p}} \cdot {\bf n} \right) \left[ \left( {\bf \hat{p}} \cdot {\bf v} r \right)
# \left( {\bf S} \cdot {\bf n}\right) - \tilde{J} \left( {\bf \hat{p}} \cdot {\bf n} \right) \left( {\bf S} \cdot {\bf v} \right)\right] \right\}
# \end{align*}
#
# We define $e^{\tilde{\mu}}$ in [this cell](#exp2mu), $\hat{\bf p} \cdot \boldsymbol{\xi} r$ in [this cell](#pdotxir), $e^{\nu}$ in [this cell](#exp2nu), ${\bf S} \cdot {\bf v}$ in [this cell](#sdotv), $\hat{\bf p} \cdot {\bf v} r$ in [this cell](#pdotvr), ${\bf S} \cdot \boldsymbol{\xi}$ in [this cell](#sdotxi), $\tilde{B}$ in [this cell](#btilde), $Q$ in [this cell](#q), $\tilde{J}$ in [this cell](#jtilde), $\hat{\bf p} \cdot {\bf n}$ in [this cell](#pdotn), ${\bf S} \cdot {\bf n}$ in [this cell](#sdotn), and $\xi^{2}$ in [this cell](#sin2theta).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
HssTerm2 = expmu*pdotxir*(expmu*exp2nu*pdotxir*Sdotv - expnu*pdotvr*Sdotxi*Btilde)
+ xisq*Btilde*Btilde*(exp2mu*(sp.sqrt(Q) + Q)*Sdotv
+ Jtilde*pdotn*(pdotvr*Sdotn - Jtilde*pdotn*Sdotv))
# -
# <a id='hssterm3coeff'></a>
#
# ## Step 5.d: $H_{\rm SS}$ Term 3 Coefficient \[Back to [top](#toc)\]
# $$\label{hssterm3coeff}$$
#
# Combining [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.19) with our definition of $H_{\rm SS}$ Term 3 coefficient in [this cell](#hss), we have
#
# \begin{equation*}
# H_{\rm SS}\ {\rm Term\ 3\ coefficient} = \frac{ e^{-3 \tilde{\mu} - \nu} \omega_{\cos\theta} }{ 2 \tilde{B} \left( \sqrt{Q} + 1 \right) \sqrt{Q} }
# \end{equation*}
#
# which we write as
#
# \begin{equation*}
# H_{\rm SS}\ {\rm Term\ 3\ coefficient} = \frac{ \omega_{\cos\theta} }{ 2 e^{2 \tilde{\mu}} e^{\tilde{\mu}} e^{\nu} \tilde{B} \left( Q + \sqrt{Q} \right) }.
# \end{equation*}
#
# We define $\omega_{\cos\theta}$ in [this cell](#omegacostheta), $e^{\tilde{\mu}}$ in [this cell](#exp2mu), $e^{\nu}$ in [this cell](#exp2nu), and $\tilde{B}$ in [this cell](#btilde), $Q$ in [this cell](#q).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
HssTerm3coeff = omegacostheta/(2*exp2mu*expmu*expnu*Btilde*(Q + sp.sqrt(Q)))
# -
# <a id='hssterm3'></a>
#
# ## Step 5.e: $H_{\rm SS}$ Term 3 \[Back to [top](#toc)\]
# $$\label{hssterm3}$$
#
# Combining [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.19) with our definition of $H_{\rm SS}$ Term 3 in [this cell](#hss), we have
#
# \begin{align*}
# H_{\rm SS}\ {\rm Term\ 3} &= -e^{2 \left( \tilde{\mu} + \nu \right)} \left( \hat{\bf p} \cdot {\bf \xi} r \right)^{2} \left( {\bf S} \cdot {\bf n} \right) + e^{\tilde{\mu} +\nu} \tilde{J} \left( {\bf \hat{p}} \cdot {\bf n} \right) \left( {\bf \hat{p}} \cdot {\bf \xi} r \right) \left( {\bf S} \cdot {\bf \xi} \right) \tilde{B} \\
# &\ \ \ \ \ + \left[ \left( {\bf S} \cdot {\bf n} \right) \left( {\bf \hat{p}} \cdot {\bf v} r \right)^{2} - \tilde{J} \left( {\bf \hat{p}} \cdot {\bf n} \right) \left( {\bf S} \cdot {\bf v} \right) \left( {\bf \hat{p}} \cdot {\bf v} r\right) - e^{2 \tilde{\mu}} \left( 1 + \sqrt{Q} \right) \sqrt{Q} \left( {\bf S} \cdot {\bf n} \right) \xi^{2} \right] \tilde{B}^{2}
# \end{align*}
#
# which we write as
#
# \begin{align*}
# H_{\rm SS}\ {\rm Term\ 3} &= e^{\tilde{\mu}} e^{\nu} \left( \hat{\bf p} \cdot {\bf \xi} r \right) \left[ \tilde{J} \left( {\bf \hat{p}} \cdot {\bf n} \right) \left( {\bf S} \cdot {\bf \xi} \right) \tilde{B} - e^{\tilde{\mu}} e^{\nu} \left( \hat{\bf p} \cdot {\bf \xi} r \right) \left( {\bf S} \cdot {\bf n} \right) \right] \\
# &\ \ \ \ \ + \left\{ \left( {\bf \hat{p}} \cdot {\bf v} r \right) \left[ \left( {\bf S} \cdot {\bf n} \right) \left( {\bf \hat{p}} \cdot {\bf v} r \right) - \tilde{J} \left( {\bf \hat{p}} \cdot {\bf n} \right) \left( {\bf S} \cdot {\bf v} \right) \right] - e^{2 \tilde{\mu}} \left( \sqrt{Q} + Q \right) \left( {\bf S} \cdot {\bf n} \right) \xi^{2} \right\} \tilde{B}^{2}
# \end{align*}
#
# We define $e^{\tilde{\mu}}$ in [this cell](#exp2mu), $e^{\nu}$ in [this cell](#exp2nu), $\hat{\bf p} \cdot \boldsymbol{\xi} r$ in [this cell](#pdotxir), $\tilde{J}$ in [this cell](#jtilde), $\hat{\bf p} \cdot {\bf n}$ in [this cell](#pdotn), ${\bf S} \cdot \boldsymbol{\xi}$ in [this cell](#sdotxi), $\tilde{B}$ in [this cell](#btilde), ${\bf S} \cdot {\bf n}$ in [this cell](#sdotn), $\hat{\bf p} \cdot {\bf v} r$ in [this cell](#pdotvr), ${\bf S} \cdot {\bf v}$ in [this cell](#sdotv), $Q$ in [this cell](#q), and $\xi^{2}$ in [this cell](#sin2theta).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
HssTerm3 = expmu*expnu*pdotxir*(Jtilde*pdotn*Sdotxi*Btilde - expmu*expnu*pdotxir*Sdotn)
+ (pdotvr*(Sdotn*pdotvr - Jtilde*pdotn*Sdotv) - exp2mu*(sp.sqrt(Q) + Q)*Sdotn*xisq)*Btilde*Btilde
# -
# <a id='hnsterms'></a>
#
# # Step 6: $H_{\rm NS}$ Terms \[Back to [top](#toc)\]
# $$\label{hnsterms}$$
#
# We collect here the terms in $H_{\rm NS}$ (defined in [this cell](#hns)).
# <a id='betapsum'></a>
#
# ## Step 6.a: $\beta p$ sum \[Back to [top](#toc)\]
# $$\label{betapsum}$$
#
# We defined the term $\beta p$ sum in [this cell](#hns) as
#
# \begin{equation*}
# \beta p\ {\rm sum} = \beta^{i} p_{i}.
# \end{equation*}
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.45), we have
#
# \begin{equation*}
# \beta^{i} = \frac{ g^{ti} }{ g^{tt} },
# \end{equation*}
#
# but from [BB2010](https://arxiv.org/abs/0912.3517) Equations (5.36) we see that $g^{tr} = g^{t \theta} = 0$. Thus only $\beta^{\phi}$ is nonzero. Combining [BB2010](https://arxiv.org/abs/0912.3517) Equations (5.45), (5.36e), and (5.36a), we find
#
# \begin{equation*}
# \beta^{\phi} = \frac{ -\frac{ \tilde{\omega}_{\rm fd} }{ \Delta_{t} \Sigma } }{ -\frac{ \Lambda_{t} }{ \Delta_{t} \Sigma } } = \frac{ \tilde{\omega}_{\rm fd} }{ \Lambda_{t} }
# \end{equation*}
#
# Therefore
#
# \begin{equation*}
# \beta^{i} p_{i} = \frac{ \tilde{\omega}_{\rm fd} }{ \Lambda_{t} } p_{\phi}.
# \end{equation*}
#
# We define $\tilde{\omega}_{\rm fd}$ in [this cell](#omegatilde), $\Lambda_{t}$ in [this cell](#lambdat), and $p_{\phi}$ in [this cell](#pphi).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
betapsum = omegatilde*pphi/Lambdat
# -
# <a id='alpha'></a>
#
# ## Step 6.b: $\alpha$ \[Back to [top](#toc)\]
# $$\label{alpha}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.44), we have
# \begin{equation*}
# \alpha = \frac{ 1 }{ \sqrt{ -g^{tt}} },
# \end{equation*}
#
# and from [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.36a) we have
#
# \begin{equation*}
# g^{tt} = -\frac{ \Lambda_{t} }{ \Delta_{t} \Sigma }.
# \end{equation*}
#
# Therefore
#
# \begin{equation*}
# \alpha = \sqrt{ \frac{ \Delta_{t} \Sigma }{ \Lambda_{t} } }.
# \end{equation*}
#
# We define $\Delta_{t}$ in [this cell](#deltat), $\Sigma$ in [this cell](#usigma), and $\Lambda_{t}$ in [this cell](#lambdat).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
alpha = sp.sqrt(Deltat*Sigma/Lambdat)
# -
# <a id='hnsradicand'></a>
#
# ## Step 6.c: $H_{\rm NS}$ radicand \[Back to [top](#toc)\]
# $$\label{hnsradicand}$$
#
# Recall that we defined $H_{\rm NS}$ radicand in [this cell](#hns) as
#
# \begin{equation*}
# H_{\rm NS}\ {\rm radicand} = \mu^{2} + \underbrace{\gamma^{ij} p_{i} p_{j}}_{\gamma p\ \rm sum} + {\cal Q}_{4}
# \end{equation*}
#
# We define $\mu$ in [this cell](#mu), $\gamma p$ sum in [this cell](#gammappsum), and ${\cal Q}_{4}$ in [this cell](#q4).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Hnsradicand = 1 + gammappsum + Q4
# -
# <a id='gammappsum'></a>
#
# ### Step 6.c.i: $\gamma^{ij} p_{i} p_{j}$ \[Back to [top](#toc)\]
# $$\label{gammappsum}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.46), we have
#
# \begin{equation*}
# \gamma^{ij} = g^{ij} - \frac{ g^{ti} g^{tj} }{ g^{tt} }.
# \end{equation*}
#
# Combining this result with [BB2010](https://arxiv.org/abs/0912.3517) Equations 5.36, we have
#
# \begin{equation*}
# \gamma^{r\theta} = \gamma^{r\phi} = \gamma^{\theta r} = \gamma^{\theta\phi} = \gamma^{\phi r} = \gamma^{\phi\theta} = 0
# \end{equation*}
#
# and
#
# \begin{align*}
# \gamma^{rr} &= g^{rr} = \frac{ \Delta_{r} }{ \Sigma } \\
# \gamma^{\theta\theta} &= g^{\theta\theta} = \frac{ 1 }{ \Sigma } \\
# \gamma^{\phi\phi} &= \frac{ \Sigma }{ \Lambda_{t} \sin^{2} \theta }.
# \end{align*}
#
# Therefore
#
# \begin{align*}
# \gamma^{ij} p_{i} p_{j} &= \gamma^{rr} p_{r} p_{r} + \gamma^{\theta\theta} p_{\theta} p_{\theta} + \gamma^{\phi\phi} p_{\phi} p_{\phi} \\
# &= \frac{ \Delta_{r} }{ \Sigma } p_{r}^{2} + \frac{ 1 }{ \Sigma } p_{\theta}^{2} + \frac{ \Sigma }{ \Lambda_{t} \sin^{2} \theta } p_{\phi}^{2}.
# \end{align*}
#
# Converting Boyer-Lindquist coordinates to tortoise coordinates (the transformation for which is found in the Appendix of [P2010](https://arxiv.org/abs/0912.3466v2)), we have
#
# \begin{align*}
# p_{r} &= \hat{\bf p} \cdot {\bf n} \\
# p_{\theta} &= \hat{\bf p} \cdot {\bf v} \frac{ r }{ \sin \theta } \\
# p_{\phi} &= \hat{\bf p} \cdot \boldsymbol{\xi} r.
# \end{align*}
#
# Therefore
#
# \begin{equation*}
# \gamma^{ij} p_{i} p_{j} = \frac{ \Delta_{r} }{ \Sigma } \left( \hat{\bf p} \cdot {\bf n} \right)^{2} + \Sigma^{-1} \left( \hat{\bf p} \cdot {\bf v} \frac{ r }{ \sin \theta } \right)^{2} + \frac{ \Sigma }{ \Lambda_{t} \sin^{2} \theta } \left( \hat{\bf p} \cdot \boldsymbol{\xi} r \right)^{2}.
# \end{equation*}
#
# We define $\Delta_{r}$ in [this cell](#deltar), $\Sigma$ in [this cell](#sigma), $\hat{\bf p} \cdot {\bf n}$ in [this cell](#pdotn), $\hat{\bf p} \cdot {\bf v} r$ in [this cell](#pdotvr), $\sin^{2} \theta$ in [this cell](#sin2theta), $\Lambda_{t}$ in [this cell](#lambdat), and $\hat{\bf p} \cdot \boldsymbol{\xi} r$ in [this cell](#pdotxir).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
gammappsum = Deltar/Sigma*pdotn*pdotn + 1/Sigma*pdotvr*pdotvr/sin2theta + Sigma/Lambdat/sin2theta*pdotxir*pdotxir
# -
# <a id='q4'></a>
#
# ### Step 6.c.ii: ${\cal Q}_{4}$ \[Back to [top](#toc)\]
# $$\label{q4}$$
#
# From [T2012](https://arxiv.org/abs/1202.0790) Equation (15),
#
# \begin{equation*}
# {\cal Q}_{4} \propto \frac{ p_{r^{*}}^{4} }{ r^{2} } \left( r^{2} + \chi_{\rm Kerr}^{2} \right)^{4}.
# \end{equation*}
#
# We denote $p_{r^{*}}$ by prT. Converting from tortoise coordinates to physical coordinates(the transformation for which is found in the Appendix of [P2010](https://arxiv.org/abs/0912.3466v2)), we find
#
# \begin{equation*}
# {\cal Q}_{4} = \frac{ prT^{4} }{ r^{2} } z_{3}
# \end{equation*}
#
# where $z_{3}$ is found in [D2000](https://arxiv.org/abs/gr-qc/0005034) Equation (4.34):
#
# \begin{equation*}
# z_{3} = 2 \left( 4 - 3 \nu \right) \nu.
# \end{equation*}
#
# In the notation of [BB2010](https://arxiv.org/abs/0912.3517), $\nu = \eta$ (see discussion after [T2012](https://arxiv.org/abs/1202.0790) Equation (2)). Thus
#
# \begin{equation*}
# {\cal Q}_{4} = 2 prT^{4} u^{2} \left( 4 - 3 \eta \right) \eta.
# \end{equation*}
#
# We define prT in [this cell](#prt), $u$ in [this cell](#u), and $\eta$ in [this cell](#eta) below.
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Q4 = 2*prT*prT*prT*prT*u*u*(4 - 3*eta)*eta
# -
# <a id='hdterms'></a>
#
# # Step 7: The $H_{\rm D}$ Terms \[Back to [top](#toc)\]
# $$\label{hdterms}$$
#
# Recall we defined $H_{\rm D}$ in [this cell](#hd) as
#
# \begin{equation*}
# H_{\rm D} = H_{\rm D}\ {\rm coeffecient} * H_{\rm D}\ {\rm sum}.
# \end{equation*}
#
# In this step we break down each of $H_{\rm D}$ coefficient (defined in [this cell](#hdcoeff)) and $H_{\rm D}$ sum (defined in [this cell](#hdsum)).
# <a id='hdcoeff'></a>
#
# ## Step 7.a: $H_{\rm D}$ Coefficient \[Back to [top](#toc)\]
# $$\label{hdcoeff}$$
#
# From our definition of $H_{\rm D}$ in [this cell](#hd), we have
#
# \begin{equation*}
# H_{\rm D}\ {\rm coefficient} = \frac{ \mu }{ 2 M r^{3} },
# \end{equation*}
#
# and recalling the definition of [$\eta$](#eta) we'll write
#
# \begin{equation*}
# H_{\rm D}\ {\rm coefficient} = \frac{ \eta }{ 2 r^{3} }.
# \end{equation*}
#
# We define $\eta$ in [this cell](#eta) and $r$ in [this cell](#r).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Hdcoeff = sp.Rational(1,2)/(r*r*r)
# -
# <a id='hdsum'></a>
#
# ## Step 7.b: $H_{\rm D}$ Sum \[Back to [top](#toc)\]
# $$\label{hdsum}$$
#
# From our definition of $H_{\rm D}$ in [this cell](#hd), we have
#
# \begin{align*}
# H_{\rm D}\ {\rm sum} &= \left( \delta^{ij} - 3 n^{i} n^{j} \right) S^{*}_{i} S^{*}_{j} \\
# &= \underbrace{\delta^{ij} S^{*}_{i} S^{*}_{j}}_{\rm Term\ 1} - \underbrace{3 n^{i} n^{j} S^{*}_{i} S^{*}_{j}}_{\rm Term\ 2}.
# \end{align*}
#
# We compute $H_{\rm D}$ Term 1 in [this cell](#hdsumterm1) and $H_{\rm D}$ Term 2 in [this cell](#hdsumterm2).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Hdsum = HdsumTerm1 - HdsumTerm2
# -
# <a id='hdsumterm1'></a>
#
# ### Step 7.b.i: $H_{\rm D}$ Sum Term 1 \[Back to [top](#toc)\]
# $$\label{hdsumterm1}$$
#
# From our definition of $H_{\rm D}$ sum Term 1 in [this cell](#hdsum), we have
#
# \begin{equation*}
# H_{\rm D}\ {\rm sum\ Term\ 1} = \delta^{ij} S^{*}_{i} S^{*}_{j}
# \end{equation*}
#
# where $\delta^{ij}$ is the Kronecker delta:
#
# \begin{equation*}
# \delta_{ij} = \left\{ \begin{array}{cc}
# 0, & i \not= j \\
# 1, & i = j. \end{array} \right.
# \end{equation*}
#
# Thus we have
#
# \begin{equation*}
# H_{\rm D}\ {\rm sum\ Term\ 1} = S^{*}_{1} S^{*}_{1} + S^{*}_{2} S^{*}_{2} + S^{*}_{3} S^{*}_{3}
# \end{equation*}
#
# We define ${\bf S}^{*}$ in [this cell](#hreal_spin_combos).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
HdsumTerm1 = Sstar1*Sstar1 + Sstar2*Sstar2 + Sstar3*Sstar3
# -
# <a id='hdsumterm2'></a>
#
# ### Step 7.b.ii: $H_{\rm D}$ Sum Term 2 \[Back to [top](#toc)\]
# $$\label{hdsumterm2}$$
#
# From our definition of $H_{\rm D}$ sum Term 2 in [this cell](#hdsum), we have
#
# \begin{align*}
# H_{\rm D}\ {\rm sum\ Term\ 2} &= 3 n^{i} n^{j} S^{*}_{i} S^{*}_{j} \\
# &= 3 \left( {\bf S}^{*} \cdot {\bf n} \right)^{2} \\
# \end{align*}
#
#
# We define ${\bf S}^{*} \cdot {\bf n}$ in [this cell](#sstardotn).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
HdsumTerm2 = 3*Sstardotn*Sstardotn
# -
# <a id='dotproducts'></a>
#
# # Step 8: Common Dot Products \[Back to [top](#toc)\]
# $$\label{dotproducts}$$
#
# What follows are definitions of many common dot products.
# <a id='sdotxi'></a>
#
# ## Step 8.a: ${\bf S} \cdot \boldsymbol{\xi}$ \[Back to [top](#toc)\]
# $$\label{sdotxi}$$
#
# We have
#
# \begin{equation*}
# {\bf S} \cdot \boldsymbol{\xi} = S^{1} \xi^{1} + S^{2} \xi^{2} + S^{3} \xi^{3}
# \end{equation*}
#
# We define $\xi$ in [this cell](#xi).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Sdotxi = S1*xi1 + S2*xi2 + S3*xi3
# -
# <a id='sdotv'></a>
#
# ## Step 8.b: ${\bf S} \cdot {\bf v}$ \[Back to [top](#toc)\]
# $$\label{sdotv}$$
#
# We have
#
# \begin{equation*}
# {\bf S} \cdot {\bf v} = S^{1} v^{1} + S^{2} v^{2} + S^{3} v^{3}.
# \end{equation*}
#
# We define ${\bf v}$ in [this cell](#v).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Sdotv = S1*v1 + S2*v2 + S3*v3
# -
# <a id='sdotn'></a>
#
# ## Step 8.c: ${\bf S} \cdot {\bf n}$ \[Back to [top](#toc)\]
# $$\label{sdotn}$$
#
# We have
#
# \begin{equation*}
# {\bf S} \cdot {\bf n} = S^{1} n^{1} + S^{2} n^{2} + S^{3} n^{3}.
# \end{equation*}
#
# We define ${\bf n}$ in [this cell](#n).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Sdotn = S1*n1 + S2*n2 + S3*n3
# -
# <a id='sdotskerrhat'></a>
#
# ## Step 8.d: ${\bf S} \cdot \hat{\bf S}_{\rm Kerr}$ \[Back to [top](#toc)\]
# $$\label{sdotskerrhat}$$
#
# We have
#
# \begin{equation*}
# {\bf S} \cdot \hat{\bf S}_{\rm Kerr} = S^{1} \hat{S}_{\rm Kerr}^{1} + S^{2} \hat{S}_{\rm Kerr}^{2} + S^{3} \hat{S}_{\rm Kerr}^{3}.
# \end{equation*}
#
# We define $\hat{\bf S}_{\rm Kerr}$ in [this cell](#skerrhat).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
SdotSkerrhat = S1*Skerrhat1 + S2*Skerrhat2 + S3*Skerrhat3
# -
# <a id='sstardotn'></a>
#
# ## Step 8.e: ${\bf S}^{*} \cdot {\bf n}$ \[Back to [top](#toc)\]
# $$\label{sstardotn}$$
#
# We have
#
# \begin{equation*}
# {\bf S}^{*} \cdot {\bf n} = {\bf S}^{*}_{1} n_{1} + {\bf S}^{*}_{2} n_{2} + {\bf S}^{*}_{3} n_{3}.
# \end{equation*}
#
# We define ${\bf S}^{*}$ in [this cell](#sstar) and ${\bf n}$ in [this cell](#n).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Sstardotn = Sstar1*n1 + Sstar2*n2 + Sstar3*n3
# -
# <a id='hreal_spin_combos'></a>
#
# # Step 9: $H_{\rm real}$ Spin Combination ${\bf S}^{*}$ \[Back to [top](#toc)\]
# $$\label{hreal_spin_combos}$$
#
# We collect here terms defining and containing ${\bf S}^{*}$.
# <a id='sstar'></a>
#
# ## Step 9.a: ${\bf S}^{*}$ \[Back to [top](#toc)\]
# $$\label{sstar}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.63):
#
# \begin{equation*}
# {\bf S}^{*} = \boldsymbol{\sigma}^{*} + \frac{ 1 }{ c^{2} } \boldsymbol{\Delta}_{\sigma^{*}}.
# \end{equation*}
#
# We define $\boldsymbol{\sigma}^{*}$ in [this cell](#sigmastar) and $\boldsymbol{\Delta}_{\sigma^{*}}$ in [this cell](#deltasigmastar).
#
# Please note: after normalization, ${\bf S} = {\bf S}^{*}$. See [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.26).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
S1 = Sstar1
S2 = Sstar2
S3 = Sstar3
Sstar1 = sigmastar1 + Deltasigmastar1
Sstar2 = sigmastar2 + Deltasigmastar2
Sstar3 = sigmastar3 + Deltasigmastar3
# -
# <a id='deltasigmastar'></a>
#
# ## Step 9.b: $\boldsymbol{\Delta}_{\sigma^{*}}$ \[Back to [top](#toc)\]
# $$\label{deltasigmastar}$$
#
# We can write $\boldsymbol{\Delta}_{\sigma^{*}}$ as
#
# \begin{equation*}
# \boldsymbol{\Delta}_{\sigma^{*}} = \boldsymbol{\sigma}^{*} \left( \boldsymbol{\sigma}^{*}\ {\rm coefficient} \right) + \boldsymbol{\sigma} \left( \boldsymbol{\sigma}\ {\rm coefficient} \right)
# \end{equation*}
#
# For further dissection, see $\boldsymbol{\sigma}^{*}$ in [this cell](#sigmastar), $\boldsymbol{\sigma}^{*}$ coefficient in [this cell](#sigmastarcoeff), $\boldsymbol{\sigma}$ in [this cell](#sigma), and $\boldsymbol{\sigma}$ coefficient in [this cell](#sigmacoeff).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Deltasigmastar1 = sigmastar1*sigmastarcoeff + sigma1*sigmacoeff
Deltasigmastar2 = sigmastar2*sigmastarcoeff + sigma2*sigmacoeff
Deltasigmastar3 = sigmastar3*sigmastarcoeff + sigma3*sigmacoeff
# -
# <a id='sigmastarcoeff'></a>
#
# ## Step 9.c: $\boldsymbol{\sigma}^{*}$ coefficient \[Back to [top](#toc)\]
# $$\label{sigmastarcoeff}$$
#
# We will break down $\boldsymbol{\sigma}^{*}\ {\rm coefficient}$ into two terms:
#
# \begin{equation*}
# \boldsymbol{\sigma}^{*}\ {\rm coefficient} = \boldsymbol{\sigma}^{*}\ {\rm coefficient\ Term\ 1} + \boldsymbol{\sigma}^{*}\ {\rm coefficient\ Term\ 2}
# \end{equation*}
#
# We compute $\boldsymbol{\sigma}^{*}$ coefficient Term 1 in [this cell](#sigmastarcoeffterm1) and $\boldsymbol{\sigma}^{*}$ coefficient Term 2 in [this cell](#sigmastarcoeffterm2).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
sigmastarcoeff = sigmastarcoeffTerm1 + sigmastarcoeffTerm2
# -
# <a id='sigmastarcoeffterm1'></a>
#
# ### Step 9.c.i: $\boldsymbol{\sigma}^{*}$ Coefficient Term 1 \[Back to [top](#toc)\]
# $$\label{sigmastarcoeffterm1}$$
#
# We build this term from [BB2011](https://arxiv.org/abs/1107.2904) Equation (51) with $b_{0} = 0$ (see discussion preceeding [T2012](https://arxiv.org/abs/1202.0790) Equation (4)), where what is listed below is the coefficient on $\boldsymbol{\sigma}^{*}$:
#
# \begin{align*}
# \boldsymbol{\sigma}^{*}\ {\rm coefficient\ Term\ 1} &= \frac{7}{6} \eta \frac{M}{r} + \frac{1}{3} \eta \left( Q - 1 \right) - \frac{5}{2} \eta \frac{ \Delta_r }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \\
# &= \frac{ \eta }{ 12 } \left( 14 \frac{ M }{ r } + 4 \left( Q - 1 \right) - 30 \frac{ \Delta_r }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \right)
# \end{align*}
#
# We group together and compute $Q-1$ in [this cell](#q) and $\frac{ \Delta_r }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2}$ in [this cell](#drsipn2); we define $r$ in [this cell](#r), $\eta$ in [this cell](#eta), and $M$ in [this cell](#m) below.
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
sigmastarcoeffTerm1 = eta/12*(14/r + 4*Qminus1 - 30*DrSipn2)
# -
# <a id='sigmastarcoeffterm2'></a>
#
# ### Step 9.c.ii: $\boldsymbol{\sigma}^{*}$ Coefficient Term 2 \[Back to [top](#toc)\]
# $$\label{sigmastarcoeffterm2}$$
#
# We build this term from [BB2011](https://arxiv.org/abs/1107.2904) Equation (52) with all $b_{i} = 0$, $i \in \left\{0, 1, 2, 3\right\}$ (see discussion preceeding [T2012](https://arxiv.org/abs/1202.0790) Equation (4)), and just the coefficient on $\boldsymbol{\sigma}^{*}$. In the LALSuite code this is the variable 'sMultiplier1':
#
# \begin{align*}
# \boldsymbol{\sigma}^{*}\ {\rm coefficient\ Term\ 2} &= \frac{1}{36} \left( 353 \eta - 27 \eta^2 \right) \left( \frac{M}{r} \right)^{2} + \frac{5}{3} \left( 3 \eta^2 \right) \frac{ \Delta_{r}^{2} }{ \Sigma^{2} } \left( {\bf n} \cdot \hat{\bf p} \right)^{4} \\
# &\ \ \ \ \ + \frac{1}{72} \left( -23 \eta -3 \eta^{2} \right) \left( Q - 1 \right)^{2} + \frac{1}{36} \left( -103 \eta + 60 \eta^{2} \right) \frac{M}{r} \left( Q - 1 \right) \\
# &\ \ \ \ \ + \frac{1}{12} \left( 16 \eta - 21 \eta^{2} \right) \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \left( Q - 1 \right) + \frac{1}{12} \left( 47 \eta - 54 \eta^{2} \right) \frac{M}{r} \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \\
# &= \frac{ \eta }{ 72 r^{2} } \left[ \left( 706 - 54 \eta \right) M^{2} + 360 \eta r^{2} \frac{ \Delta_{r}^{2} }{ \Sigma^{2} } \left( {\bf n} \cdot \hat{\bf p} \right)^{4} + r^{2} \left( -23 - 3 \eta \right) \left( Q - 1 \right)^{2} + \left( -206 + 120 \eta \right) M r \left( Q - 1 \right) \right. \\
# &\ \ \ \ \ + \left. \left( 96 - 126 \eta \right) r^{2} \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \left( Q - 1 \right) + \left( 282 - 324 \eta \right) M r \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \right] \\
# &= \frac{ \eta }{ 72 r^{2} } \left[ 706 + r \left( -206 M \left( Q - 1 \right) + 282 M \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} + r \left( Q -1 \right) \left( 96 \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} - 23 \left( Q - 1 \right) \right) \right) \right. \\
# &\ \ \ \ \ + \left. \eta \left( -54 M^{2} + r \left( 120 M \left( Q -1 \right) - 324 M \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \right.\right.\right. \\
# &\ \ \ \ \ + \left.\left.\left. r \left( 360 \frac{ \Delta_{r}^{2} }{ \Sigma^{2} } \left( {\bf n} \cdot \hat{\bf p} \right)^{4} + \left( Q - 1 \right) \left( -126 \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} - 3 \left( Q - 1 \right) \right) \right)\right) \right) \right]
# \end{align*}
#
# We define $r$ in [this cell](#r), $\eta$ in [this cell](#eta), and $M$ in [this cell](#m); we group together and define $Q - 1$ in [this cell](#q), and $\frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2}$ in [this cell](#drsipn2).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
sigmastarcoeffTerm2 = eta/(72*r*r)*(706 + r*(-206*Qminus1 + 282*DrSipn2 + r*Qminus1*(96*DrSipn2 - 23*Qminus1))
+ eta*(-54 + r*(120*Qminus1 - 324*DrSipn2
+ r*(360*DrSipn2*DrSipn2 + Qminus1*(-126*DrSipn2 - 3*Qminus1)))))
# -
# <a id='sigmacoeff'></a>
#
# ## Step 9.d: $\boldsymbol{\sigma}$ coefficient \[Back to [top](#toc)\]
# $$\label{sigmacoeff}$$
#
# We will break down $\boldsymbol{\sigma}\ {\rm coefficient}$ into three terms:
#
# \begin{equation*}
# \boldsymbol{\sigma}\ {\rm coefficient} = \boldsymbol{\sigma}\ {\rm coefficient\ Term\ 1} + \boldsymbol{\sigma}\ {\rm coefficient\ Term\ 2} + \boldsymbol{\sigma}\ {\rm coefficient\ Term\ 3}
# \end{equation*}
#
# We compute $\boldsymbol{\sigma}$ coefficient Term 1 in [this cell](#sigmacoeffterm1), $\boldsymbol{\sigma}$ coefficient Term 2 in [this cell](#sigmacoeffterm2), and $\boldsymbol{\sigma}$ coefficient Term 3 in [this cell](#sigmacoeffterm3).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
sigmacoeff = sigmacoeffTerm1 + sigmacoeffTerm2 + sigmacoeffTerm3
# -
# <a id='sigmacoeffterm1'></a>
#
# ### Step 9.d.i: $\boldsymbol{\sigma}$ Coefficient Term 1 \[Back to [top](#toc)\]
# $$\label{sigmacoeffterm1}$$
#
# We build this term from [BB2011](https://arxiv.org/abs/1107.2904) Equation (51) with $a_{0} = 0$ (see discussion preceeding [T2012](https://arxiv.org/abs/1202.0790) Equation (4)), where what is listed below is the coefficient on $\boldsymbol{\sigma}$:
#
# \begin{align*}
# \boldsymbol{\sigma}\ {\rm coefficient\ Term\ 1} &= -\frac{2}{3} \eta \frac{ M }{ r } + \frac{1}{4} \eta \left( Q - 1 \right) - 3 \eta \frac{ \Delta_r }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \\
# &= \frac{ \eta }{ 12 } \left( -8 \frac{ M }{ r } + 3 \left( Q - 1 \right) - 36 \smash[b]{\underbrace{ \frac{ \Delta_r }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} }_{\rm DrSipn2}} \vphantom{\underbrace{a}_{b}} \right)
# \end{align*}
#
# We define $\eta$ in [this cell](#eta), $M$ in [this cell](#m), $Q-1$ in [this cell](#q), and $\frac{ \Delta_r }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2}$ in [this cell](#drsipn2).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
sigmacoeffTerm1 = eta/12*(-8/r + 3*Qminus1 - 36*DrSipn2)
# -
# <a id='sigmacoeffterm2'></a>
#
# ### Step 9.d.ii: $\boldsymbol{\sigma}$ Coefficient Term 2 \[Back to [top](#toc)\]
# $$\label{sigmacoeffterm2}$$
#
# We build this term from [BB2011](https://arxiv.org/abs/1107.2904) Equation (52) with all $a_{i} = 0$, $i \in \left\{0, 1, 2, 3\right\}$ (see discussion preceeding [T2012](https://arxiv.org/abs/1202.0790) Equation (4)), and just the coefficient on $\boldsymbol{\sigma}$:
#
# \begin{align*}
# \boldsymbol{\sigma}\ {\rm coefficient\ Term\ 2} &= \frac{1}{9} \left( -56 \eta -21 \eta^{2} \right) \left( \frac{ M }{ r } \right)^{2} + \frac{5}{24} \left( 27 \eta^{2} \right) \frac{ \Delta_r^{2} }{ \Sigma^{2} } \left( {\bf n} \cdot \hat{\bf p} \right)^{4} \\
# &\ \ \ \ \ + \frac{1}{144} \left(-45 \eta \right) \left( Q - 1 \right)^{2} + \frac{1}{36} \left( -109 \eta + 51 \eta^{2} \right) \frac{ M }{ r } \left( Q - 1 \right) \\
# &\ \ \ \ \ + \frac{1}{24} \left( 6 \eta - 39\eta^{2} \right) \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \left( Q - 1 \right) + \frac{1}{24} \left( -16 \eta - 147 \eta^{2} \right) \frac{ M }{ r } \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \\
# &= \frac{ \eta }{ 144 r^{2} } \left[ -896 M^{2} + r \left( -436 M \left( Q - 1 \right) - 96 M \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \right.\right. \\
# &\ \ \ \ \ \left.\left. + r \left( -45 \left( Q - 1 \right)^{2} + 36 \left( Q - 1 \right) \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \right) \right) + \eta \left( -336 M^{2} + r \left( 204 M \left( Q -1 \right) - 882 M \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \right.\right.\right. \\
# &\ \ \ \ \ \left.\left.\left. + r \left( 810 \frac{ \Delta_{r}^{2} }{ \Sigma^{2} } \left( {\bf n} \cdot \hat{\bf p} \right)^{4} - 234 \left( Q - 1 \right) \frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2} \right) \right) \right) \right]
# \end{align*}
#
# We define $\eta$ in [this cell](#eta), $M$ in [this cell](#M), $Q - 1$ in [this cell](#q), and $\frac{ \Delta_{r} }{ \Sigma } \left( {\bf n} \cdot \hat{\bf p} \right)^{2}$ in [this cell](#drsipn2).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
sigmacoeffTerm2 = eta/(144*r*r)*(-896 + r*(-436*Qminus1 - 96*DrSipn2 + r*(-45*Qminus1*Qminus1
+ 36*Qminus1*DrSipn2)) + eta*(-336 + r*(204*Qminus1 - 882*DrSipn2
+ r*(810*DrSipn2*DrSipn2 - 234*Qminus1*DrSipn2))))
# -
# <a id='sigmacoeffterm3'></a>
#
# ### Step 9.d.iii: $\boldsymbol{\sigma}$ Coefficient Term 3 \[Back to [top](#toc)\]
# $$\label{sigmacoeffterm3}$$
#
# From Section II of [T2014)](https://arxiv.org/abs/1311.2544),
#
# \begin{equation*}
# \boldsymbol{\sigma}\ {\rm coefficient\ Term\ 3} = \eta dSO u^{3}.
# \end{equation*}
#
# where $d_{\rm SO}$ is a fitting parameter. We define $\eta$ in [this cell](#eta), $u$ in [this cell](#u).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
sigmacoeffTerm3 = eta*dSO*u*u*u
dSO = -74.71 - 156.*eta + 627.5*eta*eta
# -
# <a id='metpotderivs'></a>
#
# # Step 10: Derivatives of the Metric Potential \[Back to [top](#toc)\]
# $$\label{metpotderivs}$$
#
# We collect here terms dependent on derivatives of the metric potential (see [BB2010](https://arxiv.org/abs/0912.3517) Equations (5.47)).
# <a id='omegar'></a>
#
# ## Step 10.a: $\omega_{r}$ \[Back to [top](#toc)\]
# $$\label{omegar}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.47b) we have
#
# \begin{equation*}
# \omega_{r} = \frac{ \Lambda_{t} \tilde{\omega}_{\rm fd}^{\prime} - \Lambda_{t}^{\prime} \tilde{\omega}_{\rm fd} }{ \Lambda_{t}^{2} }.
# \end{equation*}
#
# We define $\Lambda_{t}$ in [this cell](#lambdat), $\tilde{\omega}_{\rm fd}^{\prime}$ in [this cell](#omegatildeprm), $\Lambda_{t}^{\prime}$ in [this cell](#lambdatprm), and $\tilde{\omega}_{\rm fd}$ in [this cell](#omegatilde).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
omegar = (Lambdat*omegatildeprm - Lambdatprm*omegatilde)/(Lambdat*Lambdat)
# -
# <a id='nur'></a>
#
# ## Step 10.b: $\nu_{r}$ \[Back to [top](#toc)\]
# $$\label{nur}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.47c) we have
#
# \begin{equation*}
# \nu_{r} = \frac{ r }{ \Sigma } + \frac{ \varpi^{2} \left( \varpi^{2} \Delta^{\prime}_{t} - 4 r \Delta_{t} \right) }{ 2 \Lambda_{t} \Delta_{t} }.
# \end{equation*}
#
# We define $r$ in [this cell](#r), $\Sigma$ in [this cell](#usigma), $\varpi^{2}$ in [this cell](#w2), $\Delta_{t}^{\prime}$ in [this cell](#deltatprm), $\Delta_{t}$ in [this cell](#deltat), and $\Lambda_{t}$ in [this cell](#lambdat).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
nur = r/Sigma + w2*(w2*Deltatprm - 4*r*Deltat)/(2*Lambdat*Deltat)
# -
# <a id='mur'></a>
#
# ## Step 10.c: $\mu_{r}$ \[Back to [top](#toc)\]
# $$\label{mur}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.47d) we have
#
# \begin{equation*}
# \mu_{r} = \frac{ r }{ \Sigma } - \frac{ 1 }{ \sqrt{ \Delta_{r} } }.
# \end{equation*}
#
# We define $r$ in [this cell](#r), $\Sigma$ in [this cell](#usigma), and $\Delta_{r}$ in [this cell](#deltar).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
mur = r/Sigma - 1/sp.sqrt(Deltar)
# -
# <a id='omegacostheta'></a>
#
# ## Step 10.d: $\omega_{\cos\theta}$ \[Back to [top](#toc)\]
# $$\label{omegacostheta}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.47f), we have
#
# \begin{equation*}
# \omega_{\cos\theta} = -\frac{ 2 a^{2} \cos\theta \Delta_{t} \tilde{\omega}_{\rm fd} }{ \Lambda_{t}^{2} }.
# \end{equation*}
#
# We define $a$ in [this cell](#a), $\cos\theta$ in [this cell](#costheta), $\Delta_{t}$ in [this cell](#deltat), $\tilde{\omega}_{\rm fd}$ in [this cell](#omegatilde), and $\Lambda_{t}$ in [this cell](#lambdat).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
omegacostheta = -2*a*a*costheta*Deltat*omegatilde/(Lambdat*Lambdat)
# -
# <a id='nucostheta'></a>
#
# ## Step 10.e: $\nu_{\cos\theta}$ \[Back to [top](#toc)\]
# $$\label{nucostheta}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.47g) we have
#
# \begin{equation*}
# \nu_{\cos\theta} = \frac{ a^{2} \varpi^{2} \cos\theta \left( \varpi^{2} - \Delta_{t} \right) }{ \Lambda_{t} \Sigma }.
# \end{equation*}
#
# We define $a$ in [this cell](#a), $\varpi^{2}$ in [this cell](#w2), $\cos\theta$ in [this cell](#costheta), $\Delta_{t}$ in [this cell](#deltat), $\Lambda_{t}$ in [this cell](#lambdat), and $\Sigma$ in [this cell](#usigma).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
nucostheta = a*a*w2*costheta*(w2 - Deltat)/(Lambdat*Sigma)
# -
# <a id='mucostheta'></a>
#
# ## Step 10.f: $\mu_{\cos \theta}$ \[Back to [top](#toc)\]
# $$\label{mucostheta}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.47h) we have
#
# \begin{equation*}
# \mu_{\cos \theta} = \frac{ a^{2} \cos \theta }{ \Sigma }.
# \end{equation*}
#
# We define $a$ in [this cell](#a), $\cos \theta$ in [this cell](#costheta), and $\Sigma$ in [this cell](#usigma) below.
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
mucostheta = a*a*costheta/Sigma
# -
# <a id='lambdatprm'></a>
#
# ## Step 10.g: $\Lambda_{t}^{\prime}$ \[Back to [top](#toc)\]
# $$\label{lambdatprm}$$
#
# From the discussion after [BB2010](https://arxiv.org/abs/0912.3517) Equations (5.47), we know that the prime notation indicates a derivative with respect to $r$. Using the definiton of $\Lambda_{t}$ in [this cell](#lambdat), we have
#
# \begin{equation*}
# \Lambda_{t}^{\prime} = 4 \left( a^{2} + r^{2} \right) r - a^{2} \Delta_{t}^{\prime} \sin^{2} \theta.
# \end{equation*}
#
# We define $a$ in [this cell](#a), $r$ in [this cell](#r), $\Delta_{u}$ in [this cell](#deltau), and $\sin^{2}\theta$ in [this cell](#sin2theta).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Lambdatprm = 4*(a*a + r*r)*r - 2*a*a*Deltatprm*sin2theta
# -
# <a id='omegatildeprm'></a>
#
# ## Step 10.h: $\tilde{\omega}_{\rm fd}^{\prime}$ \[Back to [top](#toc)\]
# $$\label{omegatildeprm}$$
#
# From the discussion after [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.47), we know that the prime notation indicates a derivative with respect to $r$. Using the definiton of $\tilde{\omega}_{\rm fd}$ in [this cell](#omegatilde), we have
#
# \begin{equation*}
# \tilde{\omega}_{\rm fd}^{\prime} = 2 a M.
# \end{equation*}
#
# We define $a$ in [this cell](#a) and $M$ in [this cell](#m).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
omegatildeprm = 2*a
# -
# <a id='metpots'></a>
#
# # Step 11: The Deformed and Rescaled Metric Potentials \[Back to [top](#toc)\]
# $$\label{metpots}$$
#
# We collect here terms of the deformed and scaled metric potentils. See [BB2010](https://arxiv.org/abs/0912.3517) Equations (5.30)--(5.34) and (5.48)--(5.52).
# <a id='omega'></a>
#
# ## Step 11.a: $\omega$ \[Back to [top](#toc)\]
# $$\label{omega}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.31) we have
#
# \begin{equation*}
# \omega = \frac{ \tilde{\omega}_{\rm fd} }{ \Lambda_{t} }.
# \end{equation*}
#
# We define $\tilde{\omega}_{\rm fd}$ in [this cell](#omegatilde) and $\Lambda_{t}$ in [this cell](#lambdat).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
omega = omegatilde/Lambdat
# -
# <a id='exp2nu'></a>
#
# ## Step 11.b: $e^{2\nu}$ and $e^{\nu}$ \[Back to [top](#toc)\]
# $$\label{exp2nu}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.32), we have
#
# \begin{equation*}
# e^{2 \nu} = \frac{ \Delta_{t} \Sigma }{ \Lambda_t }.
# \end{equation*}
#
# It follows that
#
# \begin{equation*}
# e^{\nu} = \sqrt{ \frac{ \Delta_{t} \Sigma }{ \Lambda_t } }.
# \end{equation*}
#
# We define $\Delta_{t}$ in [this cell](#deltat), $\Sigma$ in [this cell](#usigma), and $\Lambda_{t}$ in [this cell](#lambdat).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
expnu = sp.sqrt(exp2nu)
exp2nu = Deltat*Sigma/Lambdat
# -
# <a id='btilde'></a>
#
# ## Step 11.c: $\tilde{B}$ \[Back to [top](#toc)\]
# $$\label{btilde}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.48), we have
#
# \begin{equation*}
# \tilde{B} = \sqrt{ \Delta_{t} }.
# \end{equation*}
#
# We define $\Delta_{t}$ in [this cell](#deltat).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Btilde = sp.sqrt(Deltat)
# -
# <a id='brtilde'></a>
#
# ## Step 11.d: $\tilde{B}_{r}$ \[Back to [top](#toc)\]
# $$\label{brtilde}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.49), we have
#
# \begin{equation*}
# \tilde{B}_{r} = \frac{ \sqrt{ \Delta_{r} } \Delta_{t}^{\prime} - 2 \Delta_{t} }{ 2 \sqrt{ \Delta_{r} \Delta_{t} } }.
# \end{equation*}
#
# We define $\Delta_{r}$ in [this cell](#deltar), $\Delta_{t}^{\prime}$ in [this cell](#deltatprm), and $\Delta_{t}$ in [this cell](#deltat).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Brtilde = (sp.sqrt(Deltar)*Deltatprm - 2*Deltat)/(2*sp.sqrt(Deltar*Deltat))
# -
# <a id='exp2mu'></a>
#
# ## Step 11.e: $e^{2\tilde{\mu}}$ and $e^{\tilde{\mu}}$ \[Back to [top](#toc)\]
# $$\label{exp2mu}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.50), we have
#
# \begin{equation*}
# e^{2 \tilde{\mu}} = \Sigma.
# \end{equation*}
#
# It follows that
#
# \begin{equation*}
# e^{\tilde{\mu}} = \sqrt{ \Sigma }.
# \end{equation*}
#
#
# We define $\Sigma$ in [this cell](#usigma).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
expmu = sp.sqrt(exp2mu)
exp2mu = Sigma
# -
# <a id='jtilde'></a>
#
# ## Step 11.f: $\tilde{J}$ \[Back to [top](#toc)\]
# $$\label{jtilde}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.51) we have
#
# \begin{equation*}
# \tilde{J} = \sqrt{ \Delta_{r} }.
# \end{equation*}
#
# We define $\Delta_{r}$ in [this cell](#deltar) below.
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Jtilde = sp.sqrt(Deltar)
# -
# <a id='q'></a>
#
# ## Step 11.g: $Q$ \[Back to [top](#toc)\]
# $$\label{q}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.52),
#
# \begin{equation*}
# Q = 1 + \underbrace{ \frac{ \Delta_{r} }{ \Sigma } \left( \hat{\bf p} \cdot {\bf n} \right)^{2} }_{\rm DrSipn2} + \underbrace{ \frac{ \Sigma }{ \Lambda_t \sin^{2} \theta } }_{\rm Q\ coefficient\ 1} \left( \smash[b]{ \underbrace{ \hat{\bf p} \cdot \boldsymbol{\xi} r }_{\rm pdotxir} } \right)^{2} + \underbrace{ \frac{ 1 }{ \Sigma \sin^{2} \theta } }_{\rm Q\ coefficient\ 2} \left( \smash[b]{ \underbrace{ \hat{\bf p} \cdot {\bf v} r }_{\rm pdotvr} } \right)^{2};
# \end{equation*}
#
# We group togther and compute $\frac{ \Delta_{r} }{ \Sigma } \left( \hat{\bf p} \cdot {\bf n} \right)^{2}$ in [this cell](#drsipn2), $\frac{ \Sigma }{ \Lambda_t \sin^{2} \theta }$ in [this cell](#qcoeff1), $\hat{\bf p} \cdot \boldsymbol{\xi} r$ in [this cell](#pdotxir), $\frac{ 1 }{ \Sigma \sin^{2} \theta }$ in [this cell](#qcoeff2), and $\hat{\bf p} \cdot {\bf v} r$ in [this cell](#pdotvr).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Qminus1 = Q - 1
Q = 1 + DrSipn2 + Qcoeff1*pdotxir*pdotxir + Qcoeff2*pdotvr*pdotvr
# -
# <a id='drsipn2'></a>
#
# ### Step 11.g.i: $\frac{ \Delta_{r} }{ \Sigma } \left( \hat{\bf p} \cdot {\bf n} \right)^{2}$ \[Back to [top](#toc)\]
# $$\label{drsipn2}$$
#
# We define $\Delta_{r}$ in [this cell](#deltar), $\Sigma$ in [this cell](#usigma), and $\hat{\bf p} \cdot {\bf n}$ in [this cell](#pdotn).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
DrSipn2 = Deltar*pdotn*pdotn/Sigma
# -
# <a id='qcoeff1'></a>
#
# ### Step 11.g.ii: Q Coefficient 1 \[Back to [top](#toc)\]
# $$\label{qcoeff1}$$
#
# We defined $Q$ coefficient 1 in [this cell](#q) as
#
# \begin{equation*}
# Q\ {\rm coefficient\ 1} = \frac{ \Sigma }{ \Lambda_t \sin^{2} \theta }
# \end{equation*}
#
# We define $\Sigma$ in [this cell](#usigma), $\Lambda_{t}$ in [this cell](#lambdat), and $\sin^{2} \theta$ in [this cell](#sin2theta).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Qcoeff1 = Sigma/(Lambdat*sin2theta)
# -
# <a id='qcoeff2'></a>
#
# ### Step 11.g.iii: Q Coefficient 2 \[Back to [top](#toc)\]
# $$\label{qcoeff2}$$
#
# We defined $Q$ coefficient 2 in [this cell](#q) as
#
# \begin{equation*}
# Q\ {\rm coefficient\ 2} = \frac{ 1 }{ \Sigma \sin^{2} \theta }
# \end{equation*}
#
# We define $\Sigma$ in [this cell](#usigma) and $\sin^{2} \theta$ in [this cell](#sin2theta).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Qcoeff2 = 1/(Sigma*sin2theta)
# -
# <a id='tort'></a>
#
# # Step 12: Tortoise Terms \[Back to [top](#toc)\]
# $$\label{tort}$$
#
# We collect here terms related to the conversion from Boyer-Lindquist coordinates to tortoise coordinates. Details of the converstion are given in the appendix of [P2010](https://arxiv.org/abs/0912.3466v2).
# <a id='pphi'></a>
#
# ## Step 12.a: $p_{\phi}$ \[Back to [top](#toc)\]
# $$\label{pphi}$$
#
# From the discussion preceding [BB2010](https://arxiv.org/abs/0912.3517) Equation (3.41), the phi component of the tortoise momentum $p_{\phi}$ is given by
#
# \begin{equation*}
# p_{\phi} = \hat{\bf p} \cdot \boldsymbol{\xi} r.
# \end{equation*}
#
# We define $\hat{\bf p} \cdot \boldsymbol{\xi} r$ in [this cell](#pdotxir).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
pphi = pdotxir
# -
# <a id='pdotvr'></a>
#
# ## Step 12.b: $\hat{\bf p} \cdot {\bf v} r$ \[Back to [top](#toc)\]
# $$\label{pdotvr}$$
#
# We have
#
# \begin{equation*}
# \hat{\bf p} \cdot {\bf v} r = \left( \hat{p}_{1} v_{1} + \hat{p}_{2} v_{2} + \hat{p}_{3} v_{3} \right) r
# \end{equation*}
#
# We define $\hat{\bf p}$ in [this cell](#hatp), ${\bf v}$ in [this cell](#v), and $r$ in [this cell](#r).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
pdotvr = (phat1*v1 + phat2*v2 + phat3*v3)*r
# -
# <a id='pdotn'></a>
#
# ## Step 12.c: $\hat{\bf p} \cdot {\bf n}$ \[Back to [top](#toc)\]
# $$\label{pdotn}$$
#
# We have
#
# \begin{equation*}
# \hat{\bf p} \cdot {\bf n} = \hat{p}_{1} n_{1} + \hat{p}_{2} n_{2} + \hat{p}_{3} n_{3}
# \end{equation*}
#
# We define $\hat{\bf p}$ in [this cell](#hatp) and ${\bf n}$ in [this cell](#n).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
pdotn = phat1*n1 + phat2*n2 + phat3*n3
# -
# <a id='pdotxir'></a>
#
# ## Step 12.d: $\hat{\bf p} \cdot \boldsymbol{\xi} r$ \[Back to [top](#toc)\]
# $$\label{pdotxir}$$
#
# We have
#
# \begin{equation*}
# \hat{\bf p} \cdot \boldsymbol{\xi} r = \left( \hat{p}_{1} \xi_{1} + \hat{p}_{2} \xi_{2} + \hat{p}_{3} \xi_{3} \right) r
# \end{equation*}
#
# We define $\hat{\bf p}$ in [this cell](#hatp), $\boldsymbol{\xi}$ in [this cell](#xi), and $r$ in [this cell](#r).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
pdotxir = (phat1*xi1 + phat2*xi2 + phat3*xi3)*r
# -
# <a id='hatp'></a>
#
# ## Step 12.e: $\hat{\bf p}$ \[Back to [top](#toc)\]
# $$\label{hatp}$$
#
# From the discussion after [BB2010](https://arxiv.org/abs/0912.3517) Equation (3.41), we have $\hat{\bf p} = {\bf p}/m$ where $m$ is the mass of a nonspinning test particle and ${\bf p}$ is *conjugate* momentum. Following Lines 319--321 of LALSimIMRSpinEOBHamiltonianPrec.c, we convert the Boyer-Lindquist momentum ${\bf p}$ to the tortoise momentum (see the appendix of [P2010](https://arxiv.org/abs/0912.3466v2)) via
#
# \begin{align*}
# \hat{\bf p} = {\bf p} + {\rm prT} \left( 1 - \frac{1}{\rm csi1} \right) {\bf n}
# \end{align*}
#
# We define prT in [this cell](#prt), csi1 in [this cell](#csi1), and ${\bf n}$ in [this cell](#n).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
phat1 = p1 + prT*(1 - 1/csi1)*n1
phat2 = p2 + prT*(1 - 1/csi1)*n2
phat3 = p3 + prT*(1 - 1/csi1)*n3
# -
# <a id='prt'></a>
#
# ## Step 12.f: prT \[Back to [top](#toc)\]
# $$\label{prt}$$
#
# The first component of the momenum vector, after conversion to tortoise coordinates (see the Appendix of [P2010](https://arxiv.org/abs/0912.3466v2)), is
#
# \begin{align*}
# {\rm prT} = {\rm csi2}\left( p_{1} n_{1} + p_{2} n_{2} + p_{3} n_{3} \right)
# \end{align*}
#
# We define csi2 in [this cell](#csi2) and ${\bf n}$ in [this cell](#n).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
prT = csi2*(p1*n1 + p2*n2 + p3*n3)
# -
# <a id='csi2'></a>
#
# ## Step 12.g: csi2 \[Back to [top](#toc)\]
# $$\label{csi2}$$
#
# From the transformation to tortoise coordinates in the Appendix of [P2010](https://arxiv.org/abs/0912.3466v2),
#
# \begin{equation*}
# {\rm csi2} = 1 + \left( \frac{1}{2} - \frac{1}{2}{\rm sign}\left( \frac{3}{2} - \tau \right) \right) \left( {\rm csi} - 1 \right)
# \end{equation*}
#
# We define csi in [this cell](#csi); $\tau$ is a tortoise coordinate ($\tau \in \left\{ 0, 1 ,2 \right\}$).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
csi2 = 1 + (sp.Rational(1,2) - sp.Rational(1,2)*sp.sign(sp.Rational(3,2) - tortoise))*(csi - 1)
# -
# <a id='csi1'></a>
#
# ## Step 12.h: csi1 \[Back to [top](#toc)\]
# $$\label{csi1}$$
#
# From the transformation to tortoise coordinates in the Appendix of [P2010](https://arxiv.org/abs/0912.3466v2),
#
# \begin{equation*}
# {\rm csi1} = 1 + \left( 1 - \left\lvert 1 - \tau \right\rvert \right) \left( {\rm csi} - 1 \right)
# \end{equation*}
#
# We define csi in [this cell](#csi); $\tau$ is a tortoise coordinate ($\tau \in \left\{ 0, 1 ,2 \right\}$).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
csi1 = 1 + (1 - sp.abs(1-tortoise))*(csi - 1)
# -
# <a id='csi'></a>
#
# ## Step 12.i: csi \[Back to [top](#toc)\]
# $$\label{csi}$$
#
# From the transformation to tortoise coordinates in the Appendix of [P2010](https://arxiv.org/abs/0912.3466v2),
#
# \begin{equation*}
# {\rm csi} = \frac{ \sqrt{ \Delta_{t} \Delta_{r} } }{ \varpi^{2} }.
# \end{equation*}
#
# We define $\Delta_{t}$ in [this cell](#deltat), $\Delta_{r}$ in [this cell](#deltar), and $\varpi^{2}$ in [this cell](#w2).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
csi = sp.sqrt(Deltar*Deltat)/w2
# -
# <a id='metric'></a>
#
# # Step 13: Metric Terms \[Back to [top](#toc)\]
# $$\label{metric}$$
#
# We collect here terms used to define the deformed Kerr metric. See [BB2010](https://arxiv.org/abs/0912.3517) Equations (5.38)--(5.40) and (5.71)--(5.75).
# <a id='lambdat'></a>
#
# ## Step 13.a: $\Lambda_{t}$ \[Back to [top](#toc)\]
# $$\label{lambdat}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.39),
#
# \begin{equation*}
# \Lambda_{t} = \varpi^{4} - a^{2} \Delta_{t} \sin^{2} \theta.
# \end{equation*}
#
# We define $\varpi^{2}$ in [this cell](#w2), $a$ in [this cell](#a), $\Delta_{t}$ in [this cell](#deltat), and $\sin^{2}\theta$ in [this cell](#sin2theta).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Lambdat = w2*w2 - a*a*Deltat*sin2theta
# -
# <a id='deltar'></a>
#
# ## Step 13.b: $\Delta_{r}$ \[Back to [top](#toc)\]
# $$\label{deltar}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.38),
#
# \begin{equation*}
# \Delta_{r} = \Delta_{t} D^{-1}.
# \end{equation*}
#
# We define $\Delta_{t}$ in [this cell](#deltat) and $D^{-1}$ in [this cell](#dinv).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Deltar = Deltat*Dinv
# -
# <a id='deltat'></a>
#
# ## Step 13.c: $\Delta_{t}$ \[Back to [top](#toc)\]
# $$\label{deltat}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.71), we have
#
# \begin{equation*}
# \Delta_{t} = r^{2} \Delta_{u}.
# \end{equation*}
#
# We define $\Delta_{u}$ in [this cell](#deltau) and $r$ in [this cell](#r).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Deltat = r*r*Deltau
# -
# <a id='deltatprm'></a>
#
# ## Step 13.d: $\Delta_{t}^{\prime}$ \[Back to [top](#toc)\]
# $$\label{deltatprm}$$
#
# From the discussion after [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.47), we know that the prime notation indicates a derivative with respect to $r$. Using the definition of [$\Delta_{t}$](#deltat), we have
#
# \begin{equation*}
# \Delta_{t}^{\prime} = 2 r \Delta_{u} + r^{2} \Delta_{u}^{\prime}.
# \end{equation*}
#
# We define $\Delta_{u}$ in [this cell](#deltau) and $r$ in [this cell](#r).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Deltatprm = 2*r*Deltau + r*r*Deltauprm
# -
# <a id='deltau'></a>
#
# ## Step 13.e: $\Delta_{u}$ \[Back to [top](#toc)\]
# $$\label{deltau}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.73), we have
#
# \begin{equation*}
# \Delta_u = \bar{\Delta}_{u} \left[ \smash[b]{\underbrace{ 1 + \eta \Delta_{0} + \eta \log \left( 1 + {\rm logarg} \right) }_{\Delta_{u}\ {\rm calibration\ term}}} \vphantom{\underbrace{1}_{n}} \right]
# \end{equation*}
#
# We compute $\bar{\Delta}_{u}$ in [this cell](#deltaubar) and $\Delta_{u}$ calibration term and logarg in [this cell](#deltaucalib). From the discussion after [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.47), we know that primes denote derivatives with respect to $r$. We have
#
# \begin{equation*}
# \Delta_u = \bar{\Delta}^{\prime}_{u} \left( \Delta_{u}\ {\rm calibration\ term} \right) + \bar{\Delta}_{u} \left( \Delta_{u}\ {\rm calibration\ term} \right)^{\prime}
# \end{equation*}
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Deltauprm = Deltaubarprm*Deltaucalib + Deltaubar*Deltaucalibprm
Deltau = Deltaubar*Deltaucalib
# -
# <a id='deltaubar'></a>
#
# ### Step 13.e.i: $\bar{\Delta}_{u}$ \[Back to [top](#toc)\]
# $$\label{deltaubar}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.75), we have
#
# \begin{equation*}
# \bar{\Delta}_u = \frac{ a^{2} u^{2} }{ M^{2} } + \frac{ 1 }{ \eta K - 1 } \left( 2 u + \frac{ 1 }{ \eta K - 1 } \right).
# \end{equation*}
#
# We define $a$ in [this cell](#a), $u$ in [this cell](#u), $M$ in [this cell](#m), $\eta$ in [this cell](#eta), and $K$ in [this cell](#k). From the discussion after [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.47), we know that primes denote derivatives with respect to $r$. We have
#
# \begin{equation*}
# \bar{\Delta}^{\prime}_u = \frac{ -2 a^{2} u^{3} }{ M^{2} } - \frac{ 2 u^{2} }{ \eta K - 1 }.
# \end{equation*}
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Deltaubarprm = -2*a*a*u*u*u - 2*u*u/(etaKminus1)
Deltaubar = a*a*u*u + (2*u + 1/etaKminus1)/etaKminus1
# -
# <a id='deltaucalib'></a>
#
# ### Step 13.e.ii: $\Delta_{u}$ Calibration Term \[Back to [top](#toc)\]
# $$\label{deltaucalib}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.73), we have
#
# \begin{align*}
# \Delta_u\ {\rm calibration\ term} &= 1 + \eta \Delta_{0} + \eta \log \left( 1 + \Delta_{1} u + \Delta_{2} u^{2} + \Delta_{3} u^{3} + \Delta_{4} u^{4} \right) \\
# &= 1 + \eta \left[ \Delta_{0} + \log \left( 1 + \Delta_{1} u + \Delta_{2} u^{2} + \Delta_{3} u^{3} + \Delta_{4} u^{4} \right) \right].
# \end{align*}
#
# In [T2014](https://arxiv.org/pdf/1311.2544.pdf) Equation (2) an additional term is and is defined in Equation (A2) of [this paper](https://arxiv.org/abs/1608.01907v2). We then have
#
# \begin{equation*}
# \Delta_u\ {\rm calibration\ term} = 1 + \eta \left[ \Delta_{0} + \log \left( 1 + \Delta_{1} u + \Delta_{2} u^{2} + \Delta_{3} u^{3} + \Delta_{4} u^{4} + \Delta_{5} u^{5} \right) \right].
# \end{equation*}
#
# The first term in the brackets of [SH2016](https://arxiv.org/abs/1608.01907) Equation (A2c) is $\Delta_{5\ell}$. That bring us to
#
# \begin{equation*}
# \Delta_u\ {\rm calibration\ term} = 1 + \eta \left[ \Delta_{0} + \log \left( 1 + \underbrace{ \Delta_{1} u + \Delta_{2} u^{2} + \Delta_{3} u^{3} + \Delta_{4} u^{4} + \Delta_{5} u^{5} + \Delta_{5\ell} u^{5} \ln\left(u\right) }_{ \rm logarg } \right) \right].
# \end{equation*}
#
# Note our notation for logarg. We define $u$ in [this cell](#u), $\eta$ in [this cell](#eta), and the calibration coefficients $\Delta_{i}$, $i \in \left\{0, 1, 2, 3, 4\right\}$, in [this cell](#calib_coeffs).
#
# From the discussion after [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.47), we know that primes denote derivatives with respect to $r$. We have
# \begin{equation*}
# \left( \Delta_u\ {\rm calibration\ term} \right)^{\prime} = \frac{ -\eta u^{2} \left( \Delta_{1} + 2 \Delta_{2} u + 3 \Delta_{3} u^{2} + 4 \Delta_{4} u^{3} + 5 \Delta_{5} u^{4} + 5 \Delta_{5\ell} u^{4} \ln\left( u \right) + \Delta_{5\ell} u^{5} u^{-1} \right) }{ 1 + {\rm logarg} }.
# \end{equation*}
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Deltaucalibprm = -eta*u*u*(Delta1 + u*(2*Delta2 + u*(3*Delta3
+ u*(4*Delta4 + u*(5*(Delta5 + Delta5l*sp.log(u)))))))/(1 + logarg)
Deltaucalib = 1 + eta*(Delta0 + sp.log(1 + logarg))
logarg = u*(Delta1 + u*(Delta2 + u*(Delta3 + u*(Delta4 + u*(Delta5 + Delta5l*sp.log(u))))))
# -
# <a id='calib_coeffs'></a>
#
# ### Step 13.e.iii: Calibration Coefficients $\Delta_{i}$, $i \in \left\{0, 1, 2, 3, 4\right\}$ \[Back to [top](#toc)\]
# $$\label{calib_coeffs}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equations (5.77)--(5.81), we have
#
# \begin{align*}
# \Delta_{0} &= K \left( \eta K - 2 \right) \\
# \Delta_{1} &= -2 \left( \eta K - 1 \right) \left( K + \Delta_{0} \right) \\
# \Delta_{2} &= \frac{1}{2} \Delta_{1} \left( -4 \eta K + \Delta_{1} + 4 \right) - \frac{ a^{2} }{ M^{2} } \left( \eta K - 1 \right)^{2} \Delta_{0}\\
# \Delta_{3} &= \frac{1}{3} \left[ -\Delta_{1}^{3} + 3 \left( \eta K - 1 \right) \Delta_{1}^{2} + 3 \Delta_{2} \Delta_{1} - 6 \left( \eta K - 1 \right) \left( -\eta K + \Delta_{2} + 1 \right) - 3 \frac{ a^{2} }{ M^{2} } \left( \eta K - 1 \right)^{2} \Delta_{1} \right] \\
# &= -\frac{1}{3}\Delta_{1}^{3} + \left( \eta K - 1 \right) \Delta_{1}^{2} + \Delta_{2} \Delta_{1} - 2 \left( \eta K - 1 \right) \left( \Delta_{2}- \left( \eta K - 1 \right) \right) - \frac{ a^{2} }{ M^{2} } \left( \eta K - 1 \right)^{2} \Delta_{1} \\
# \Delta_{4} &= \frac{1}{12} \left\{ 6 \frac{ a^{2} }{ M^{2} } \left( \Delta_{1}^{2} - 2 \Delta_{2} \right) \left( \eta K - 1 \right)^{2} + 3 \Delta_{1}^{4} - 8 \left( \eta K - 1 \right) \Delta_{1}^{3} - 12 \Delta_{2} \Delta_{1}^{2} + 12 \left[ 2 \left( \eta K - 1 \right) \Delta_{2} + \Delta_{3} \right] \Delta_{1} \right.\\
# &\left.\ \ \ \ \ + 12 \left( \frac{94}{3} - \frac{41}{32} \pi^{2} \right) \left( \eta K - 1 \right)^{2} + 6 \left[ \Delta_{2}^{2} - 4 \Delta_{3} \left( \eta K - 1 \right) \right] \right\} \\
# \Delta_{5} &= \left( \eta K - 1 \right)^{2} \left( -\frac{4237}{60} + \frac{128}{5}\gamma + \frac{2275}{512} \pi^{2} - \frac{1}{3} a^{2} \left\{ \Delta_{1}^{3} - 3 \Delta_{1} \Delta_{2} + 3 \Delta_{3} \right\} \right. \\
# &\ \ \ \ \ - \frac{ \Delta_{1}^{5} - 5 \Delta_{1}^{3} \Delta_{2} + 5 \Delta_{1} \Delta_{2}^{2} + 5 \Delta_{1}^{2} \Delta_{3} - 5 \Delta_{2} \Delta_{3} - 5 \Delta_{1} \Delta_{4} }{ 5 \left( \eta K - 1 \right)^{2} } \\
# &\left.\ \ \ \ \ + \frac{ \Delta_{1}^{4} - 4 \Delta_{1}^{2} \Delta_{2} + 2 \Delta_{2}^{2} + 4 \Delta_{1} \Delta_{3} - 4 \Delta_{4} }{ 2\left( \eta K - 1 \right) } + \frac{256}{5} \log(2) \right) \\
# \Delta_{5\ell} &= \frac{64}{5} \left( \eta K - 1 \right)^{2}.
# \end{align*}
#
# We define $K$ in [this cell](#k), $\eta$ in [this cell](#eta), $a$ in [this cell](#a), and $M$ in [this cell](#m). Note that the constant $\gamma$ is the Euler-Mascheroni, and the value is taken from the [LALSuite documentation](https://lscsoft.docs.ligo.org/lalsuite/lal/group___l_a_l_constants__h.html). In the Python code we donote $\gamma$ by EMgamma.
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Delta5l = etaKminus1*etaKminus1*sp.Rational(64,5)
Delta5 = etaKminus1*etaKminus1*((sp.Rational(-4237,60) + sp.Rational(128,5)*EMgamma
+ sp.Rational(2275,512)*sp.pi*sp.pi - sp.Rational(1,3)*a*a*(Delta1*Delta1*Delta1
- 3*Delta1*Delta2 + 3*Delta3) - (Delta1*Delta1*Delta1*Delta1*Delta1
- 5*Delta1*Delta1*Delta1*Delta2 + 5*Delta1*Delta2*Delta2 + 5*Delta1*Delta1*Delta3
- 5*Delta2*Delta3 - 5*Delta1*Delta4)/(5*etaKminus1*etaKminus1)
+ (Delta1*Delta1*Delta1*Delta1 - 4*Delta1*Delta1*Delta2 + 2*Delta2*Delta2
+ 4*Delta1*Delta3 - 4*Delta4)/(2*etaKminus1) + sp.Rational(256,5)*sp.log(2)))
Delta4 = sp.Rational(1,12)*(6*a*a*(Delta1*Delta1 - 2*Delta2)*etaKminus1*etaKminus1 + 3*Delta1*Delta1*Delta1*Delta1
- 8*etaKminus1*Delta1*Delta1*Delta1 -12*Delta2*Delta1*Delta1 + 12*(2*etaKminus1*Delta2
+ Delta3)*Delta1 + 12*(sp.Rational(94,3)
- sp.Rational(41,32)*sp.pi*sp.pi)*etaKminus1*etaKminus1 + 6*(Delta2*Delta2
- 4*Delta3*etaKminus1))
Delta3 = -sp.Rational(1,3)*Delta1*Delta1*Delta1 + etaKminus1*Delta1*Delta1 + Delta2*Delta1
-2*etaKminus1*(Delta2 - etaKminus1) - a*a*etaKminus1*etaKminus1*Delta1
Delta2 = sp.Rational(1,2)*Delta1*(Delta1 - 4*etaKminus1) - a*a*etaKminus1*etaKminus1*Delta0
Delta1 = -2*etaKminus1*(K + Delta0)
Delta0 = K*(eta*K - 2)
# -
# <a id='k'></a>
#
# ### Step 13.e.iv: $K$ \[Back to [top](#toc)\]
# $$\label{k}$$
#
# The calibration constant $K$ is defined in [T2014](https://arxiv.org/pdf/1311.2544.pdf) Section II:
#
# \begin{equation*}
# K = 1.712 - 1.804 \eta - 39.77 \eta^2 + 103.2 \eta^3.
# \end{equation*}
#
# The term $\eta K - 1$ is sufficiently common that we also define it:
#
# \begin{equation*}
# {\rm etaKminus1} = \eta K - 1.
# \end{equation*}
#
# We define $\eta$ in [this cell](#eta).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
etaKminus1 = eta*K - 1
K = 1.712 - 1.803949138004582*eta - 39.77229225266885*eta*eta + 103.16588921239249*eta*eta*eta
# -
# <a id='omegatilde'></a>
#
# ## Step 13.f: $\tilde{\omega}_{\rm fd}$ \[Back to [top](#toc)\]
# $$\label{omegatilde}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.40), we have
#
# \begin{equation*}
# \tilde{\omega}_{\rm fd} = 2 a M r + \omega_{1}^{\rm fd} \eta \frac{ a M^{3} }{ r } + \omega_{2}^{\rm fd} \eta \frac{ M a^{3} }{ r }.
# \end{equation*}
#
# From discussion after [BB2010](https://arxiv.org/abs/0912.3517) Equation (6.7), we set $\omega_{1}^{\rm fd} = \omega_{2}^{\rm fd} = 0$. Thus
#
# \begin{equation*}
# \tilde{\omega}_{\rm fd} = 2 a M r.
# \end{equation*}
#
# We define $a$ in [this cell](#a), $M$ in [this cell](#m), and $r$ in [this cell](#r) below.
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
omegatilde = 2*a*r
# -
# <a id='dinv'></a>
#
# ## Step 13.g: $D^{-1}$ \[Back to [top](#toc)\]
# $$\label{dinv}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.83),
#
# \begin{equation*}
# D^{-1} = 1 + \log \left[ 1 + 6 \eta u^{2} + 2 \left( 26 - 3 \eta \right) \eta u^{3} \right].
# \end{equation*}
#
# We define $\eta$ in [this cell](#eta) and $u$ in [this cell](#u).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Dinv = 1 + sp.log(1 + 6*eta*u*u + 2*(26 - 3*eta)*eta*u*u*u)
# -
# <a id='coord'></a>
#
# # Step 14: Terms Dependent on Coordinates \[Back to [top](#toc)\]
# $$\label{coord}$$
#
# We collect here terms directly dependend on the coordinates. See [BB2010](https://arxiv.org/abs/0912.3517) Equations (4.5) and (4.6).
# <a id='usigma'></a>
#
# ## Step 14.a: $\Sigma$ \[Back to [top](#toc)\]
# $$\label{usigma}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.5), we have
#
# \begin{equation*}
# \Sigma = r^{2} + a^{2} \cos^{2} \theta.
# \end{equation*}
#
# We define $r$ in [this cell](#r), $a$ in [this cell](#a), and $\cos \theta$ in [this cell](#costheta).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Sigma = r*r + a*a*costheta*costheta
# -
# <a id='w2'></a>
#
# ## Step 14.b: $\varpi^{2}$ \[Back to [top](#toc)\]
# $$\label{w2}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.7),
#
# \begin{equation*}
# \varpi^{2} = a^{2} + r^{2}.
# \end{equation*}
#
# We define $a$ in [this cell](#a) and $r$ in [this cell](#r).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
w2 = a*a + r*r
# -
# <a id='sin2theta'></a>
#
# ## Step 14.d: $\sin^{2} \theta$ \[Back to [top](#toc)\]
# $$\label{sin2theta}$$
#
# Using a common trigonometric idenitity,
#
# \begin{equation*}
# \sin^{2} \theta = 1 - \cos^{2} \theta.
# \end{equation*}
#
# We define $\cos \theta$ in [this cell](#costheta). Note that by construction (from discussion after [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.52))
#
# \begin{equation*}
# \xi^{2} = \sin^{2} \theta.
# \end{equation*}
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
xisq = sin2theta
sin2theta = 1 - costheta*costheta
# -
# <a id='costheta'></a>
#
# ## Step 14.e: $\cos \theta$ \[Back to [top](#toc)\]
# $$\label{costheta}$$
#
# From the discussion in [BB2010](https://arxiv.org/abs/0912.3517) after equation (5.52) (noting that ${\bf e}_{3} = \hat{\bf S}_{\rm Kerr}$),
#
# \begin{equation*}
# \cos \theta = {\bf e}_{3} \cdot {\bf n} = {\bf e}_{3}^{1} n^{1} + {\bf e}_{3}^{2} n^{2} + {\bf e}_{3}^{3} n^{3}.
# \end{equation*}
#
# We define ${\bf e}_{3}$ in [this cell](#e3) and ${\bf n}$ in [this cell](#n).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
costheta = e31*n1 + e32*n2 + e33*n3
# -
# <a id='vectors'></a>
#
# # Step 15: Important Vectors \[Back to [top](#toc)\]
# $$\label{vectors}$$
#
# We collect the vectors common for computing $H_{\rm real}$ (defined in [this cell](#hreal)) below.
# <a id='v'></a>
#
# ## Step 15.a: ${\bf v}$ \[Back to [top](#toc)\]
# $$\label{v}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (3.39), we have
#
# \begin{equation*}
# {\bf v} = {\bf n} \times \boldsymbol{\xi}.
# \end{equation*}
#
# We define ${\bf n}$ in [this cell](#n) and $\boldsymbol{\xi}$ in [this cell](#xi).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
v1 = n2*xi3 - n3*xi2
v2 = n3*xi1 - n1*xi3
v3 = n1*xi2 - n2*xi1
# -
# <a id='xi'></a>
#
# ## Step 15.b: $\boldsymbol{\xi}$ \[Back to [top](#toc)\]
# $$\label{xi}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (3.38), we have
#
# \begin{equation*}
# \boldsymbol{\xi} = {\bf e}_{3} \times {\bf n}.
# \end{equation*}
#
# We define ${\bf e}_{3}$ in [this cell](#e3) and ${\bf n}$ in [this cell](#n).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
xi1 = e32*n3 - e33*n2
xi2 = e31*n3 + e33*n1
xi3 = e31*n2 - e32*n1
# -
# <a id='e3'></a>
#
# ## Step 15.c: ${\bf e}_{3}$ \[Back to [top](#toc)\]
# $$\label{e3}$$
#
# From the discussion in [BB2010](https://arxiv.org/abs/0912.3517) after equation (5.52),
#
# \begin{equation*}
# {\bf e}_{3} = \hat{\bf S}_{\rm Kerr}.
# \end{equation*}
#
# We define $\hat{\bf S}_{\rm Kerr}$ in [this cell](#skerrhat).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
e31 = Skerrhat1
e32 = Skerrhat2
e33 = Skerrhat3
# -
# <a id='n'></a>
#
# ## Step 15.d: ${\bf n}$ \[Back to [top](#toc)\]
# $$\label{n}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (3.37), we have
#
# \begin{equation*}
# {\bf n} = \frac{\bf x }{ r }
# \end{equation*}
#
# where ${\bf x} = (x, y, z)$. We define $r$ in [this cell](#r).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
n1 = x/r
n2 = y/r
n3 = z/r
# -
# <a id='spin_combos'></a>
#
# # Step 16: Spin Combinations $\boldsymbol{\sigma}$, $\boldsymbol{\sigma}^{*}$, and ${\bf S}_{\rm Kerr}$ \[Back to [top](#toc)\]
# $$\label{spin_combos}$$
#
# We collect here various combinations of the spins.
# <a id='a'></a>
#
# ## Step 16.a: $a$ \[Back to [top](#toc)\]
# $$\label{a}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.9), we have
#
# \begin{equation*}
# a = \frac{ \left\lvert {\bf S}_{\rm Kerr} \right\rvert }{ M }.
# \end{equation*}
#
# We define $\left\lvert{\bf S}_{\rm Kerr}\right\rvert$ in [this cell](#skerrmag) and $M$ in [this cell](#m).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
a = Skerrmag
# -
# <a id='skerrhat'></a>
#
# ## Step 16.b: $\hat{\bf S}_{\rm Kerr}$ \[Back to [top](#toc)\]
# $$\label{skerrhat}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (4.24), we have
#
# \begin{equation*}
# \hat{\bf S}_{\rm Kerr} = \frac{ {\bf S}_{\rm Kerr} }{ \left\lvert {\bf S}_{\rm Kerr} \right\rvert }.
# \end{equation*}
#
# We define ${\bf S}_{\rm Kerr}$ in [this cell](#skerr).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Skerrhat1 = Skerr1/Skerrmag
Skerrhat2 = Skerr2/Skerrmag
Skerrhat3 = Skerr3/Skerrmag
# -
# <a id='skerrmag'></a>
#
# ## Step 16.c: $\left\lvert {\bf S}_{\rm Kerr} \right\rvert$ \[Back to [top](#toc)\]
# $$\label{skerrmag}$$
#
# We have
#
# \begin{equation*}
# \left\lvert {\bf S}_{\rm Kerr} \right\rvert = \sqrt{ {\bf S}_{\rm Kerr}^{1} {\bf S}_{\rm Kerr}^{1} + {\bf S}_{\rm Kerr}^{2} {\bf S}_{\rm Kerr}^{2} + {\bf S}_{\rm Kerr}^{3} {\bf S}_{\rm Kerr}^{3} }.
# \end{equation*}
#
# We define ${\bf S}_{\rm Kerr}$ in [this cell](#skerr).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Skerrmag = sp.sqrt(Skerr1*Skerr1 + Skerr2*Skerr2 + Skerr3*Skerr3)
# -
# <a id='skerr'></a>
#
# ## Step 16.d: ${\bf S}_{\rm Kerr}$ \[Back to [top](#toc)\]
# $$\label{skerr}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.64):
#
# \begin{equation*}
# {\bf S}_{\rm Kerr} = \boldsymbol{\sigma} + \frac{ 1 }{ c^{2} } \boldsymbol{\Delta}_{\sigma}.
# \end{equation*}
#
# In [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.67), $\boldsymbol{\Delta}_{\sigma} = 0$. Thus
#
# \begin{equation*}
# {\bf S}_{\rm Kerr} = \boldsymbol{\sigma}.
# \end{equation*}
#
# We define $\boldsymbol{\sigma}$ in [this cell](#sigma).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
Skerr1 = sigma1
Skerr2 = sigma2
Skerr3 = sigma3
# -
# <a id='sigma'></a>
#
# ## Step 16.e: $\boldsymbol{\sigma}$ \[Back to [top](#toc)\]
# $$\label{sigma}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.2):
#
# \begin{equation*}
# \boldsymbol{\sigma} = {\bf S}_{1} + {\bf S}_{2}.
# \end{equation*}
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
sigma1 = S1x + S2x
sigma2 = S1y + S2y
sigma3 = S1z + S2z
# -
# <a id='sigmastar'></a>
#
# ## Step 16.f: $\boldsymbol{\sigma}^{*}$ \[Back to [top](#toc)\]
# $$\label{sigmastar}$$
#
# From [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.3):
#
# \begin{equation*}
# \boldsymbol{\sigma}^{*} = \frac{ m_{2} }{ m_{1} } {\bf S}_{1} + \frac{ m_{1} }{ m_{2} }{\bf S}_{2}.
# \end{equation*}
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
sigmastar1 = m2/m1*S1x + m1/m2*S2x
sigmastar2 = m2/m1*S1y + m1/m2*S2y
sigmastar3 = m2/m1*S1z + m1/m2*S2z
# -
# <a id='fundquant'></a>
#
# # Step 17: Fundamental Quantities \[Back to [top](#toc)\]
# $$\label{fundquant}$$
#
# We collect here fundamental quantities from which we build $H_{\rm real}$ (defined in [this cell](#Hreal)).
# <a id='u'></a>
#
# ## Step 17.a: $u$ \[Back to [top](#toc)\]
# $$\label{u}$$
#
# From the discussion after [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.40),
#
# \begin{equation*}
# u = \frac{ M }{ r }.
# \end{equation*}
#
# We define $M$ in [this cell](#m) and $r$ in [this cell](#r).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
u = 1/r
# -
# <a id='r'></a>
#
# ## Step 17.b: $r$ \[Back to [top](#toc)\]
# $$\label{r}$$
#
# From the discussion after [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.52),
#
# \begin{equation*}
# r = \sqrt{ x^{2} + y^{2} + z^{2} }.
# \end{equation*}
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
r = sp.sqrt(x*x + y*y + z*z)
# -
# <a id='eta'></a>
#
# ## Step 17.c: $\eta$ \[Back to [top](#toc)\]
# $$\label{eta}$$
#
# From the discussion preceding [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.1),
#
# \begin{equation*}
# \eta = \frac{ \mu }{ M }.
# \end{equation*}
#
# We define $\mu$ in [this cell](#mu).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
eta = mu/M
# -
# <a id='mu'></a>
#
# ## Step 17.d: $\mu$ \[Back to [top](#toc)\]
# $$\label{mu}$$
#
# From the discussion preceding [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.1),
#
# \begin{equation*}
# \mu = \frac{ m_{1} m_{2} }{ M }.
# \end{equation*}
#
# We define $M$ in [this cell](#m).
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
mu = m1*m2/M
# -
# <a id='m'></a>
#
# ## Step 17.e: $M$ \[Back to [top](#toc)\]
# $$\label{m}$$
#
# From the discussion preceding [BB2010](https://arxiv.org/abs/0912.3517) Equation (5.1),
#
# \begin{equation*}
# M = m_{1} + m_{2}.
# \end{equation*}
# +
# %%writefile -a $Ccodesdir/Hamiltonian-Hreal_on_top.txt
M = m1 + m2
# -
# <a id='validation'></a>
#
# # Step 18: Validation \[Back to [top](#toc)\]
# $$\label{validation}$$
#
# The following code cell reverses the order of the expressions output to SEOBNR/Hamiltonian_on_top.txt and creates a Python function to validate the value of $H_{\rm real}$ against the SEOBNRv3 Hamiltonian value computed in LALSuite git commit bba40f21e9 for command-line input parameters
#
# -M 23 -m 10 -f 20 -X 0.01 -Y 0.02 -Z -0.03 -x 0.04 -y -0.05 -z 0.06.
# +
import numpy as np
import difflib, sys, os
# The subterms in the Hamiltonian expression are sometimes written on more than
# one line for readability in this Jupyter notebook. We first create a file of
# one-line expressions, Hamiltonian-Hreal_one_line_expressions.txt.
with open(os.path.join(Ccodesdir,"Hamiltonian-Hreal_one_line_expressions.txt"), "w") as output:
count = 0
# Read output of this notebook
for line in list(open("SEOBNR/Hamiltonian-Hreal_on_top.txt")):
# Read the first line
if count == 0:
prevline=line
#Check if prevline is a complete expression
elif "=" in prevline and "=" in line:
output.write("%s\n" % prevline.strip('\n'))
prevline=line
# Check if line needs to be adjoined to prevline
elif "=" in prevline and not "=" in line:
prevline = prevline.strip('\n')
prevline = (prevline+line).replace(" ","")
# Be sure to print the last line.
if count == len(list(open("SEOBNR/Hamiltonian-Hreal_on_top.txt")))-1:
if not "=" in line:
print("ERROR. Algorithm not robust if there is no equals sign on the final line. Sorry.")
sys.exit(1)
else:
output.write("%s" % line)
count = count + 1
# Now reverse the expressions and write them in a function
# This formulation is used to check that we get a reasonable H_real value
with open(os.path.join(Ccodesdir,"Hreal_on_bottom.py"), "w") as output:
output.write("import numpy as np\ndef compute_Hreal(m1=23., m2=10., EMgamma=0.577215664901532860606512090082402431, tortoise=1, x=2.129681018601393e+01, y=0.000000000000000e+00, z=0.000000000000000e+00, p1=0.000000000000000e+00, p2=2.335391115580442e-01, p3=-4.235164736271502e-22, S1x=4.857667584940312e-03, S1y=9.715161660389764e-03, S1z=-1.457311842632286e-02, S2x=3.673094582185491e-03, S2y=-4.591302628615413e-03, S2z=5.509696538546906e-03):\n")
for line in reversed(list(open("SEOBNR/Hamiltonian-Hreal_one_line_expressions.txt"))):
output.write(" %s\n" % line.rstrip().replace("sp.sqrt", "np.sqrt").replace("sp.Rational",
"np.divide").replace("sp.abs", "np.abs").replace("sp.log",
"np.log").replace("sp.sign", "np.sign").replace("sp.pi",
"np.pi"))
output.write(" return Hreal")
# Now reverse the expressions in a standalone text file
# This formulation is used as a harsher validation check that all expressions agree with a trusted list
with open(os.path.join(Ccodesdir,"Hamiltonian_expressions.txt-VALIDATION"), "w") as output:
for line in reversed(list(open("SEOBNR/Hamiltonian-Hreal_one_line_expressions.txt"))):
output.write("%s\n" % line.rstrip().replace("sp.sqrt", "np.sqrt").replace("sp.Rational",
"np.divide").replace("sp.abs", "np.abs").replace("sp.log",
"np.log").replace("sp.sign", "np.sign").replace("sp.pi",
"np.pi"))
print("Printing difference between notebook output and a trusted list of expressions...")
# Open the files to compare
file = "Hamiltonian_expressions.txt"
outfile = "Hamiltonian_expressions.txt-VALIDATION"
print("Checking file " + outfile)
with open(os.path.join(Ccodesdir,file), "r") as file1, open(os.path.join(Ccodesdir,outfile), "r") as file2:
# Read the lines of each file
file1_lines=[]
file2_lines=[]
for line in file1.readlines():
file1_lines.append(line.replace(" ", ""))
for line in file2.readlines():
file2_lines.append(line.replace(" ", ""))
num_diffs = 0
for line in difflib.unified_diff(file1_lines, file2_lines, fromfile=os.path.join(Ccodesdir,file), tofile=os.path.join(Ccodesdir,outfile)):
sys.stdout.writelines(line)
num_diffs = num_diffs + 1
if num_diffs == 0:
print("No difference. TEST PASSED!")
else:
print("ERROR: Disagreement found with the trusted file. See differences above.")
sys.exit(1)
# Import the new Hamiltonian function and the trusted Hamiltonian function
import SEOBNR.SEOBNR_v3_Hamiltonian as Hreal_trusted
import SEOBNR.Hreal_on_bottom as Hreal_new
# Compute the trusted and new Hamiltonian values; compare; exit if they disagree!
Hreal = Hreal_trusted.compute_Hreal()
Hreal_temp = Hreal_new.compute_Hreal()
if(np.abs(Hreal-Hreal_temp)>1e-14):
print("ERROR. You have broken the Hamiltonian computation!")
print("Hreal_trusted was ",Hreal)
print("...and Hreal is now ", Hreal_temp)
sys.exit(1)
# -
# <a id='latex_pdf_output'></a>
#
# # Step 19: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
# $$\label{latex_pdf_output}$$
#
# The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
# [Tutorial-SEOBNR_Documentation.pdf](Tutorial-SEOBNR_Documentation.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-SEOBNR_Documentation")
| Tutorial-SEOBNR_v3_Hamiltonian.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Django Shell-Plus
# language: python
# name: django_extensions
# ---
x=Rent.objects.get(pk=1)
print(x.customer)
print(x.car)
x=Customer.objects.get(pk=1)
print(x)
x.rent_set.all()
print(Customer.objects.get(pk=1).rent_set.all().query)
print(Car.objects.get(pk=1).rent_set.all().query)
#รถคันที่1มีใครยืมบ้าง
for i in Car.objects.get(pk=1).rent_set.all():
print(i.customer)
#รถคันที่2มีใครยืมบ้าง
for i in Car.objects.get(pk=2).rent_set.all():
print(i.customer)
#รถคันที่1มีใครยืมบ้าง
for i in Rent.objects.filter(car__pk=1):
print(i.customer)
# %%timeit -n1
for i in Car.objects.get(pk=1).rent_set.all():
print(i.customer)
# %%timeit -n1 #จับเวลา
for i in Rent.objects.filter(car__pk=1):
print(i.customer)
print(Rent.objects.filter(car__pk=1).query)
#ลูกค้าคนแรกยืมรถไปเป็นเงินทั้งหมดเท่าไหร่
sum=0
for i in Customer.objects.get(pk=1).rent_set.all():
sum+=i.cost
print(sum)
#ลูกค้าคนที่3ยืมรถไปเป็นเงินทั้งหมดเท่าไหร่
sum=0
for i in Customer.objects.get(pk=3).rent_set.all():
sum+=i.cost
print(sum)
#ลูกค้าคนแรกยืมรถไปเป็นเงินทั้งหมดเท่าไหร่ แบบที่2
sum=0
for i in Rent.objects.filter(customer__pk=1):
sum+=i.cost
print(sum)
Rent.objects.values('customer')
Rent.objects.values('customer').annotate(Sum('cost'))
Rent.objects.filter(customer__pk=1).values('customer').annotate(Sum('cost'))
print(Rent.objects.values('customer').annotate(Sum('cost')).query)
print(Rent.objects.filter(customer__pk=1).values('customer').annotate(Sum('cost')).query)
x=Rent.objects.filter(customer__pk=1).annotate(Sum('cost'))
for i in x:
print(i.cost__sum)
Rent.objects.filter(customer__pk=1).values('customer').annotate(Sum('cost'))
| rent_my_car/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="6bYaCABobL5q"
# ##### Copyright 2021 The TensorFlow Authors.
# + cellView="form" id="FlUw7tSKbtg4"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="_-fogOi3K7nR"
# # Use TF1.x models in TF2 workflows
#
# + [markdown] id="MfBg1C5NB3X0"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/guide/migrate/model_mapping"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/model_mapping.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/model_mapping.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/model_mapping.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="7-GwECUqrkqT"
# This guide provides an overview and examples of a [modeling code shim](https://en.wikipedia.org/wiki/Shim_(computing)) that you can employ to use your existing TF1.x models in TF2 workflows such as eager execution, `tf.function`, and distribution strategies with minimal changes to your modeling code.
# + [markdown] id="k_ezCbogxaqt"
# ## Scope of usage
#
# The shim described in this guide is designed for TF1.x models that rely on:
# 1. `tf.compat.v1.get_variable` and `tf.compat.v1.variable_scope` to control variable creation and reuse, and
# 1. Graph-collection based APIs such as `tf.compat.v1.global_variables()`, `tf.compat.v1.trainable_variables`, `tf.compat.v1.losses.get_regularization_losses()`, and `tf.compat.v1.get_collection()` to keep track of weights and regularization losses
#
# This includes most models built on top of `tf.compat.v1.layer`, `tf.contrib.layers` APIs, and [TensorFlow-Slim](https://github.com/google-research/tf-slim).
#
# The shim is **NOT** necessary for the following TF1.x models:
#
# 1. Stand-alone Keras models that already track all of their trainable weights and regularization losses via `model.trainable_weights` and `model.losses` respectively.
# 1. `tf.Module`s that already track all of their trainable weights via `module.trainable_variables`, and only create weights if they have not already been created.
#
# These models are likely to work in TF2 with eager execution and `tf.function`s out-of-the-box.
# + [markdown] id="3OQNFp8zgV0C"
# ## Setup
#
# Import TensorFlow and other dependencies.
# + id="EG2n3-qlD5mA"
# !pip uninstall -y -q tensorflow
# + id="mVfR3MBvD9Sc"
# !pip install -q tf-nightly
# + id="PzkV-2cna823"
import tensorflow as tf
import tensorflow.compat.v1 as v1
import sys
import numpy as np
from unittest import mock
from contextlib import contextmanager
# + [markdown] id="Ox4kn0DK8H0f"
# ## The `track_tf1_style_variables` decorator
#
# The key shim described in this guide is `tf.compat.v1.keras.utils.track_tf1_style_variables`, a decorator that you can use within methods belonging to `tf.keras.layers.Layer` and `tf.Module` to track TF1.x-style weights and capture regularization losses.
#
# Decorating a `tf.keras.layers.Layer`'s or `tf.Module`'s call methods with `tf.compat.v1.keras.utils.track_tf1_style_variables` allows variable creation and reuse via `tf.compat.v1.get_variable` (and by extension `tf.compat.v1.layers`) to work correctly inside of the decorated method rather than always creating a new variable on each call. It will also cause the layer or module to implicitly track any weights created or accessed via `get_variable` inside the decorated method.
#
# In addition to tracking the weights themselves under the standard
# `layer.variable`/`module.variable`/etc. properties, if the method belongs
# to a `tf.keras.layers.Layer`, then any regularization losses specified via the
# `get_variable` or `tf.compat.v1.layers` regularizer arguments will get
# tracked by the layer under the standard `layer.losses` property.
#
# This tracking mechanism enables using large classes of TF1.x-style model-forward-pass code inside of Keras layers or `tf.Module`s in TF2 even with TF2 behaviors enabled.
#
# + [markdown] id="Sq6IqZILmGmO"
# ## Usage examples
#
# The usage examples below demonstrate the modeling shims used to decorate `tf.keras.layers.Layer` methods, but except where they are specifically interacting with Keras features they are applicable when decorating `tf.Module` methods as well.
# + [markdown] id="YWGPh6KmkHq6"
# ### Layer built with tf.compat.v1.get_variable
#
# Imagine you have a layer implemented directly on top of `tf.compat.v1.get_variable` as follows:
#
# ```python
# def dense(self, inputs, units):
# out = inputs
# with tf.compat.v1.variable_scope("dense"):
# # The weights are created with a `regularizer`,
# kernel = tf.compat.v1.get_variable(
# shape=[out.shape[-1], units],
# regularizer=tf.keras.regularizers.L2(),
# initializer=tf.compat.v1.initializers.glorot_normal,
# name="kernel")
# bias = tf.compat.v1.get_variable(
# shape=[units,],
# initializer=tf.compat.v1.initializers.zeros,
# name="bias")
# out = tf.linalg.matmul(out, kernel)
# out = tf.compat.v1.nn.bias_add(out, bias)
# return out
# ```
# + [markdown] id="6sZWU7JSok2n"
# Use the shim to turn it into a layer and call it on inputs.
# + id="Q3eKkcKtS_N4"
class DenseLayer(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
out = inputs
with tf.compat.v1.variable_scope("dense"):
# The weights are created with a `regularizer`,
# so the layer should track their regularization losses
kernel = tf.compat.v1.get_variable(
shape=[out.shape[-1], self.units],
regularizer=tf.keras.regularizers.L2(),
initializer=tf.compat.v1.initializers.glorot_normal,
name="kernel")
bias = tf.compat.v1.get_variable(
shape=[self.units,],
initializer=tf.compat.v1.initializers.zeros,
name="bias")
out = tf.linalg.matmul(out, kernel)
out = tf.compat.v1.nn.bias_add(out, bias)
return out
layer = DenseLayer(10)
x = tf.random.normal(shape=(8, 20))
layer(x)
# + [markdown] id="JqXAlWnYgwcq"
# Access the tracked variables and the captured regularization losses like a standard Keras layer.
# + id="ZNz5HmkXg0B5"
layer.trainable_variables
layer.losses
# + [markdown] id="W0z9GmRlhM9X"
# To see that the weights get reused each time you call the layer, set all the weights to zero and call the layer again.
# + id="ZJ4vOu2Rf-I2"
print("Resetting variables to zero:", [var.name for var in layer.trainable_variables])
for var in layer.trainable_variables:
var.assign(var * 0.0)
# Note: layer.losses is not a live view and
# will get reset only at each layer call
print("layer.losses:", layer.losses)
print("calling layer again.")
out = layer(x)
print("layer.losses: ", layer.losses)
out
# + [markdown] id="WwEprtA-lOh6"
# You can use the converted layer directly in Keras functional model construction as well.
# + id="7E7ZCINHlaHU"
inputs = tf.keras.Input(shape=(20))
outputs = DenseLayer(10)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
x = tf.random.normal(shape=(8, 20))
model(x)
# Access the model variables and regularization losses
model.weights
model.losses
# + [markdown] id="ew5TTEyZkZGU"
# ### Model built with `tf.compat.v1.layers`
#
# Imagine you have a layer or model implemented directly on top of `tf.compat.v1.layers` as follows:
#
# ```python
# def model(self, inputs, units):
# with tf.compat.v1.variable_scope('model'):
# out = tf.compat.v1.layers.conv2d(
# inputs, 3, 3,
# kernel_regularizer="l2")
# out = tf.compat.v1.layers.flatten(out)
# out = tf.compat.v1.layers.dense(
# out, units,
# kernel_regularizer="l2")
# return out
# ```
# + [markdown] id="gZolXllfpVx6"
# Use the shim to turn it into a layer and call it on inputs.
# + id="cBpfSHWTTTCv"
class CompatV1LayerModel(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
with tf.compat.v1.variable_scope('model'):
out = tf.compat.v1.layers.conv2d(
inputs, 3, 3,
kernel_regularizer="l2")
out = tf.compat.v1.layers.flatten(out)
out = tf.compat.v1.layers.dense(
out, self.units,
kernel_regularizer="l2")
return out
layer = CompatV1LayerModel(10)
x = tf.random.normal(shape=(8, 5, 5, 5))
layer(x)
# + [markdown] id="OkG9oLlblfK_"
# Warning: For safety reasons, make sure to put all `tf.compat.v1.layers` inside of a non-empty-string `variable_scope`. This is because `tf.compat.v1.layers` with auto-generated names will always auto-increment the name outside of any variable scope. This means the requested variable names will mismatch each time you call the layer/module. So, rather than reusing the already-made weights it will create a new set of variables every call.
# + [markdown] id="zAVN6dy3p7ik"
# Access the tracked variables and captured regularization losses like a standard Keras layer.
# + id="HTRF99vJp7ik"
layer.trainable_variables
layer.losses
# + [markdown] id="kkNuEcyIp7ik"
# To see that the weights get reused each time you call the layer, set all the weights to zero and call the layer again.
# + id="4dk4XScdp7il"
print("Resetting variables to zero:", [var.name for var in layer.trainable_variables])
for var in layer.trainable_variables:
var.assign(var * 0.0)
out = layer(x)
print("layer.losses: ", layer.losses)
out
# + [markdown] id="7zD3a8PKzU7S"
# You can use the converted layer directly in Keras functional model construction as well.
# + id="Q88BgBCup7il"
inputs = tf.keras.Input(shape=(5, 5, 5))
outputs = CompatV1LayerModel(10)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
x = tf.random.normal(shape=(8, 5, 5, 5))
model(x)
# + id="2cioB6Zap7il"
# Access the model variables and regularization losses
model.weights
model.losses
# + [markdown] id="NBNODOx9ly6r"
# ### Capture batch normalization updates and model `training` args
#
# In TF1.x, you perform batch normalization like this:
#
# ```python
# x_norm = tf.compat.v1.layers.batch_normalization(x, training=training)
#
# # ...
#
# update_ops = tf.compat.v1.get_collection(tf.GraphKeys.UPDATE_OPS)
# train_op = optimizer.minimize(loss)
# train_op = tf.group([train_op, update_ops])
# ```
# Note that:
# 1. The batch normalization moving average updates are tracked by `get_collection` which was called separately from the layer
# 2. `tf.compat.v1.layers.batch_normalization` requires a `training` argument (generally called `is_training` when using TF-Slim batch normalization layers)
#
# In TF2, due to [eager execution](https://www.tensorflow.org/guide/eager) and automatic control dependencies, the batch normalization moving average updates will be executed right away. There is no need to separately collect them from the updates collection and add them as explicit control dependencies.
#
# Additionally, if you give your `tf.keras.layers.Layer`'s forward pass method a `training` argument, Keras will be able to pass the current training phase and any nested layers to it just like it does for any other layer. See the API docs for `tf.keras.Model` for more information on how Keras handles the `training` argument.
#
# If you are decorating `tf.Module` methods, you need to make sure to manually pass all `training` arguments as needed. However, the batch normalization moving average updates will still be applied automatically with no need for explicit control dependencies.
#
# The following code snippets demonstrate how to embed batch normalization layers in the shim and how using it in a Keras model works (applicable to `tf.keras.layers.Layer`).
# + id="<KEY>"
class CompatV1BatchNorm(tf.keras.layers.Layer):
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
print("Forward pass called with `training` =", training)
with v1.variable_scope('batch_norm_layer'):
return v1.layers.batch_normalization(x, training=training)
# + id="NGuvvElmY-fu"
print("Constructing model")
inputs = tf.keras.Input(shape=(5, 5, 5))
outputs = CompatV1BatchNorm()(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
print("Calling model in inference mode")
x = tf.random.normal(shape=(8, 5, 5, 5))
model(x, training=False)
print("Moving average variables before training: ",
{var.name: var.read_value() for var in model.non_trainable_variables})
# Notice that when running TF2 and eager execution, the batchnorm layer directly
# updates the moving averages while training without needing any extra control
# dependencies
print("calling model in training mode")
model(x, training=True)
print("Moving average variables after training: ",
{var.name: var.read_value() for var in model.non_trainable_variables})
# + [markdown] id="Gai4ikpmeRqR"
# ### Variable-scope based variable reuse
# Any variable creations in the forward pass based on `get_variable` will maintain the same variable naming and reuse semantics that variable scopes have in TF1.x. This is true as long as you have at least one non-empty outer scope for any `tf.compat.v1.layers` with auto-generated names, as mentioned above.
#
# Note: Naming and reuse will be scoped to within a single layer/module instance. Calls to `get_variable` inside one shim-decorated layer or module will not be able to refer to variables created inside of layers or modules. You can get around this by using Python references to other variables directly if need be, rather than accessing variables via `get_variable`.
# + [markdown] id="6PzYZdX2nMVt"
# ### Eager execution & `tf.function`
#
# As seen above, decorated methods for `tf.keras.layers.Layer` and `tf.Module` run inside of eager execution and are also compatible with `tf.function`. This means you can use [pdb](https://docs.python.org/3/library/pdb.html) and other interactive tools to step through your forward pass as it is running.
#
# Warning: Although it is perfectly safe to call your shim-decorated layer/module methods from *inside* of a `tf.function`, it is not safe to put `tf.function`s inside of your shim-decorated methods if those `tf.functions` contain `get_variable` calls. Entering a `tf.function` resets `variable_scope`s, which means the TF1.x-style variable-scope-based variable reuse that the shim mimics will break down in this setting.
# + [markdown] id="aPytVgZWnShe"
# ### Distribution strategies
#
# Calls to `get_variable` inside of `@track_tf1_style_variables`-decorated layer or module methods use standard `tf.Variable` variable creations under the hood. This means you can use them with the various distribution strategies available with `tf.distribute` such as `MirroredStrategy` and `TPUStrategy`.
# + [markdown] id="_DcK24FOA8A2"
# ## Nesting `tf.Variable`s, `tf.Module`s, `tf.keras.layers` & `tf.keras.models` in decorated calls
#
# Decorating your layer call in `tf.compat.v1.keras.utils.track_tf1_style_variables` will only add automatic implicit tracking of variables created (and reused) via `tf.compat.v1.get_variable`. It will not capture weights directly created by `tf.Variable` calls, such as those used by typical Keras layers and most `tf.Module`s. You still need to explicitly track these in the same way you would for any other Keras layer or `tf.Module`.
#
# If you need to embed `tf.Variable` calls, Keras layers/models, or `tf.Module`s in your decorators (either because you are following the incremental migration to Native TF2 described later in this guide, or because your TF1.x code partially consisted of Keras modules):
# * Explicitly make sure that the variable/module/layer is only created once
# * Explicitly attach them as instance attributes just as you would when defining a [typical module/layer](https://www.tensorflow.org/guide/intro_to_modules#defining_models_and_layers_in_tensorflow)
# * Explicitly reuse the already-created object in follow-on calls
#
# This ensures that weights are not created new and are correctly resued. Additionally, this also ensures that existing weights and regularization losses get tracked.
#
# Here is an example of how this could look:
# + id="mrRPPoJ5ap5U"
class WrappedDenseLayer(tf.keras.layers.Layer):
def __init__(self, units, **kwargs):
super().__init__(**kwargs)
self.units = units
self._dense_model = None
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
# Create the nested tf.variable/module/layer/model
# only if it has not been created already
if not self._dense_model:
inp = tf.keras.Input(shape=inputs.shape)
dense_layer = tf.keras.layers.Dense(
self.units, name="dense",
kernel_regularizer="l2")
self._dense_model = tf.keras.Model(
inputs=inp, outputs=dense_layer(inp))
return self._dense_model(inputs)
layer = WrappedDenseLayer(10)
layer(tf.ones(shape=(5, 5)))
# + [markdown] id="Lo9h6wc6bmEF"
# The weights are correctly tracked:
# + id="Qt6USaTVbauM"
assert len(layer.weights) == 2
weights = {x.name: x for x in layer.variables}
assert set(weights.keys()) == {"dense/bias:0", "dense/kernel:0"}
layer.weights
# + [markdown] id="oyH4lIcPb45r"
# As is the regularization loss (if present):
# + id="N7cmuhRGbfFt"
regularization_loss = tf.add_n(layer.losses)
regularization_loss
# + [markdown] id="FsTgnydkdezQ"
# ### Guidance on variable names
#
# Explicit `tf.Variable` calls and Keras layers use a different layer name / variable name autogeneration mechanism than you may be used to from the combination of `get_variable` and `variable_scopes`. Although the shim will make your variable names match for variables created by `get_variable` even when going from TF1.x graphs to TF2 eager execution & `tf.function`, it cannot guarantee the same for the variable names generated for `tf.Variable` calls and Keras layers that you embed within your method decorators. It is even possible for multiple variables to share the same name in TF2 eager execution and `tf.function`.
#
# You should take special care with this when following the sections on validating correctness and mapping TF1.x checkpoints later on in this guide.
# + [markdown] id="mSFaHTCvhUso"
# ### Nesting layers/modules that use `@track_tf1_style_variables`
#
# If you are nesting one layer that uses the `@track_tf1_style_variables` decorator inside of another, you should treat it the same way you would treat any Keras layer or `tf.Module` that did not use `get_variable` to create its variables.
#
# For example,
# + id="SI5V-1JLhTfW"
class NestedLayer(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
out = inputs
with tf.compat.v1.variable_scope("dense"):
# The weights are created with a `regularizer`,
# so the layer should track their regularization losses
kernel = tf.compat.v1.get_variable(
shape=[out.shape[-1], self.units],
regularizer=tf.keras.regularizers.L2(),
initializer=tf.compat.v1.initializers.glorot_normal,
name="kernel")
bias = tf.compat.v1.get_variable(
shape=[self.units,],
initializer=tf.compat.v1.initializers.zeros,
name="bias")
out = tf.linalg.matmul(out, kernel)
out = tf.compat.v1.nn.bias_add(out, bias)
return out
class WrappedDenseLayer(tf.keras.layers.Layer):
def __init__(self, units, **kwargs):
super().__init__(**kwargs)
self.units = units
# Only create the nested tf.variable/module/layer/model
# once, and then reuse it each time!
self._dense_layer = NestedLayer(self.units)
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
with tf.compat.v1.variable_scope('outer'):
outputs = tf.compat.v1.layers.dense(inputs, 3)
outputs = tf.compat.v1.layers.dense(inputs, 4)
return self._dense_layer(outputs)
layer = WrappedDenseLayer(10)
layer(tf.ones(shape=(5, 5)))
# Recursively track weights and regularization losses
layer.trainable_weights
layer.losses
# + [markdown] id="DkEkLnGbipSS"
# Notice that `variable_scope`s set in the outer layer may affect the naming of variables set in the nested layer, *but* `get_variable` will not share variables by name across the outer shim-based layer and the nested shim-based layer even if they have the same name, because the nested and outer layer utilize different internal variable stores.
# + [markdown] id="PfbiY08UizLz"
# As mentioned previously, if you are using a shim-decorated `tf.Module` there is no `losses` property to recursively and automatically track the regularization loss of your nested layer, and you will have to track it separately.
# + [markdown] id="CaP7fxoUWfMm"
# ### Using `tf.compat.v1.make_template` in the decorated method
#
# **It is highly recommended you directly use `tf.compat.v1.keras.utils.track_tf1_style_variables` instead of using `tf.compat.v1.make_template`, as it is a thinner layer on top of TF2**.
#
# Follow the guidance in this section for prior TF1.x code that was already relying on `tf.compat.v1.make_template`.
#
# Because `tf.compat.v1.make_template` wraps code that uses `get_variable`, the `track_tf1_style_variables` decorator allows you to use these templates in layer calls and successfully track the weights and regularization losses.
#
# However, do make sure to call `make_template` only once and then reuse the same template in each layer call. Otherwise, a new template will be created each time you call the layer along with a new set of variables.
#
# For example,
# + id="iHEQN8z44dbK"
class CompatV1TemplateScaleByY(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
def my_op(x, scalar_name):
var1 = tf.compat.v1.get_variable(scalar_name,
shape=[],
regularizer=tf.compat.v1.keras.regularizers.L2(),
initializer=tf.compat.v1.constant_initializer(1.5))
return x * var1
self.scale_by_y = tf.compat.v1.make_template('scale_by_y', my_op, scalar_name='y')
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
with tf.compat.v1.variable_scope('layer'):
# Using a scope ensures the `scale_by_y` name will not be incremented
# for each instantiation of the layer.
return self.scale_by_y(inputs)
layer = CompatV1TemplateScaleByY()
out = layer(tf.ones(shape=(2, 3)))
print("weights:", layer.weights)
print("regularization loss:", layer.losses)
print("output:", out)
# + [markdown] id="3vKTJ7IsTEe8"
# Warning: Avoid sharing the same `make_template`-created template across multiple layer instances as it may break the variable and regularization loss tracking mechanisms of the shim decorator. Additionally, if you plan to use the same `make_template` name inside of multiple layer instances then you should nest the created template's usage inside of a `variable_scope`. If not, the generated name for the template's `variable_scope` will increment with each new instance of the layer. This could alter the weight names in unexpected ways.
# + [markdown] id="P4E3-XPhWD2N"
# ## Incremental migration to Native TF2
#
# As mentioned earlier, `track_tf1_style_variables` allows you to mix TF2-style object-oriented `tf.Variable`/`tf.keras.layers.Layer`/`tf.Module` usage with legacy `tf.compat.v1.get_variable`/`tf.compat.v1.layers`-style usage inside of the same decorated module/layer.
#
# This means that after you have made your TF1.x model fully-TF2-compatible, you can write all new model components with native (non-`tf.compat.v1`) TF2 APIs and have them interoperate with your older code.
#
# However, if you continue to modify your older model components, you may also choose to incrementally switch your legacy-style `tf.compat.v1` usage over to the purely-native object-oriented APIs that are recommended for newly written TF2 code.
#
# `tf.compat.v1.get_variable` usage can be replaced with either `self.add_weight` calls if you are decorating a Keras layer/model, or with `tf.Variable` calls if you are decorating Keras objects or `tf.Module`s.
#
# Both functional-style and object-oriented `tf.compat.v1.layers` can generally be replaced with the equivalent `tf.keras.layers` layer with no argument changes required.
#
# You may also consider chunks parts of your model or common patterns into individual layers/modules during your incremental move to purely-native APIs, which may themselves use `track_tf1_style_variables`.
#
# ### A note on Slim and contrib.layers
#
# A large amount of older TF 1.x code uses the [Slim](https://ai.googleblog.com/2016/08/tf-slim-high-level-library-to-define.html) library, which was packaged with TF 1.x as `tf.contrib.layers`. Converting code using Slim to native TF 2 is more involved than converting `v1.layers`. In fact, it may make sense to convert your Slim code to `v1.layers` first, then convert to Keras. Below is some general guidance for converting Slim code.
#
# - Ensure all arguments are explicit. Remove `arg_scopes` if possible. If you still need to use them, split `normalizer_fn` and `activation_fn` into their own layers.
# - Separable conv layers map to one or more different Keras layers (depthwise, pointwise, and separable Keras layers).
# - Slim and `v1.layers` have different argument names and default values.
# - Note that some arguments have different scales.
# + [markdown] id="RFoULo-gazit"
# ### Migration to Native TF2 ignoring checkpoint compatibility
#
# The following code sample demonstrates an incremental move of a model to purely-native APIs without considering checkpoint compatibility.
# + id="dPO9YJsb6r-D"
class CompatModel(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
with tf.compat.v1.variable_scope('model'):
out = tf.compat.v1.layers.conv2d(
inputs, 3, 3,
kernel_regularizer="l2")
out = tf.compat.v1.layers.flatten(out)
out = tf.compat.v1.layers.dropout(out, training=training)
out = tf.compat.v1.layers.dense(
out, self.units,
kernel_regularizer="l2")
return out
# + [markdown] id="fp16xK6Oa8k9"
# Next, replace the `compat.v1` APIs with their native object-oriented equivalents in a piecewise manner. Start by switching the convolution layer to a Keras object created in the layer constructor.
# + id="LOj1Swe16so3"
class PartiallyMigratedModel(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
self.conv_layer = tf.keras.layers.Conv2D(
3, 3,
kernel_regularizer="l2")
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
with tf.compat.v1.variable_scope('model'):
out = self.conv_layer(inputs)
out = tf.compat.v1.layers.flatten(out)
out = tf.compat.v1.layers.dropout(out, training=training)
out = tf.compat.v1.layers.dense(
out, self.units,
kernel_regularizer="l2")
return out
# + [markdown] id="kzJF0H0sbce8"
# Use the deterministic number generation test tool to verify that this incremental change leaves the model with the same behavior as before.
# + id="VRTg0bQlcPeP"
# import tensorflow.python.framework.random_seed as random_seed
seed_implementation = sys.modules[tf.compat.v1.get_seed.__module__]
class DeterministicTestTool(object):
def __init__(self, seed: int = 42, mode='constant'):
"""Set mode to 'constant' or 'num_random_ops'. Defaults to 'constant'."""
if mode not in {'constant', 'num_random_ops'}:
raise ValueError("Mode arg must be 'constant' or 'num_random_ops'. " +
"Got: {}".format(mode))
self._mode = mode
self._seed = seed
self.operation_seed = 0
self._observed_seeds = set()
def scope(self):
tf.random.set_seed(self._seed)
def _get_seed(_):
"""Wraps TF get_seed to make deterministic random generation easier.
This makes a variable's initialization (and calls that involve random
number generation) depend only on how many random number generations
were used in the scope so far, rather than on how many unrelated
operations the graph contains.
Returns:
Random seed tuple.
"""
op_seed = self.operation_seed
if self._mode == "constant":
tf.random.set_seed(op_seed)
else:
if op_seed in self._observed_seeds:
raise ValueError(
'This `DeterministicTestTool` object is trying to re-use the ' +
'already-used operation seed {}. '.format(op_seed) +
'It cannot guarantee random numbers will match between eager ' +
'and sessions when an operation seed is reused. ' +
'You most likely set ' +
'`operation_seed` explicitly but used a value that caused the ' +
'naturally-incrementing operation seed sequences to overlap ' +
'with an already-used seed.')
self._observed_seeds.add(op_seed)
self.operation_seed += 1
return (self._seed, op_seed)
# mock.patch internal symbols to modify the behavior of TF APIs relying on them
return mock.patch.object(seed_implementation, 'get_seed', wraps=_get_seed)
# + id="MTJq0qW9_Tz2"
random_tool = DeterministicTestTool(mode='num_random_ops')
with random_tool.scope():
layer = CompatModel(10)
inputs = tf.random.normal(shape=(10, 5, 5, 5))
original_output = layer(inputs)
# Grab the regularization loss as well
original_regularization_loss = tf.math.add_n(layer.losses)
print(original_regularization_loss)
# + id="X4Wq3wuaHjEV"
random_tool = DeterministicTestTool(mode='num_random_ops')
with random_tool.scope():
layer = PartiallyMigratedModel(10)
inputs = tf.random.normal(shape=(10, 5, 5, 5))
migrated_output = layer(inputs)
# Grab the regularization loss as well
migrated_regularization_loss = tf.math.add_n(layer.losses)
print(migrated_regularization_loss)
# + id="mMMXS7EHjvCy"
# Verify that the regularization loss and output both match
np.testing.assert_allclose(original_regularization_loss.numpy(), migrated_regularization_loss.numpy())
np.testing.assert_allclose(original_output.numpy(), migrated_output.numpy())
# + [markdown] id="RMxiMVFwbiQy"
# You have now replaced all of the individual `compat.v1.layers` with native Keras layers.
# + id="3dFCnyYc9DrX"
class NearlyFullyNativeModel(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
self.conv_layer = tf.keras.layers.Conv2D(
3, 3,
kernel_regularizer="l2")
self.flatten_layer = tf.keras.layers.Flatten()
self.dense_layer = tf.keras.layers.Dense(
self.units,
kernel_regularizer="l2")
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
with tf.compat.v1.variable_scope('model'):
out = self.conv_layer(inputs)
out = self.flatten_layer(out)
out = self.dense_layer(out)
return out
# + id="QGPqEjkGHgar"
random_tool = DeterministicTestTool(mode='num_random_ops')
with random_tool.scope():
layer = NearlyFullyNativeModel(10)
inputs = tf.random.normal(shape=(10, 5, 5, 5))
migrated_output = layer(inputs)
# Grab the regularization loss as well
migrated_regularization_loss = tf.math.add_n(layer.losses)
print(migrated_regularization_loss)
# + id="uAs60eCdj6x_"
# Verify that the regularization loss and output both match
np.testing.assert_allclose(original_regularization_loss.numpy(), migrated_regularization_loss.numpy())
np.testing.assert_allclose(original_output.numpy(), migrated_output.numpy())
# + [markdown] id="oA6viSo3bo3y"
# Finally, remove both any remaining (no-longer-needed) `variable_scope` usage and the `track_tf1_style_variables` decorator itself.
#
# You are now left with a version of the model that uses entirely native APIs.
# + id="mIHpHWIRDunU"
class FullyNativeModel(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
self.conv_layer = tf.keras.layers.Conv2D(
3, 3,
kernel_regularizer="l2")
self.flatten_layer = tf.keras.layers.Flatten()
self.dense_layer = tf.keras.layers.Dense(
self.units,
kernel_regularizer="l2")
def call(self, inputs):
out = self.conv_layer(inputs)
out = self.flatten_layer(out)
out = self.dense_layer(out)
return out
# + id="ttAmiCvLHW54"
random_tool = DeterministicTestTool(mode='num_random_ops')
with random_tool.scope():
layer = FullyNativeModel(10)
inputs = tf.random.normal(shape=(10, 5, 5, 5))
migrated_output = layer(inputs)
# Grab the regularization loss as well
migrated_regularization_loss = tf.math.add_n(layer.losses)
print(migrated_regularization_loss)
# + id="ym5DYtT4j7e3"
# Verify that the regularization loss and output both match
np.testing.assert_allclose(original_regularization_loss.numpy(), migrated_regularization_loss.numpy())
np.testing.assert_allclose(original_output.numpy(), migrated_output.numpy())
# + [markdown] id="oX4pdrzycIsa"
# ### Maintaining checkpoint compatibility during migration to Native TF2
#
# The above migration process to native TF2 APIs changed both the variable names (as Keras APIs produce very different weight names), and the object-oriented paths that point to different weights in the model. The impact of these changes is that they will have broken both any existing TF1-style name-based checkpoints or TF2-style object-oriented checkpoints.
#
# However, in some cases, you might be able to take your original name-based checkpoint and find a mapping of the variables to their new names with approaches like the one detailed in the [Reusing TF1.x checkpoints guide](./reusing_checkpoints.ipynb).
#
# Some tips to making this feasible are as follows:
# - Variables still all have a `name` argument you can set.
# - Keras models also take a `name` argument as which they set as the prefix for their variables.
# - The `v1.name_scope` function can be used to set variable name prefixes. This is very different from `tf.variable_scope`. It only affects names, and doesn't track variables and reuse.
#
# With the above pointers in mind, the following code samples demonstrate a workflow you can adapt to your code to incrementally update part of a model while simultaneously updating checkpoints.
#
# Note: Due to the complexity of variable naming with Keras layers, this is not guaranteed to work for all use cases.
# + [markdown] id="EFmMY3dcx3mR"
# 1. Begin by switching functional-style `tf.compat.v1.layers` to their object-oriented versions.
# + id="cRxCFmNjl2ta"
class FunctionalStyleCompatModel(tf.keras.layers.Layer):
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
with tf.compat.v1.variable_scope('model'):
out = tf.compat.v1.layers.conv2d(
inputs, 3, 3,
kernel_regularizer="l2")
out = tf.compat.v1.layers.conv2d(
out, 4, 4,
kernel_regularizer="l2")
out = tf.compat.v1.layers.conv2d(
out, 5, 5,
kernel_regularizer="l2")
return out
layer = FunctionalStyleCompatModel()
layer(tf.ones(shape=(10, 10, 10, 10)))
[v.name for v in layer.weights]
# + [markdown] id="QvzUyXxjydAd"
# 2. Next, assign the compat.v1.layer objects and any variables created by `compat.v1.get_variable` as properties of the `tf.keras.layers.Layer`/`tf.Module` object whose method is decorated with `track_tf1_style_variables` (note that any object-oriented TF2 style checkpoints will now save out both a path by variable name and the new object-oriented path).
# + id="02jMQkJFmFwl"
class OOStyleCompatModel(tf.keras.layers.Layer):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.conv_1 = tf.compat.v1.layers.Conv2D(
3, 3,
kernel_regularizer="l2")
self.conv_2 = tf.compat.v1.layers.Conv2D(
4, 4,
kernel_regularizer="l2")
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
with tf.compat.v1.variable_scope('model'):
out = self.conv_1(inputs)
out = self.conv_2(out)
out = tf.compat.v1.layers.conv2d(
out, 5, 5,
kernel_regularizer="l2")
return out
layer = OOStyleCompatModel()
layer(tf.ones(shape=(10, 10, 10, 10)))
[v.name for v in layer.weights]
# + [markdown] id="8evFpd8Nq63v"
# 3. Resave a loaded checkpoint at this point to save out paths both by the variable name (for compat.v1.layers), or by the object-oriented object graph.
# + id="7neFr-9pqmJX"
weights = {v.name: v for v in layer.weights}
assert weights['model/conv2d/kernel:0'] is layer.conv_1.kernel
assert weights['model/conv2d_1/bias:0'] is layer.conv_2.bias
# + [markdown] id="pvsi743Xh9wn"
# 4. You can now swap out the object-oriented `compat.v1.layers` for native Keras layers while still being able to load the recently-saved checkpoint. Ensure that you preserve variable names for the remaining `compat.v1.layers` by still recording the auto-generated `variable_scopes` of the replaced layers. These switched layers/variables will now only use the object attribute path to the variables in the checkpoint instead of the variable name path.
#
# In general, you can replace usage of `compat.v1.get_variable` in variables attached to properties by:
#
# * Switching them to using `tf.Variable`, **OR**
# * Updating them by using [`tf.keras.layers.Layer.add_weight`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer#add_weight). Note that if you are not switching all layers in one go this may change auto-generated layer/variable naming for the remaining `compat.v1.layers` that are missing a `name` argument. If that is the case, you must keep the variable names for remaining `compat.v1.layers` the same by manually opening and closing a `variable_scope` corresponding to the removed `compat.v1.layer`'s generated scope name. Otherwise the paths from existing checkpoints may conflict and checkpoint loading will behave incorrectly.
#
# + id="NbixtIW-maoH"
def record_scope(scope_name):
"""Record a variable_scope to make sure future ones get incremented."""
with tf.compat.v1.variable_scope(scope_name):
pass
class PartiallyNativeKerasLayersModel(tf.keras.layers.Layer):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.conv_1 = tf.keras.layers.Conv2D(
3, 3,
kernel_regularizer="l2")
self.conv_2 = tf.keras.layers.Conv2D(
4, 4,
kernel_regularizer="l2")
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs, training=None):
with tf.compat.v1.variable_scope('model'):
out = self.conv_1(inputs)
record_scope('conv2d') # Only needed if follow-on compat.v1.layers do not pass a `name` arg
out = self.conv_2(out)
record_scope('conv2d_1') # Only needed if follow-on compat.v1.layers do not pass a `name` arg
out = tf.compat.v1.layers.conv2d(
out, 5, 5,
kernel_regularizer="l2")
return out
layer = PartiallyNativeKerasLayersModel()
layer(tf.ones(shape=(10, 10, 10, 10)))
[v.name for v in layer.weights]
# + [markdown] id="2eaPpevGs3dA"
# Saving a checkpoint out at this step after constructing the variables will make it contain ***only*** the currently-available object paths.
#
# Ensure you record the scopes of the removed `compat.v1.layers` to preserve the auto-generated weight names for the remaining `compat.v1.layers`.
# + id="EK7vtWBprObA"
weights = set(v.name for v in layer.weights)
assert 'model/conv2d_2/kernel:0' in weights
assert 'model/conv2d_2/bias:0' in weights
# + [markdown] id="DQ5-SfmWFTvY"
# 5. Repeat the above steps until you have replaced all the `compat.v1.layers` and `compat.v1.get_variable`s in your model with fully-native equivalents.
# + id="PA1d2POtnTQa"
class FullyNativeKerasLayersModel(tf.keras.layers.Layer):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.conv_1 = tf.keras.layers.Conv2D(
3, 3,
kernel_regularizer="l2")
self.conv_2 = tf.keras.layers.Conv2D(
4, 4,
kernel_regularizer="l2")
self.conv_3 = tf.keras.layers.Conv2D(
5, 5,
kernel_regularizer="l2")
def call(self, inputs, training=None):
with tf.compat.v1.variable_scope('model'):
out = self.conv_1(inputs)
out = self.conv_2(out)
out = self.conv_3(out)
return out
layer = FullyNativeKerasLayersModel()
layer(tf.ones(shape=(10, 10, 10, 10)))
[v.name for v in layer.weights]
# + [markdown] id="vZejG7rTsTb6"
# Remember to test to make sure the newly updated checkpoint still behaves as you expect. Apply the techniques described in the [validate numerical correctness guide](./validate_correctness.ipynb) at every incremental step of this process to ensure your migrated code runs correctly.
# + [markdown] id="Ewi_h-cs6n-I"
# ## Handling TF1.x to TF2 behavior changes not covered by the modeling shims
#
# The moeling shims described in this guide can make sure that variables, layers, and regularization losses created with `get_variable`, `tf.compat.v1.layers`, and `variable_scope` semantics continue to work as before when using eager execution and `tf.function`, without having to rely on collections.
#
# This does not cover ***all*** TF1.x-specific semantics that your model forward passes may be relying on. In some cases, the shims might be insufficient to get your model forward pass running in TF2 on their own. Read the [TF1.x vs TF2 behaviors guide](./effective_tf2.md) to learn more about the behavioral differences between TF1.x and TF2.
| site/en/guide/migrate/model_mapping.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Views and describes the data, before much processing.
# +
#default_exp surveyors
# -
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# %config Completer.use_jedi = False
# +
from matplotlib import pyplot as plt
import seaborn as sns
import plotnine as pn
import pandas as pd
from mizani.formatters import date_format
import random
#export
import restaurants_timeseries.core as core
# +
#export
class Logger:
def log(self, s):
print(s)
logger = Logger()
# -
# # Process reservations.
# +
def add_time_between(reservations):
reservations['time_between'] = (
(reservations['visit_datetime'] - reservations['reserve_datetime']))
reservations['total_seconds_between'] = reservations['time_between'].dt.seconds
reservations['days_between'] = reservations['time_between'].dt.days
reservations['hours_between'] = reservations['total_seconds_between'] / 3600
#export
class ReservationsSurveyor:
def __init__(self, reservations: pd.DataFrame):
self.reservations = reservations.copy()
self.reservations['visit_date'] = pd.to_datetime(self.reservations['visit_datetime'].dt.date)
add_time_between(self.reservations)
self.reservations = self.reservations[
['air_store_id', 'visit_date',
'total_seconds_between', 'days_between', 'hours_between',
'reserve_visitors']]
def get_daily_sums(self):
""" This wastes information! """
daily = (self.reservations
.groupby(['air_store_id', 'visit_date'])['reserve_visitors']
.sum()
.reset_index(name='reserve_visitors'))
return daily
# +
rs = ReservationsSurveyor(core.data['reservations'])
daily_sums = rs.get_daily_sums()
print('Reservations raw data:')
display(core.data['reservations'].head())
print('\nReservations processed data:')
display(rs.reservations.head())
print('\nDaily sums (wastes information!):')
display(daily_sums.head())
# -
# # Process visits.
# +
def get_earliest_date(dat):
return dat.query("visitors > 0")['visit_date'].min()
def get_latest_date(dat):
return dat.query("visitors > 0")['visit_date'].max()
#export
class VisitsSurveyor:
def __init__(self, visits: pd.DataFrame, make_report: bool):
"""
Does a bunch of processing and, if make_report is true, prints
a bunch of messages and dataframes and plots describing
the input dataframe.
"""
self.joined_already = False
self.make_report = make_report
self.visits = visits.copy()
if len(self.visits['air_store_id'].unique()) == 1:
logger.log(f"Input is a single store.")
self.visits.loc[:, 'day'] = range(self.visits.shape[0])
else:
logger.log(f"Input is multiple stores.")
self.visits.loc[:, 'day'] = (
self.visits
.sort_values(['air_store_id', 'visit_date'])
.groupby(['air_store_id'], sort=False)
.cumcount())
self._assign_never_visited()
self._filter_zero_periods()
self.count_visited_days()
self._get_populated_spans()
self.plot_spans()
self.visits = self.visits.sort_values(['air_store_id', 'visit_date'], ascending=True)
def attach_reservations(self, rs: ReservationsSurveyor) -> None:
"""
Attaches the daily reservation counts in `rs` to self.visits. Errors
if this function has already been called.
:note: In the future we should
avoid the brute daily sum, and use all the information in the reservations,
especially how far in advance each was made.
"""
if self.joined_already:
raise AssertionError(f"Appears attach_reservations was already called.")
logger.log("Attaching daily reservation sums.")
self.joined_already = True
pre_count = self.visits.shape[0]
self.visits = self.visits.merge(
rs.get_daily_sums(), how='left', on=['air_store_id', 'visit_date'])
post_count = self.visits.shape[0]
if pre_count != post_count:
raise AssertionError(
f"Had {pre_count} rows before join to reservations, but {post_count} after.")
def report(self, s: str) -> None:
if self.make_report:
if issubclass(type(s), core.pd.DataFrame):
display(s)
else:
print(s)
def _assign_never_visited(self) -> None:
"""
Saves the stores that were never visited, and
(if make_report is True) reports how many there were.
"""
self.store_counts = self.visits.groupby('air_store_id').visitors.sum()
self.never_visited = self.store_counts[self.store_counts == 0]
self.report("The visits data looks like this:")
self.report(self.visits.head())
self.report(" ")
self.report(f"There are {len(self.store_counts)} stores.")
self.report(f"{len(self.never_visited)} stores had no visits ever.")
def _filter_zero_periods(self) -> None:
"""
Removes records from self.visits if they fall outside the
dataset-wide earliest and latest days with positive visitors.
"""
self.daily_total_visits = (
self.visits
[['visit_date', 'visitors']]
.groupby('visit_date')
.sum()
.reset_index())
earliest_day = get_earliest_date(self.daily_total_visits)
latest_day = get_latest_date(self.daily_total_visits)
self.visits = self.visits[
(self.visits.visit_date >= earliest_day) &
(self.visits.visit_date <= latest_day)]
self.report(f"Populated data is from {earliest_day} to {latest_day}.")
def count_visited_days(self) -> None:
"""
if make_report is False this function does nothing.
Plots a histogram describing the distribution of all-time visit
counts for the restaurants.
"""
self.visited_days_counts = (
self.visits[self.visits.visitors > 0].
groupby('air_store_id')['visitors'].
count().
reset_index(name='days_visited'))
if self.make_report:
hist = (
pn.ggplot(self.visited_days_counts, pn.aes(x='days_visited')) +
pn.geom_histogram(bins=60) +
pn.theme_bw() +
pn.labs(x = "days visited", y = "count restaurants") +
pn.theme(figure_size=(13, 3)))
self.report("")
self.report("Visited days per restaurant:")
display(hist)
def _get_populated_spans(self) -> core.pd.DataFrame:
"""
Creates a dataframe field named `spans` inside this object.
Contains the store id and the earliest and latest visit dates
the each restaurant had a positive visitor count.
"""
rows = []
for store_id, df in self.visits.groupby('air_store_id'):
earliest_day = get_earliest_date(df)
latest_day = get_latest_date(df)
row = (store_id, earliest_day, latest_day)
rows.append(row)
spans = core.pd.DataFrame(
rows, columns=['air_store_id', 'earliest_visit_date', 'latest_visit_date'])
spans['length'] = spans['latest_visit_date'] - spans['earliest_visit_date']
spans.sort_values('length', inplace=True)
spans['air_store_id'] = core.pd.Categorical(spans.air_store_id,
categories=core.pd.unique(spans.air_store_id))
self.spans = spans
def plot_spans(self):
"""
if make_report is False this function does nothing.
Displays a plot where there is one line per
restaurant. The length of the line is the number of days in the contiguous period
between
1) the earliest date the restaurant had a non-zero amount of visitors.
2) the latest date the restaurant had a non-zero amount of visitors.
"""
if self.make_report:
x = 'air_store_id'
spans_plot = (
pn.ggplot(self.spans, pn.aes(x=x, xend=x,
y='earliest_visit_date', yend='latest_visit_date')) +
pn.geom_segment(color='gray') +
pn.theme_bw() +
pn.scale_y_date(breaks="1 month", labels=date_format("%b %Y")) +
pn.theme(figure_size=(30, 4), axis_text_x=pn.element_blank(),
axis_ticks_minor_x=pn.element_blank(), axis_ticks_major_x=pn.element_blank(),
axis_text_y=pn.element_text(size=10),
panel_grid=pn.element_blank()))
# takes long, so comment out during development
#display(spans_plot)
# -
vs = VisitsSurveyor(core.data['visits'], True)
N_MOST_OPEN = 10
N_LEAST_OPEN = 10
vs = VisitsSurveyor(core.data['visits'], False)
#export
random.seed(40)
most_open_stores = list(
vs.visited_days_counts
.sort_values(by='days_visited', ascending=False)
.air_store_id[0:N_MOST_OPEN])
least_open_stores = list(
vs.visited_days_counts
.sort_values(by='days_visited', ascending=True)
.air_store_id[0:N_LEAST_OPEN])
# # Describing visits and reservations together.
vs.attach_reservations(rs)
vs.visits[vs.visits['reserve_visitors'].isnull()]
from nbdev.export import *
notebook2script()
| 01_view_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 298} colab_type="code" executionInfo={"elapsed": 4969, "status": "ok", "timestamp": 1583180675039, "user": {"displayName": "<NAME>\u0144ski", "photoUrl": "", "userId": "14842700098904817277"}, "user_tz": -60} id="eFVCNaj86fM8" outputId="6017a628-a528-4d3f-b7ec-f441d6166e00"
# # !pip install eli5
# + colab={} colab_type="code" id="5kCTfaRv6v_a"
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_score
import eli5
from eli5.sklearn import PermutationImportance
from ast import literal_eval
from tqdm import tqdm_notebook
# -
# ls
# For colab
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 647, "status": "ok", "timestamp": 1583181219911, "user": {"displayName": "<NAME>\u0144ski", "photoUrl": "", "userId": "14842700098904817277"}, "user_tz": -60} id="Ft9o2xN18uv-" outputId="fa078392-2388-47f5-a2d3-ca80a636478f"
# # cd "/content/drive/My Drive/Colab Notebooks/dw_matrix"
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1749, "status": "ok", "timestamp": 1583181279072, "user": {"displayName": "<NAME>0144ski", "photoUrl": "", "userId": "14842700098904817277"}, "user_tz": -60} id="TQ4HyXWL9ET1" outputId="0584c981-de2f-40d6-d5d5-481ddf919fe8"
# ls data
# + colab={} colab_type="code" id="R_laTp509SaD"
df = pd.read_csv('data/shoes_prices_filter.csv', low_memory=False)
# + colab={"base_uri": "https://localhost:8080/", "height": 245} colab_type="code" executionInfo={"elapsed": 559, "status": "ok", "timestamp": 1583181421799, "user": {"displayName": "<NAME>0144ski", "photoUrl": "", "userId": "14842700098904817277"}, "user_tz": -60} id="ddEPPOlC9f_S" outputId="9f89cca2-c952-4bf1-c1be-5e7485e60917"
df.columns
# + colab={} colab_type="code" id="TJb71o3O91nL"
def run_model(feats, model = DecisionTreeRegressor(max_depth=5)):
X = df[ feats].values
y = df['prices_amountmin'].values
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 587, "status": "ok", "timestamp": 1583182140938, "user": {"displayName": "<NAME>\u0144ski", "photoUrl": "", "userId": "14842700098904817277"}, "user_tz": -60} id="dLam13IF-v-o" outputId="fd6ae03b-06f1-4646-f68c-bf80d934b304"
df['brand_cat'] = df['brand'].map(lambda x: str(x).lower()).factorize()[0]
# -
run_model(['brand_cat'])
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 4273, "status": "ok", "timestamp": 1583182153613, "user": {"displayName": "<NAME>\u0144ski", "photoUrl": "", "userId": "14842700098904817277"}, "user_tz": -60} id="oFmLp936_Fhb" outputId="041042b0-3f83-4437-ffc3-9850be59d72d"
model = RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0)
run_model(['brand_cat'], model)
# + colab={"base_uri": "https://localhost:8080/", "height": 521} colab_type="code" executionInfo={"elapsed": 1043, "status": "ok", "timestamp": 1583181981703, "user": {"displayName": "<NAME>\u0144ski", "photoUrl": "", "userId": "14842700098904817277"}, "user_tz": -60} id="WToRSDab_ow7" outputId="13a3780c-b7cc-4f07-ea05-35f78ba368ab"
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 141} colab_type="code" executionInfo={"elapsed": 475, "status": "ok", "timestamp": 1583182281545, "user": {"displayName": "<NAME>0144ski", "photoUrl": "", "userId": "14842700098904817277"}, "user_tz": -60} id="4Wn6Scur_-Lw" outputId="a6d33b93-4337-4173-a914-e0e2f1223599"
df.features.head().values
# -
str_dict = '[{"key":"Gender","value":["Men"]},{"key":"Shoe Size","value":["M"]},{"key":"Shoe Category","value":["Men\'s Shoes"]},{"key":"Color","value":["Multicolor"]},{"key":"Manufacturer Part Number","value":["8190-W-NAVY-7.5"]},{"key":"Brand","value":["Josmo"]}]'
literal_eval(str_dict)[0]['value'][0]
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 518, "status": "ok", "timestamp": 1583182596685, "user": {"displayName": "<NAME>\u0144ski", "photoUrl": "", "userId": "14842700098904817277"}, "user_tz": -60} id="-bH6wTCTBAxi" outputId="d430cb4c-d5b2-47d2-f18d-4556f18ac9cc"
str_dict = '[{"key":"Gender","value":["Men"]},{"key":"Shoe Size","value":["M"]},{"key":"Shoe Category","value":["Men\'s Shoes"]},{"key":"Color","value":["Multicolor"]},{"key":"Manufacturer Part Number","value":["8190-W-NAVY-7.5"]},{"key":"Brand","value":["Josmo"]}]'
literal_eval(str_dict)[0]['key'][2]
# + colab={"base_uri": "https://localhost:8080/", "height": 132} colab_type="code" executionInfo={"elapsed": 1256, "status": "error", "timestamp": 1583183482700, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "14842700098904817277"}, "user_tz": -60} id="UfxUkWiVEuLj" outputId="a9e6f248-27d4-4c5b-fc28-1683adeffaca"
str_dict = '{"value":["17\\""]}'
literal_eval(str_dict.replace('\\"','"'))
# + colab={} colab_type="code" id="7Q47d9iAB0EN"
def parse_features(x):
output_dict = {}
if str(x) == 'nan': return output_dict
features = literal_eval(x.replace('\\"','"'))
for item in features:
key = item['key'].lower().strip()
value = item['value'][0].lower().strip()
output_dict[key] = value
return output_dict
df['features_parsed'] = df['features'].map(parse_features)
# + colab={"base_uri": "https://localhost:8080/", "height": 121} colab_type="code" executionInfo={"elapsed": 596, "status": "ok", "timestamp": 1583184054436, "user": {"displayName": "<NAME>\u0144ski", "photoUrl": "", "userId": "14842700098904817277"}, "user_tz": -60} id="AhHYDeuNDwPe" outputId="32efe8d3-94c7-4662-b088-c5a7853b10b6"
df['features_parsed'].head()
# -
keys = set()
df['features_parsed'].map( lambda x: keys.update(x.keys()) )
len(keys)
# +
def get_name_feat(key):
return 'feat_' + key
for key in tqdm_notebook(keys):
df[get_name_feat(key)] = df.features_parsed.map(lambda feats: feats[key] if key in feats else np.nan)
# -
df.columns
keys_stat = {}
for key in keys:
keys_stat[key] = df[False == df[get_name_feat(key)].isnull()].shape[0] / df.shape[0] * 100
{ k:v for k,v in keys_stat.items() if v > 30}
# +
df['feat_brand_cat'] = df['feat_brand'].factorize()[0]
df['feat_color_cat'] = df['feat_color'].factorize()[0]
df['feat_gender_cat'] = df['feat_gender'].factorize()[0]
df['feat_material_cat'] = df['feat_material'].factorize()[0]
df['feat_sport_cat'] = df['feat_sport'].factorize()[0]
df['feat_style_cat'] = df['feat_style'].factorize()[0]
for key in keys:
df[get_name_feat(key) + '_cat'] = df[get_name_feat(key)].factorize()[0]
# -
df['brand'] = df['brand'].map(lambda x: str(x).lower())
df[df.brand == df.feat_brand][['brand', 'feat_brand']]
feats = ['brand_cat',
'feat_brand_cat',
'feat_gender_cat',
'feat_material_cat',
'feat_style_cat',
'feat_sport_cat',
'feat_metal type_cat',
'feat_shape_cat']
model = RandomForestRegressor(max_depth=5, n_estimators=100)
run_model(feats ,model)
feats_cat = [x for x in df.columns if 'cat' in x]
feats_cat
# +
# feats += feats_cat
# feats = list(set(feats))
# -
model = RandomForestRegressor(max_depth=5, n_estimators=100)
run_model(feats ,model)
# +
X = df[feats].values
y = df['prices_amountmin'].values
m = RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0)
m.fit(X, y)
perm = PermutationImportance(m, random_state=1).fit(X, y);
eli5.show_weights(perm, feature_names = feats)
# -
df['brand'].value_counts(normalize=True)
df[df['brand'] == 'nike'].features_parsed.sample().values
df['feat_age group'].value_counts()
| .ipynb_checkpoints/day5-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="6JDGZWzSRWWz"
# ### **Drowsiness detection (Model creation & Training Part)**
# + [markdown] id="CeyuF4eBRTDr"
# Extract the dataset file
# + id="raEn38ig3DRw"
from zipfile import ZipFile
a = ZipFile('/content/drive/MyDrive/archive (1).zip')
a.extractall()
# + [markdown] id="4Ym9wae4SF-O"
# Import Libraries
# + id="-YvU8BfS5kid"
import os
from keras.preprocessing import image
import matplotlib.pyplot as plt
import numpy as np
from keras.utils.np_utils import to_categorical
import random,shutil
from keras.models import Sequential
from keras.layers import Dropout,Conv2D,Flatten,Dense, MaxPooling2D, BatchNormalization
from keras.models import load_model
from keras.preprocessing.image import ImageDataGenerator
import cv2
# + [markdown] id="ZktOk4PXSKU2"
# Create a Data generator function which includes data agumentation and preprocessing
# + colab={"base_uri": "https://localhost:8080/"} id="jD-vkCzo8WJk" outputId="19374f75-ed7a-4501-f1f2-7eb7369abcbc"
def Datagenerator(dir, gen=image.ImageDataGenerator(rescale=1./255, zoom_range=0.2, horizontal_flip=True, rotation_range=30), shuffle=True,batch_size=1,target_size=(40,40),class_mode='categorical' ):
return gen.flow_from_directory(dir,batch_size=batch_size,shuffle=shuffle,color_mode='grayscale',class_mode=class_mode,target_size=target_size)
path_train = '/content/train/'
train_batch= Datagenerator(path_train,shuffle=True, batch_size=32,target_size=(40,40))
# + [markdown] id="yu_iZIMeSe0I"
# Modelling a sequential model
# + colab={"base_uri": "https://localhost:8080/"} id="aFjdekZQ8XiH" outputId="4506c9a2-eec9-470c-fb9f-b75ddb446c6e"
model = Sequential([
Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(40,40,1)),
MaxPooling2D(pool_size=(1,1)),
Conv2D(32,(3,3),activation='relu'),
MaxPooling2D(pool_size=(1,1)),
Conv2D(64, (3, 3), activation='relu'),
MaxPooling2D(pool_size=(1,1)),
Dropout(0.25),
Flatten(),
Dense(128, activation='relu'),
Dropout(0.5),
Dense(2, activation='softmax')
])
model.summary()
# + [markdown] id="RzrnGFzuS7hl"
# Training a model
# + colab={"base_uri": "https://localhost:8080/"} id="99_ynYRR8XkQ" outputId="985a3a82-3af8-4cea-949b-dd9da0e00a9b"
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
# callback fuction for saving model checkpoint
# model_checkpoint = ModelCheckpoint("eyes_model",
# monitor='val_my_iou_metric',
# mode = 'max',
# save_best_only=True,
# verbose=1)
# Learning Rate annealer
reduce_lr = ReduceLROnPlateau(factor=0.1,
patience=4,
min_lr=0.00001,
verbose=1,
monitor = 'loss')
# Compiling the model
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
# fitting training data
model.fit_generator(train_batch,epochs=12,steps_per_epoch=len(train_batch.classes)//32, callbacks=[reduce_lr] )
# Saving model
model.save('/content/drive/MyDrive/ddv0.5.h5')
# + [markdown] id="cSP6y-ItT8c4"
# Predictions
# + id="czYjb9vs8Xn3" colab={"base_uri": "https://localhost:8080/", "height": 268} outputId="b139b8b2-af37-4fbf-ed18-16801721c0dc"
import tensorflow as tf
# loading Model
model = tf.keras.models.load_model(r"/content/drive/MyDrive/ddv0.5.h5")
file = '/content/train/Open_Eyes/s0001_02337_0_0_1_0_0_01.png'
# defining Image size
IMG_SIZE = 40
# Visualizing Image to be predicted
import matplotlib.pyplot as plt
plt.imshow(plt.imread(file))
# processing image to feed the model for predicting the class
img_array = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
img_array = img_array / 255
resized_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
resized_array = resized_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1)
# + colab={"base_uri": "https://localhost:8080/"} id="m76Ro8-iZ3bh" outputId="e2b7f79d-de51-421c-daca-398b07218897"
#This is the predicted Class and it looks that model perfomer pretty well
prediction = model.predict([resized_array])
np.argmax(prediction)
# + id="kllxVdbD8XzQ" colab={"base_uri": "https://localhost:8080/", "height": 285} outputId="940e1f7c-ed9a-4944-d499-6adc757d06f4"
file = '/content/train/Closed_Eyes/s0001_00004_0_0_0_0_0_01.png'
# defining Image size
IMG_SIZE = 40
# Visualizing Image to be predicted
import matplotlib.pyplot as plt
plt.imshow(plt.imread(file))
# processing image to feed the model for predicting the class
img_array = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
img_array = img_array / 255
resized_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
resized_array = resized_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1)
prediction = model.predict([resized_array])
np.argmax(prediction)
# + id="a3SkwCSW8X12"
| Training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
message='Hello Anil'
message
suffix=".Ponnam"
suffix
full_pharse= message+suffix
full_pharse
full_pharse.capitalize
# when we give any function with out () letters then python thinks as user is asking something about the internal known items in the above case python says us as it is a function
full_pharse.capitalize()
# Functions are like methods in any other programing language. in python functions wiht end up with () which can take input as well few of the string functions can be observered as follows.
full_pharse.split()
# split is a in built function in python which will slipt the string into an array of String. it can take input as well , if we provide any input to this string it will split the string based on the given input else by default it will use whitespace to split the string
full_pharse.split('a')
# Slipt function input is case sensitive, it will look at the input what we provided and it will slipt the string according to the given input. even though we have upper case of the character it will not consider that character. example is given above
full_pharse.split('r')
# if we provide any character which is not avilable in the string to split method it will not throw any error, it will consider whole string as single unit , example is given as above
2+3
'2'+'3'
full_pharse * 10
# In Python we can contact 2 strings using + operator and we can mulitiply the strings, if we use * with any String it will print the or multiply the strng with given count value
x= 'sammy'
x[2:]
| StringCheck.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="zIjXsyE2vd53" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1740} outputId="6609d320-7456-416b-8efc-b73761caa2f0" executionInfo={"status": "ok", "timestamp": 1551375958310, "user_tz": 480, "elapsed": 20762, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "17292478729201017439"}}
# !pip install six numpy scipy Pillow matplotlib scikit-image opencv-python imageio Shapely
# !pip install imgaug
# !pip install --upgrade scikit-image
# + id="3LYMxywDJbf6" colab_type="code" outputId="429b8751-4472-4ada-aa47-2579c12f35c4" executionInfo={"status": "error", "timestamp": 1551375976424, "user_tz": 480, "elapsed": 8649, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "17292478729201017439"}} colab={"base_uri": "https://localhost:8080/", "height": 1128}
from keras.preprocessing import image
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import numpy as np
import h5py
from imgaug import augmenters as iaa
import imgaug as ia
import math
import tensorflow as tf
from PIL import Image
from skimage import color
import cv2
from google.colab import drive
drive.mount('/content/gdrive')
# !ls "/content/gdrive/My Drive/Colab/datasets/labels"
resizedWidth = 384
resizedShortRatio = 384 / 700
resizedLongRatio = 384 / 1000
gridCount = 12
# + id="1fbNSkYlKLpv" colab_type="code" colab={}
def getImageData():
imagePixelData = np.zeros((200, resizedWidth, resizedWidth, 3))
imageLocationData = np.zeros((200, gridCount, gridCount, 8))
groups = ["G", "I", "H", "J"]
count = 0
# separate each image to 20 grids, since there is low chance that there will be multiple hairs in one grid, no anchor box is necessary
# for each grid
# 1. boolean for whether there is an object
# 2. bx, by, bw, bh
# 3. classes (3 for hairs)
# therefore the output size should be 20 x 20 x 8
for groupName in groups:
labelPath = "/content/gdrive/My Drive/Colab/datasets/labels/image_boxes_" + groupName + ".txt"
with open(labelPath) as file:
boundingBoxLocations = eval(file.read())
imagePath = "/content/gdrive/My Drive/Colab/datasets/" + groupName + "/" + groupName + "_data/"
for pictureID, boxLocation in boundingBoxLocations.items():
# picture = image.load_img(imagePath + pictureID + ".png", target_size=(resizedWidth,resizedWidth), color_mode="grayscale")
picture = image.load_img(imagePath + pictureID + ".png", target_size=(resizedWidth,resizedWidth))
pixelVal = image.img_to_array(picture) / 255.0
imagePixelData[count] = pixelVal
gridData = np.zeros((gridCount, gridCount, 8))
gridSize = resizedWidth // gridCount
for eachBox in boxLocation:
eachBox[0] = eachBox[0] * resizedLongRatio
eachBox[2] = eachBox[2] * resizedLongRatio
## Images in G are in different dimensions than other images
if groupName == 'G':
eachBox[1] = eachBox[1] * resizedLongRatio
eachBox[3] = eachBox[3] * resizedLongRatio
else:
eachBox[1] = eachBox[1] * resizedShortRatio
eachBox[3] = eachBox[3] * resizedShortRatio
gridX = int(eachBox[0] // gridSize)
gridY = int(eachBox[1] // gridSize)
gridCenterX = eachBox[0] % gridSize / gridSize
gridCenterY = eachBox[1] % gridSize / gridSize
# width and height according to grid box ratio
# gridBoxWidth = eachBox[2] / gridSize
# gridBoxHeight = eachBox[3] / gridSize
# boolAndPos = np.array([1, gridCenterX, gridCenterY, gridBoxWidth, gridBoxHeight])
# width and height according to image ratio
gridBoxWidth = eachBox[2] / resizedWidth
gridBoxHeight = eachBox[3] / resizedWidth
boolAndPos = np.array([1, gridCenterX, gridCenterY, gridBoxWidth, gridBoxHeight])
classes = np.eye(3, dtype=np.float32)[eachBox[4]]
gridData[gridY][gridX] = np.append(boolAndPos,classes)
imageLocationData[count] = gridData
count += 1
if count % 50 == 0:
print("count =", count)
return imagePixelData, imageLocationData
# + id="6x3bS7HRWYWu" colab_type="code" colab={}
def getAugmentedData(augmentations):
print("Begin augmenting")
augmentedImagePixelData = np.zeros((200, resizedWidth, resizedWidth, 3))
augmentedImageLocationData = np.zeros((200, gridCount, gridCount, 8))
groups = ["G", "I", "H", "J"]
count = 0
for groupName in groups:
labelPath = "/content/gdrive/My Drive/Colab/datasets/labels/image_boxes_" + groupName + ".txt"
with open(labelPath) as file:
boundingBoxLocations = eval(file.read())
imagePath = "/content/gdrive/My Drive/Colab/datasets/" + groupName + "/" + groupName + "_data/"
for pictureID, boxLocations in boundingBoxLocations.items():
# picture = image.load_img(imagePath + pictureID + ".png", target_size=(resizedWidth,resizedWidth), color_mode="grayscale")
picture = image.load_img(imagePath + pictureID + ".png", target_size=(resizedWidth,resizedWidth))
pixelVal = image.img_to_array(picture) / 255.0
boxLocations = boxLocations
augmentedBox = []
for i in range(len(boxLocations)):
centerX = boxLocations[i][0] * resizedLongRatio
width = boxLocations[i][2] * resizedLongRatio
## Images in G are in different dimensions than other images
centerY = 0
height = 0
if groupName == 'G':
centerY = boxLocations[i][1] * resizedLongRatio
height = boxLocations[i][3] * resizedLongRatio
else:
centerY = boxLocations[i][1] * resizedShortRatio
height = boxLocations[i][3] * resizedShortRatio
topX = centerX - width / 2
topY = centerY - height / 2
botX = centerX + width / 2
botY = centerY + height / 2
augmentedBox.append(ia.BoundingBox(x1 = topX, y1 = topY, x2 = botX, y2 = botY))
bbs = ia.BoundingBoxesOnImage(augmentedBox, shape=(resizedWidth, resizedWidth))
seq = iaa.Sequential(augmentations)
seq_det = seq.to_deterministic()
augmentedImage = seq_det.augment_image(pixelVal)
augmentedBBs = seq_det.augment_bounding_boxes(bbs).remove_out_of_image().clip_out_of_image()
augmentedImagePixelData[count] = augmentedImage
gridData = np.zeros((gridCount, gridCount, 8))
gridSize = resizedWidth // gridCount
for i in range(len(augmentedBBs.bounding_boxes)):
augmentedBox = augmentedBBs.bounding_boxes[i]
augmentedTopX = augmentedBox.x1
augmentedTopY = augmentedBox.y1
augmentedBotX = augmentedBox.x2
augmentedBotY = augmentedBox.y2
augmentedWidth = math.fabs(augmentedBotX - augmentedTopX)
augmentedHeight = math.fabs(augmentedBotY - augmentedTopY)
augmentedCenterX = augmentedTopX + augmentedWidth / 2
augmentedCenterY = augmentedTopY + augmentedHeight / 2
gridX = int(augmentedCenterX // gridSize)
gridY = int(augmentedCenterY // gridSize)
if (gridX < gridCount and gridY < gridCount):
gridCenterX = augmentedCenterX % gridSize / gridSize
gridCenterY = augmentedCenterY % gridSize / gridSize
# gridBoxWidth = augmentedWidth / gridSize
# gridBoxHeight = augmentedHeight / gridSize
# boolAndPos = np.array([1, gridCenterX, gridCenterY, gridBoxWidth, gridBoxHeight])
gridBoxWidth = augmentedWidth / resizedWidth
gridBoxHeight = augmentedHeight / resizedWidth
boolAndPos = np.array([1, gridCenterX, gridCenterY, gridBoxWidth, gridBoxHeight])
classes = np.eye(3, dtype=np.float32)[boxLocations[i][4]]
gridData[gridY][gridX] = np.append(boolAndPos,classes)
augmentedImageLocationData[count] = gridData
count += 1
if count % 50 == 0:
print("count =", count)
return augmentedImagePixelData, augmentedImageLocationData
# + id="l9r5gKw3mpP_" colab_type="code" colab={}
augmentation1 = [
iaa.Multiply((1.2, 1.5)),
iaa.Affine(scale = {
"x" : (0.5, 1.0),
"y" : (0.5, 1.0)
},
translate_percent = {
"x" : (-0.2, 0.2),
"y" : (-0.2, 0.2)
},
shear = (-10, 10)
),
iaa.Crop(px = (0, 16)), # crop images from each side by 0 to 16px
iaa.ContrastNormalization((0.8, 1.25)),
]
augmentation2 = [
iaa.Fliplr(0.8), # horizontal flip 70%
iaa.ContrastNormalization((0.75, 1.5)),
iaa.Crop(px = (0, 16)), # crop images from each side by 0 to 16px
iaa.GaussianBlur(sigma=(0, 2.0)), # blur images with a sigma of 0 to 3
iaa.Affine(
translate_px = {"x" : 10, "y": -10},
scale = (0.7, 0.6)
)
]
augmentation3 = [
iaa.Flipud(0.7), # vertical flip by 70%
iaa.Fliplr(0.4),
iaa.ContrastNormalization((0.8, 1.25)),
iaa.Affine(
translate_px = {"x" : 10, "y": -10},
scale = (0.7, 0.6)
),
iaa.AverageBlur(k=(1,3)),
iaa.Crop(px = (10, 18))
]
# + id="qGq9bWFts37d" colab_type="code" colab={}
def writePreprocessedData(imageData, locationData):
hf = h5py.File("/content/gdrive/My Drive/Colab/datasets/hairData.h5", "w")
hf.create_dataset("hair_data", data=imageData)
hf.create_dataset("hair_target", data=locationData)
hf.close()
return
def loadDatasets():
hf = h5py.File("/content/gdrive/My Drive/Colab/datasets/hairData.h5", "r")
pixel = np.array(hf.get("hair_data"))
location = np.array(hf.get("hair_target"))
hf.close()
return pixel, location
# + id="kEziLh7rlz-B" colab_type="code" outputId="6930089f-fa95-4f55-cce1-389c963bc3f6" executionInfo={"status": "ok", "timestamp": 1551346727189, "user_tz": 480, "elapsed": 5930, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "17292478729201017439"}} colab={"base_uri": "https://localhost:8080/", "height": 124}
imagePixelData, imageLocationData = getImageData()
print("imagePixelData shape =", imagePixelData.shape)
print("imageLocationData shape =", imageLocationData.shape)
# + id="kUqHT2ntlvy7" colab_type="code" outputId="dbf0061a-ed47-4f06-939f-580b3e40eaf1" executionInfo={"status": "ok", "timestamp": 1551346732566, "user_tz": 480, "elapsed": 11006, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "17292478729201017439"}} colab={"base_uri": "https://localhost:8080/", "height": 141}
augmentedImagePixelData, augmentedImageLocationData = getAugmentedData(augmentation1)
print("augmentedImagePixelData shape =", augmentedImagePixelData.shape)
print("augmentedImageLocationData shape =", augmentedImageLocationData.shape)
# + id="B2-e-flDfDXL" colab_type="code" outputId="8fad31a8-ca2f-48ee-83ab-af010aae21bf" executionInfo={"status": "ok", "timestamp": 1551346737969, "user_tz": 480, "elapsed": 16006, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "17292478729201017439"}} colab={"base_uri": "https://localhost:8080/", "height": 141}
augmentedImagePixelData2, augmentedImageLocationData2 = getAugmentedData(augmentation2)
print("augmentedImagePixelData2 shape =", augmentedImagePixelData2.shape)
print("augmentedImageLocationData2 shape =", augmentedImageLocationData2.shape)
# + id="DApHywnOgm_N" colab_type="code" outputId="61089f8d-8496-422a-8265-f0af3ce5d069" executionInfo={"status": "ok", "timestamp": 1551346743119, "user_tz": 480, "elapsed": 19883, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "17292478729201017439"}} colab={"base_uri": "https://localhost:8080/", "height": 141}
augmentedImagePixelData3, augmentedImageLocationData3 = getAugmentedData(augmentation3)
print("augmentedImagePixelData3 shape =", augmentedImagePixelData3.shape)
print("augmentedImageLocationData3 shape =", augmentedImageLocationData3.shape)
# + id="ajeVmI-obBzZ" colab_type="code" outputId="efb062ff-1f9a-49e3-a80b-4597267dd5cf" executionInfo={"status": "ok", "timestamp": 1551346747665, "user_tz": 480, "elapsed": 22407, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "17292478729201017439"}} colab={"base_uri": "https://localhost:8080/", "height": 52}
newPixelData = np.zeros((800, resizedWidth, resizedWidth, 3))
newLocationData = np.zeros((800, gridCount, gridCount, 8))
newPixelData[:200] = imagePixelData
newPixelData[200:400] = augmentedImagePixelData
newPixelData[400:600] = augmentedImagePixelData2
newPixelData[600:800] = augmentedImagePixelData3
newLocationData[:200] = imageLocationData
newLocationData[200:400] = augmentedImageLocationData
newLocationData[400:600] = augmentedImageLocationData2
newLocationData[600:800] = augmentedImageLocationData3
indices = np.random.permutation(800)
newPixelData = newPixelData[indices]
newLocationData = newLocationData[indices]
print(newPixelData.shape)
print(newLocationData.shape)
# + id="TtGTwsyKpWhE" colab_type="code" colab={}
writePreprocessedData(newPixelData, newLocationData)
# + id="aQtQljI57RwE" colab_type="code" outputId="3d54fd80-90af-4227-9997-d43c66d861db" executionInfo={"status": "ok", "timestamp": 1551346814665, "user_tz": 480, "elapsed": 17285, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "17292478729201017439"}} colab={"base_uri": "https://localhost:8080/", "height": 52}
newPixelData, newLocationData = loadDatasets()
print(newPixelData.shape)
print(newLocationData.shape)
# + id="ja4WcTicxRKu" colab_type="code" outputId="7170568e-4263-48f0-9a21-5fb14e4e084e" executionInfo={"status": "ok", "timestamp": 1551346173043, "user_tz": 480, "elapsed": 2905, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "17292478729201017439"}} colab={"base_uri": "https://localhost:8080/", "height": 2346}
def displayBoundingBoxOnImage(picture, curr):
# Create figure and axes
fig,ax = plt.subplots(figsize=(gridCount,10))
# squeeze dimension from (width, height, color) -> (width, height)
# squeezed = np.squeeze(picture, axis = 2)
# Display the image
# ax.imshow(squeezed)
ax.imshow(picture)
gridSize = resizedWidth // gridCount
for ygrid in range(gridCount):
for xgrid in range(gridCount):
if curr[ygrid][xgrid][0] != 0:
offsetX = xgrid * gridSize
offsetY = ygrid * gridSize
centerX = curr[ygrid][xgrid][1] * gridSize + offsetX
centerY = curr[ygrid][xgrid][2] * gridSize + offsetY
# width = curr[ygrid][xgrid][3] * gridSize
# height = curr[ygrid][xgrid][4] * gridSize
width = curr[ygrid][xgrid][3] * resizedWidth
height = curr[ygrid][xgrid][4] * resizedWidth
topX = centerX - width / 2
topY = centerY - height / 2
# Create a Rectangle patch
rect = patches.Rectangle((topX,topY),width,height,linewidth=1,edgecolor='r',facecolor='none')
# Add the patch to the Axes
ax.add_patch(rect)
plt.show()
displayBoundingBoxOnImage(imagePixelData[100], imageLocationData[100])
displayBoundingBoxOnImage(augmentedImagePixelData[0], augmentedImageLocationData[0])
displayBoundingBoxOnImage(augmentedImagePixelData2[0], augmentedImageLocationData2[0])
displayBoundingBoxOnImage(augmentedImagePixelData3[0], augmentedImageLocationData3[0])
# test = color.rgb2grey(imagePixelData[0])
# print(test.shape)
# test = cv2.imread("/content/gdrive/My Drive/Colab/datasets/G/G_data/0020.png", 0)
# print(test.shape)
# test = cv2.cvtColor(imagePixelData[0], cv2.COLOR_BGR2GRAY)
# print(test.shape)
# displayBoundingBoxOnImage(test, imageLocationData[0])
# np.squeeze(imagePixelData[0], axis = 2).shape
# + id="PnA9e2yMNfwV" colab_type="code" colab={}
# + id="C1LW5qxyOjwQ" colab_type="code" colab={}
# + id="LrY3jzHtGFBB" colab_type="code" colab={}
| YOLO model/Create Datasets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_pytorch_p36)
# language: python
# name: conda_pytorch_p36
# ---
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import preprocessor as p
import re
import json
import wordninja
import random
import csv
import numpy as np
import pandas as pd
from torch.utils.data import TensorDataset, DataLoader
from sklearn.metrics import precision_recall_fscore_support
from transformers import AutoModel, BertForMaskedLM, AdamW
from transformers import BertTokenizer, BertModel, AutoTokenizer, BertweetTokenizer
# + code_folding=[]
# Data Loading
def load_data(filename):
filename = [filename]
concat_text = pd.DataFrame()
raw_text = pd.read_csv(filename[0],usecols=[0], encoding='ISO-8859-1')
raw_label = pd.read_csv(filename[0],usecols=[2], encoding='ISO-8859-1')
raw_target = pd.read_csv(filename[0],usecols=[1], encoding='ISO-8859-1')
label = pd.DataFrame.replace(raw_label,['FAVOR','NONE','AGAINST'], [1,2,0])
concat_text = pd.concat([raw_text, label, raw_target], axis=1)
concat_text = concat_text[concat_text.Stance != 2]
return(concat_text)
# + code_folding=[]
# Data Cleaning
def data_clean(strings, norm_dict):
p.set_options(p.OPT.URL,p.OPT.EMOJI,p.OPT.RESERVED)
clean_data = p.clean(strings) # using lib to clean URL, emoji...
clean_data = re.sub(r"#SemST", "", clean_data)
clean_data = re.findall(r"[A-Za-z#@]+|[,.!?&/\<>=$]|[0-9]+",clean_data)
clean_data = [[x.lower()] for x in clean_data]
for i in range(len(clean_data)):
if clean_data[i][0] in norm_dict.keys():
clean_data[i][0] = norm_dict[clean_data[i][0]]
continue
if clean_data[i][0].startswith("#") or clean_data[i][0].startswith("@"):
clean_data[i] = wordninja.split(clean_data[i][0]) # split compound hashtags
clean_data = [j for i in clean_data for j in i]
return clean_data
# + code_folding=[]
# Clean All Data
def clean_all(filename, norm_dict):
concat_text = load_data(filename)
raw_data = concat_text['Tweet'].values.tolist()
label = concat_text['Stance'].values.tolist()
x_target = concat_text['Target'].values.tolist()
clean_data = [None for _ in range(len(raw_data))]
for i in range(len(raw_data)):
clean_data[i] = data_clean(raw_data[i], norm_dict)
x_target[i] = data_clean(x_target[i], norm_dict)
return clean_data,label,x_target
# +
# Tokenization
def convert_data_to_ids(tokenizer, target, text):
input_ids, seg_ids, attention_masks, sent_len = [], [], [], []
for tar, sent in zip(target, text):
encoded_dict = tokenizer.encode_plus(
' '.join(tar), # Target to encode
' '.join(sent), # Sentence to encode
add_special_tokens = True, # Add '[CLS]' and '[SEP]'
max_length = 128, # Pad & truncate all sentences
padding = 'max_length',
return_attention_mask = True, # Construct attention masks
)
# Add the encoded sentence to the list.
input_ids.append(encoded_dict['input_ids'])
seg_ids.append(encoded_dict['token_type_ids'])
attention_masks.append(encoded_dict['attention_mask'])
sent_len.append(sum(encoded_dict['attention_mask']))
return input_ids, seg_ids, attention_masks, sent_len
def data_helper_bert(x_train_all,x_val_all,x_test_all,model_select):
print('Loading data')
x_train,y_train,x_train_target = x_train_all[0],x_train_all[1],x_train_all[2]
x_val,y_val,x_val_target = x_val_all[0],x_val_all[1],x_val_all[2]
x_test,y_test,x_test_target = x_test_all[0],x_test_all[1],x_test_all[2]
print("Length of x_train: %d, the sum is: %d"%(len(x_train), sum(y_train)))
print("Length of x_val: %d, the sum is: %d"%(len(x_val), sum(y_val)))
print("Length of x_test: %d, the sum is: %d"%(len(x_test), sum(y_test)))
# get the tokenizer
if model_select == 'Bertweet':
tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)
elif model_select == 'Bert':
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
# tokenization
x_train_input_ids, x_train_seg_ids, x_train_atten_masks, x_train_len = \
convert_data_to_ids(tokenizer, x_train_target, x_train)
x_val_input_ids, x_val_seg_ids, x_val_atten_masks, x_val_len = \
convert_data_to_ids(tokenizer, x_val_target, x_val)
x_test_input_ids, x_test_seg_ids, x_test_atten_masks, x_test_len = \
convert_data_to_ids(tokenizer, x_test_target, x_test)
# print(x_test_input_ids[0])
x_train_all = [x_train_input_ids,x_train_seg_ids,x_train_atten_masks,y_train,x_train_len]
x_val_all = [x_val_input_ids,x_val_seg_ids,x_val_atten_masks,y_val,x_val_len]
x_test_all = [x_test_input_ids,x_test_seg_ids,x_test_atten_masks,y_test,x_test_len]
return x_train_all,x_val_all,x_test_all
# + code_folding=[]
# BERT/BERTweet
class stance_classifier(nn.Module):
def __init__(self,num_labels,model_select):
super(stance_classifier, self).__init__()
self.dropout = nn.Dropout(0.)
self.relu = nn.ReLU()
self.tanh = nn.Tanh()
if model_select == 'Bertweet':
self.bert = AutoModel.from_pretrained("vinai/bertweet-base")
elif model_select == 'Bert':
self.bert = BertModel.from_pretrained("bert-base-uncased")
self.linear = nn.Linear(self.bert.config.hidden_size, self.bert.config.hidden_size)
self.out = nn.Linear(self.bert.config.hidden_size, num_labels)
def forward(self, x_input_ids, x_seg_ids, x_atten_masks, x_len):
last_hidden = self.bert(input_ids=x_input_ids, \
attention_mask=x_atten_masks, token_type_ids=x_seg_ids, \
)
query = last_hidden[0][:,0]
query = self.dropout(query)
linear = self.relu(self.linear(query))
out = self.out(linear)
return out
# + code_folding=[]
# Evaluation
def compute_f1(preds, y):
rounded_preds = F.softmax(preds)
_, indices = torch.max(rounded_preds, 1)
correct = (indices == y).float()
acc = correct.sum()/len(correct) # compute accuracy
y_pred = np.array(indices.cpu().numpy())
y_true = np.array(y.cpu().numpy())
result = precision_recall_fscore_support(y_true, y_pred, average=None, labels=[0,1])
# print(result[2][0],result[2][1])
f1_average = (result[2][0]+result[2][1])/2 # average F1 score of Favor and Against
return acc, f1_average, result[0], result[1]
# + code_folding=[]
# Main
def data_loader(x_all, batch_size, data_type):
x_input_ids = torch.tensor(x_all[0], dtype=torch.long).cuda()
x_seg_ids = torch.tensor(x_all[1], dtype=torch.long).cuda()
x_atten_masks = torch.tensor(x_all[2], dtype=torch.long).cuda()
y = torch.tensor(x_all[3], dtype=torch.long).cuda()
x_len = torch.tensor(x_all[4], dtype=torch.long).cuda()
tensor_loader = TensorDataset(x_input_ids,x_seg_ids,x_atten_masks,y,x_len)
if data_type == 'train':
data_loader = DataLoader(tensor_loader, shuffle=True, batch_size=batch_size)
else:
data_loader = DataLoader(tensor_loader, shuffle=False, batch_size=batch_size)
return x_input_ids, x_seg_ids, x_atten_masks, y, x_len, data_loader
def sep_test_set(input_data):
# split the combined test set for Trump, Biden and Bernie
data_list = [input_data[:777], input_data[777:1522], input_data[1522:2157]]
return data_list
def run_classifier(input_word_pair,model_select,train_mode):
random_seeds = [0,1,14,15,16,17,19]
target_word_pair = input_word_pair
#Creating Normalization Dictionary
with open("./noslang_data.json", "r") as f:
data1 = json.load(f)
data2 = {}
with open("./emnlp_dict.txt","r") as f:
lines = f.readlines()
for line in lines:
row = line.split('\t')
data2[row[0]] = row[1].rstrip()
normalization_dict = {**data1,**data2}
for target_index in range(len(target_word_pair)):
best_result, best_val = [], []
for seed in random_seeds:
print("current random seed: ", seed)
if train_mode == "unified":
filename1 = '/home/ubuntu/Stance_ACL2021/raw_train_all.csv'
filename2 = '/home/ubuntu/Stance_ACL2021/raw_val_all.csv'
filename3 = '/home/ubuntu/Stance_ACL2021/raw_test_all.csv'
elif train_mode == "adhoc":
filename1 = '/home/ubuntu/Stance_ACL2021/raw_train_'+target_word_pair[target_index]+'.csv'
filename2 = '/home/ubuntu/Stance_ACL2021/raw_val_'+target_word_pair[target_index]+'.csv'
filename3 = '/home/ubuntu/Stance_ACL2021/raw_test_'+target_word_pair[target_index]+'.csv'
x_train,y_train,x_train_target = clean_all(filename1, normalization_dict)
x_val,y_val,x_val_target = clean_all(filename2, normalization_dict)
x_test,y_test,x_test_target = clean_all(filename3, normalization_dict)
num_labels = len(set(y_train))
# print(x_train_target[0])
x_train_all = [x_train,y_train,x_train_target]
x_val_all = [x_val,y_val,x_val_target]
x_test_all = [x_test,y_test,x_test_target]
# set up the random seed
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
# prepare for model
x_train_all,x_val_all,x_test_all = data_helper_bert(x_train_all,x_val_all,x_test_all,model_select)
# print(x_test_all[0][0])
x_train_input_ids, x_train_seg_ids, x_train_atten_masks, y_train, x_train_len, trainloader = \
data_loader(x_train_all, batch_size, 'train')
x_val_input_ids, x_val_seg_ids, x_val_atten_masks, y_val, x_val_len, valloader = \
data_loader(x_val_all, batch_size, 'val')
x_test_input_ids, x_test_seg_ids, x_test_atten_masks, y_test, x_test_len, testloader = \
data_loader(x_test_all, batch_size, 'test')
model = stance_classifier(num_labels,model_select).cuda()
for n,p in model.named_parameters():
if "bert.embeddings" in n:
p.requires_grad = False
optimizer_grouped_parameters = [
{'params': [p for n, p in model.named_parameters() if n.startswith('bert.encoder')] , 'lr': lr},
{'params': [p for n, p in model.named_parameters() if n.startswith('bert.pooler')] , 'lr': 1e-3},
{'params': [p for n, p in model.named_parameters() if n.startswith('linear')], 'lr': 1e-3},
{'params': [p for n, p in model.named_parameters() if n.startswith('out')], 'lr': 1e-3}
]
loss_function = nn.CrossEntropyLoss(reduction='sum')
optimizer = AdamW(optimizer_grouped_parameters)
sum_loss = []
sum_val = []
train_f1_average = []
val_f1_average = []
if train_mode == "unified":
test_f1_average = [[] for i in range(3)]
elif train_mode == "adhoc":
test_f1_average = [[]]
for epoch in range(0, total_epoch):
print('Epoch:', epoch)
train_loss, valid_loss = [], []
model.train()
for input_ids,seg_ids,atten_masks,target,length in trainloader:
optimizer.zero_grad()
output1 = model(input_ids, seg_ids, atten_masks, length)
loss = loss_function(output1, target)
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), 1)
optimizer.step()
train_loss.append(loss.item())
sum_loss.append(sum(train_loss)/len(x_train))
print(sum_loss[epoch])
# evaluation on dev set
model.eval()
val_preds = []
with torch.no_grad():
for input_ids,seg_ids,atten_masks,target,length in valloader:
pred1 = model(input_ids, seg_ids, atten_masks, length)
val_preds.append(pred1)
pred1 = torch.cat(val_preds, 0)
acc, f1_average, precision, recall = compute_f1(pred1,y_val)
val_f1_average.append(f1_average)
# evaluation on test set
with torch.no_grad():
test_preds = []
for input_ids,seg_ids,atten_masks,target,length in testloader:
pred1 = model(input_ids, seg_ids, atten_masks, length)
test_preds.append(pred1)
pred1 = torch.cat(test_preds, 0)
if train_mode == "unified":
pred1_list = sep_test_set(pred1)
y_test_list = sep_test_set(y_test)
else:
pred1_list = [pred1]
y_test_list = [y_test]
for ind in range(len(y_test_list)):
pred1 = pred1_list[ind]
acc, f1_average, precision, recall = compute_f1(pred1,y_test_list[ind])
test_f1_average[ind].append(f1_average)
best_epoch = [index for index,v in enumerate(val_f1_average) if v == max(val_f1_average)][-1]
best_result.append([f1[best_epoch] for f1 in test_f1_average])
print("******************************************")
print("dev results with seed {} on all epochs".format(seed))
print(val_f1_average)
best_val.append(val_f1_average[best_epoch])
print("******************************************")
print("test results with seed {} on all epochs".format(seed))
print(test_f1_average)
print("******************************************")
# model that performs best on the dev set is evaluated on the test set
print("model performance on the test set: ")
print(best_result)
# +
# run classifier in unified setting
lr = 2e-5
batch_size = 32
total_epoch = 3
run_classifier(['all'],'Bertweet','unified')
# +
# run classifier in adhoc setting
lr = 2e-5
batch_size = 32
total_epoch = 3
run_classifier(['trump2'],'Bertweet','adhoc')
run_classifier(['biden2'],'Bertweet','adhoc')
run_classifier(['bernie2'],'Bertweet','adhoc')
| jupyter/pstance_run.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Evaluate Model with Data Generated for It
# !ls ../resources/nmt_output/en_ja
# !ls ../resources/hybrid_nmt_output_appended/en_ja/*noisy*valid.ref
# !cat ../resources/nmt_output/results.txt
import pandas as pd
import glob
from collections import defaultdict
import os
import sacrebleu
import matplotlib.pyplot as plt
def get_table(file_list, base_list=None):
table = dict()
for ref_file in base_list:
key = os.path.split(ref_file)[-1].split('.')[0]
key = f"base_{key.split('_Previous_')[1][0]}"
hypo_file = os.path.splitext(ref_file)[0]
table[key] = []
with open(ref_file) as ref_source, open(hypo_file) as hypo_source:
for ref, hypo in zip(ref_source, hypo_source):
table[key].append((hypo.strip(), sacrebleu.corpus_bleu(hypo, ref).score))
for ref_file in file_list:
key = os.path.split(ref_file)[-1].split('.')[0]
if '1-to-1' in key:
key = 'model_bias_0'
base_key = None
else:
key = f"model_bias_{key.split('_bias_')[1][0]}"
base_key = f"base_{key[-1]}"
hypo_file = os.path.splitext(ref_file)[0]
table["reference"] = []
table[key] = []
with open(ref_file) as ref_source, open(hypo_file) as hypo_source:
for index, (ref, hypo) in enumerate(zip(ref_source, hypo_source)):
table["reference"].append(ref.strip())
if base_key:
base_score = table[base_key][index][1]
table[key].append([hypo.strip(), sacrebleu.corpus_bleu(hypo, ref).score - base_score])
else:
table[key].append([hypo.strip(), sacrebleu.corpus_bleu(hypo, ref).score])
for index in range(1, 6):
table.pop(f"base_{index}")
for sent_id, item in enumerate(table[f"model_bias_{index}"]):
table[f"model_bias_{index}"][sent_id][1] += table['model_bias_0'][sent_id][1]
return pd.DataFrame(table)
en_ja_valid = get_table(glob.glob("../resources/nmt_output/en_ja/*valid.ref"), glob.glob("../resources/hybrid_nmt_output_appended/en_ja/*noisy*valid.ref"))
en_ja_test = get_table(glob.glob("../resources/nmt_output/en_ja/*test.ref"), glob.glob("../resources/hybrid_nmt_output_appended/en_ja/*noisy*test.ref"))
ja_en_valid = get_table(glob.glob("../resources/nmt_output/ja_en/*valid.ref"), glob.glob("../resources/hybrid_nmt_output_appended/en_ja/*noisy*valid.ref"))
ja_en_test = get_table(glob.glob("../resources/nmt_output/ja_en/*test.ref"), glob.glob("../resources/hybrid_nmt_output_appended/en_ja/*noisy*test.ref"))
en_ja_valid
[(key, *value) for key, value in dict(en_ja_valid.loc[:, en_ja_valid.columns != 'reference'].iloc[0]).items()]
def get_oracle(table):
results = {"hypothesis": [], "reference": []}
for index, row in table.loc[:, table.columns != 'reference'].iterrows():
line = [(key, *value) for key, value in dict(row).items()]
results["hypothesis"].append(max(line, key=lambda x: x[-1])[:-1])
results["reference"].append(table.iloc[index]['reference'])
return results
oracle_en_ja_valid = get_oracle(en_ja_valid)
oracle_en_ja_test = get_oracle(en_ja_test)
oracle_ja_en_valid = get_oracle(ja_en_valid)
oracle_ja_en_test = get_oracle(ja_en_test)
oracle_ja_en_valid
def save_oracle(oracle, save_path, suffix=""):
with open(save_path + f'/oracle_table_{suffix}.tsv', 'w') as table, open(save_path + f'/oracle_{suffix}.sys', 'w') as target:
table.write("model\treference\thypothesis\n")
for (model, sys), ref in zip(oracle['hypothesis'], oracle['reference']):
table.write('\t'.join((model, ref, sys)) + '\n')
target.write(sys + '\n')
return save_path + f'/oracle_table_{suffix}.tsv'
save_oracle(oracle_en_ja_valid, '../resources/nmt_output/en_ja', "valid")
save_oracle(oracle_ja_en_valid, '../resources/nmt_output/ja_en', "valid")
save_oracle(oracle_en_ja_test, '../resources/nmt_output/en_ja', "test")
save_oracle(oracle_ja_en_test, '../resources/nmt_output/ja_en', "test")
enja_table_valid = pd.read_csv('../resources/nmt_output/en_ja/oracle_table_valid.tsv', delimiter='\t')
jaen_table_valid = pd.read_csv('../resources/nmt_output/ja_en/oracle_table_valid.tsv', delimiter='\t')
enja_table_test = pd.read_csv('../resources/nmt_output/en_ja/oracle_table_test.tsv', delimiter='\t')
jaen_table_test = pd.read_csv('../resources/nmt_output/ja_en/oracle_table_test.tsv', delimiter='\t')
enja_table_valid
# +
from collections import defaultdict
def corpus_by_model(table, key='model'):
result = defaultdict(lambda : defaultdict(list))
for index, row in table.iterrows():
result[row[key]]['reference'].append(row['reference'])
result[row[key]]['hypothesis'].append(row['hypothesis'])
for key, value in result.items():
value['reference'] = [value['reference']]
return result
# -
enja_corpus_valid = corpus_by_model(enja_table_valid)
enja_corpus_test = corpus_by_model(enja_table_test)
jaen_corpus_valid = corpus_by_model(jaen_table_valid)
jaen_corpus_test = corpus_by_model(jaen_table_test)
enja_corpus_valid = corpus_by_model(enja_table_valid, 'context_sentence_distance')
enja_corpus_test = corpus_by_model(enja_table_test, 'context_sentence_distance')
jaen_corpus_valid = corpus_by_model(jaen_table_valid, 'context_sentence_distance')
jaen_corpus_test = corpus_by_model(jaen_table_test, 'context_sentence_distance')
def calculate_bleu(table):
result = defaultdict(dict)
for key, value in table.items():
print(key)
corpus_score = sacrebleu.corpus_bleu(value['hypothesis'], value['reference']).score
sentence_scores = [sacrebleu.sentence_bleu(hypo, ref).score for hypo, ref in zip(value['hypothesis'], value['reference'][0])]
sentence_scores_mean = sum(sentence_scores) / len(sentence_scores)
print(corpus_score)
print(sentence_scores_mean)
result[key]['corpus_bleu'] = corpus_score
result[key]['sentence_bleu_mean'] = sentence_scores_mean
print('\n')
result = dict(sorted(result.items(), key=lambda x : x[0]))
return result
enja_bleu_valid = calculate_bleu(enja_corpus_valid)
enja_bleu_test = calculate_bleu(enja_corpus_test)
jaen_bleu_valid = calculate_bleu(jaen_corpus_valid)
jaen_bleu_test = calculate_bleu(jaen_corpus_test)
enja_table_valid.head(10)
ja_en_valid.head(10)
jaen_table_valid[jaen_table_valid['context_sentence_distance'] == 0]['hypothesis']
# +
import matplotlib.pyplot as plt
#ax1 = enja_bleu_valid['sentence_bleu_mean'].plot(rot=90, kind='bar')
#ax2 = enja_table_valid.groupby('model').count()['reference'].plot(rot=90)
def plot_result(table, bleu, bleu_type='sentence_bleu_mean'):
fig, ax1 = plt.subplots()
color = 'tab:cyan'
ax1.set_xlabel('context distance')
ax1.set_ylabel('average sentence bleu', color=color) # we already handled the x-label with ax1
ax1.bar(list(bleu.keys()), pd.DataFrame(bleu).T[bleu_type], color=color)
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis
color = 'black'
ax2.set_ylabel('count', color=color)
counter = table.groupby('model').count()
ax2.plot(counter.index, counter['reference'], color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout() # otherwise the right y-label is slightly clipped
plt.show()
return fig
# -
fig = plot_result(enja_table_valid, enja_bleu_valid)
fig.savefig('/home/litong/context_translation/resources/nmt_output/enja_oracle_distribution_valid.png')
fig = plot_result(enja_table_valid, enja_bleu_valid, 'corpus_bleu')
fig = plot_result(enja_table_test, enja_bleu_test)
fig.savefig('/home/litong/context_translation/resources/nmt_output/enja_oracle_distribution_test.png')
fig = plot_result(enja_table_test, enja_bleu_test, 'corpus_bleu')
fig = plot_result(jaen_table_valid, jaen_bleu_valid)
fig.savefig('/home/litong/context_translation/resources/nmt_output/jaen_oracle_distribution_valid.png')
fig = plot_result(jaen_table_valid, jaen_bleu_valid, 'corpus_bleu')
fig = plot_result(jaen_table_test, jaen_bleu_test)
fig.savefig('/home/litong/context_translation/resources/nmt_output/jaen_oracle_distribution_test.png')
fig = plot_result(jaen_table_test, jaen_bleu_test, 'corpus_bleu')
def readable_model_name(model):
return int(model[-1])
enja_table_valid['model'] = enja_table_valid['model'].apply(readable_model_name)
jaen_table_valid['model'] = jaen_table_valid['model'].apply(readable_model_name)
enja_table_test['model'] = enja_table_test['model'].apply(readable_model_name)
jaen_table_test['model'] = jaen_table_test['model'].apply(readable_model_name)
jaen_table_test[(jaen_table_test['model'] == 0) & (jaen_table_test['context_sentence_distance'] != 0)]
jaen_table_test.groupby('context_sentence_distance').count()
def get_real_context(table, lengths):
real_bias = []
start = 0
for length in lengths:
index = 0
for _, row in table.iloc[start:start + length].iterrows():
model = int(row['model'])
if index < model:
real_bias.append(0)
else:
real_bias.append(model)
index += 1
start += length
#table['context_sentence_distance'] = list(map(str, real_bias))
table['context_sentence_distance'] = real_bias
#return real_bias
get_real_context(enja_table_valid, valid_lengths)
get_real_context(enja_table_test, test_lengths)
get_real_context(jaen_table_valid, valid_lengths)
get_real_context(jaen_table_test, test_lengths)
enja_table_valid.groupby('context_sentence_distance').count()['reference'].plot()
enja_table_test.groupby('context_sentence_distance').count()['reference'].plot()
jaen_table_valid.groupby('context_sentence_distance').count()['reference'].plot()
jaen_table_test.groupby('context_sentence_distance').count()['reference'].plot()
def merge_table(a, b):
result = {}
result['valid'] = dict(a.groupby('context_sentence_distance').count()['reference'])
result['test'] = dict(b.groupby('context_sentence_distance').count()['reference'])
return result
enja_table = merge_table(enja_table_test, enja_table_valid)
jaen_table = merge_table(jaen_table_test, jaen_table_valid)
ax = pd.DataFrame(enja_table).plot()
ax.figure.savefig('/home/litong/context_translation/resources/nmt_output/enja_oracle_distribution.png')
ax = pd.DataFrame(jaen_table).plot()
ax.figure.savefig('/home/litong/context_translation/resources/nmt_output/jaen_oracle_distribution.png')
train_path = '/home/litong/context_translation/resources/train_77b9dbd0538187438b8dd13a8f6b935c.pkl'
valid_path = '/home/litong/context_translation/resources/valid_0a06896723176aff827aac15a2e1ac94.pkl'
test_path = '/home/litong/context_translation/resources/test_312c4d4a71cc6fc659e7a08be8346726.pkl'
import pickle
def get_lengths(path):
with open(path, 'rb') as source:
data = pickle.load(source)
return [len(doc["pairs"]) for doc in data.values()]
valid_lengths = get_lengths(valid_path)
test_lengths = get_lengths(test_path)
sum(test_lengths)
plt.xticks(rotation=90)
plt.plot([f"bias {index}" for index in range(0, 6)], enja_table.groupby('model').count()['reference'], label="enja")
plt.plot([f"bias {index}" for index in range(0, 6)], jaen_table.groupby('model').count()['reference'], label="jaen")
plt.subplot(1, 2, 1)
plt.plot([f"bias {index}" for index in range(0, 6)], jaen_table.groupby('model').count()['reference'], label="jaen")
plt.subplot(1, 2, 2)
plt.plot([f"bias {index}" for index in range(0, 6)], enja_table.groupby('model').count()['reference'], label="enja")
# !cat ../resources/nmt_output/en_ja/oracle.sys | sacrebleu -b -w 2 ../resources/nmt_output/en_ja/jiji_onto_ami_conver_train_1-to-1_en_ja.ref
# !cat ../resources/nmt_output/ja_en/oracle.sys | sacrebleu -b -w 2 ../resources/nmt_output/ja_en/jiji_onto_ami_conver_train_1-to-1_ja_en.ref
# # Evaluate Model with All Kind of Data
# !ls ../resources/hybird_nmt_output/en_ja
os.path.splitext(glob.glob("../resources/hybird_nmt_output/en_ja/*ref")[0])
for direction in ("en_ja", "ja_en"):
for bias in range(1, 6):
table = get_table(glob.glob(f"../resources/hybird_nmt_output/{direction}/*bias_{bias}*ref"))
oracle = get_oracle(table)
tsv_path = save_oracle(oracle, table["reference"], f"../resources/hybird_nmt_output/{direction}/", f"bias_{bias}_model")
for direction in ("en_ja", "ja_en"):
for bias in range(1, 6):
table = get_table(glob.glob(f"../resources/hybird_nmt_output/{direction}/*Bias_{bias}*ref"))
oracle = get_oracle(table)
tsv_path = save_oracle(oracle, table["reference"], f"../resources/hybird_nmt_output/{direction}/", f"bias_{bias}_data")
# !cat ../resources/hybird_nmt_output/results.txt
# ## Oracle scores for en -> ja
# !find ../resources/hybird_nmt_output/en_ja -type f -name "*data.sys" | sort | xargs -I {} sh -c "echo {}; cat {} | sacrebleu -b -w 2 ../resources/hybird_nmt_output/en_ja/reference"
# !find ../resources/hybird_nmt_output/en_ja -type f -name "*model.sys" | sort | xargs -I {} sh -c "echo {}; cat {} | sacrebleu -b -w 2 ../resources/hybird_nmt_output/en_ja/reference"
# ## Oracle scores for ja -> en
# Use n-biased data on every 2-to-1 system
# !find ../resources/hybird_nmt_output/ja_en -type f -name "*data.sys" | sort | xargs -I {} sh -c "echo {}; cat {} | sacrebleu -b -w 2 ../resources/hybird_nmt_output/ja_en/reference"
# Use every kind of biased data on each n-biased 2-to-1 system
# !find ../resources/hybird_nmt_output/ja_en -type f -name "*model.sys" | sort | xargs -I {} sh -c "echo {}; cat {} | sacrebleu -b -w 2 ../resources/hybird_nmt_output/ja_en/reference"
| notebooks/Oracle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import os
from matplotlib import cm
import matplotlib
from scipy import optimize
plt.style.use('seaborn-deep')
plt.style.use('classic')
matplotlib.rcParams['axes.prop_cycle'] = matplotlib.cycler('color', ['#0072B2', '#009E73', '#D55E00', '#CC79A7', '#F0E442', '#56B4E9'])
matplotlib.rcParams['axes.linewidth'] = 1.3
matplotlib.rcParams['lines.linewidth'] = 1.3
matplotlib.rc('text', usetex=True)
matplotlib.rcParams['text.latex.preamble'] = [r"\usepackage{amsmath}"]
matplotlib.rcParams.update({'font.size': 8})
# +
gen = []
for i in range(11):
gen.append(np.genfromtxt('./NSGA_joukowskyCLCD/data/gen%i.txt' %i, delimiter=','))
# -
ms = np.linspace(15,20,len(gen))
al = np.linspace(1,0.5,len(gen))
color = cm.jet(np.linspace(0,1,len(gen)))
# +
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))
ax1.plot(gen[0][:,0],gen[0][:,1],'.',alpha=al[0],color=color[0],markersize=ms[0])
ax1.plot(gen[1][:,0],gen[1][:,1],'.',alpha=al[1],color=color[1],markersize=ms[1])
ax1.plot(gen[2][:,0],gen[2][:,1],'.',alpha=al[2],color=color[2],markersize=ms[2])
ax1.plot(gen[3][:,0],gen[3][:,1],'.',alpha=al[3],color=color[3],markersize=ms[3])
ax1.plot(gen[4][:,0],gen[4][:,1],'.',alpha=al[4],color=color[4],markersize=ms[4])
ax1.plot(gen[5][:,0],gen[5][:,1],'.',alpha=al[5],color=color[5],markersize=ms[5])
ax1.plot(gen[6][:,0],gen[6][:,1],'.',alpha=al[6],color=color[6],markersize=ms[6])
ax1.plot(gen[7][:,0],gen[7][:,1],'.',alpha=al[7],color=color[7],markersize=ms[7])
ax1.plot(gen[8][:,0],gen[8][:,1],'.',alpha=al[8],color=color[8],markersize=ms[8])
ax1.plot(gen[9][:,0],gen[9][:,1],'.',alpha=al[9],color=color[9],markersize=ms[9])
ax1.plot(gen[10][:,0],gen[10][:,1],'.',alpha=al[10],color=color[10],markersize=ms[10])
ax1.plot([0.10],[0.20],'.',alpha=al[0],color=color[0],markersize=ms[0], label='Gen. 0')
ax1.plot([0.10],[0.20],'.',alpha=al[6],color=color[6],markersize=ms[6], label='Gen. 6')
ax1.plot([0.10],[0.20],'.',alpha=al[1],color=color[1],markersize=ms[1], label='Gen. 1')
ax1.plot([0.10],[0.20],'.',alpha=al[7],color=color[7],markersize=ms[7], label='Gen. 7')
ax1.plot([0.10],[0.20],'.',alpha=al[2],color=color[2],markersize=ms[2], label='Gen. 2')
ax1.plot([0.10],[0.20],'.',alpha=al[8],color=color[8],markersize=ms[8], label='Gen. 8')
ax1.plot([0.10],[0.20],'.',alpha=al[3],color=color[3],markersize=ms[3], label='Gen. 3')
ax1.plot([0.10],[0.20],'.',alpha=al[9],color=color[9],markersize=ms[9], label='Gen. 9')
ax1.plot([0.10],[0.20],'.',alpha=al[4],color=color[4],markersize=ms[4], label='Gen. 4')
ax1.plot([0.10],[0.20],'.',alpha=al[10],color=color[10],markersize=ms[10], label='Gen.10')
ax1.plot([0.10],[0.20],'.',alpha=al[5],color=color[5],markersize=ms[5], label='Gen. 5')
k = 11
for i in range(k):
ax2.plot(-0.5*30**2*1.225*gen[i][:,2],0.5*30**2*1.225*gen[i][:,3],'.',alpha=al[i],color=color[i],markersize=ms[i])
ax1.legend(bbox_to_anchor=(2.25,-0.1), fontsize=26, ncol=6)
ax1.set_title('Search space', fontsize=28)
ax2.set_title('Function space', fontsize=28)
ax1.tick_params(axis = 'both', labelsize = 26)
ax2.tick_params(axis = 'both', labelsize = 26)
ax1.set_xlabel(r'$\mu_x$',fontsize=28)
ax1.set_ylabel(r'$\mu_y$',fontsize=28)
ax2.set_xlabel(r"Lift ($N$)",fontsize=26)
ax2.set_ylabel(r'Drag ($N$)',fontsize=26)
ax1.set_xlim([-0.32,-0.08])
ax1.set_ylim([-0.02,0.17])
ax2.set_xlim([0.0,45.0])
ax2.set_ylim([1.2,2.6])
# plt.savefig('./cLcDgen%i.png' %(k-1), bbox_inches='tight',dpi=300)
| cases/results/joukLD.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
# # 環境の操作
#
# Azure Machine Learning 実験としてスクリプトを実行する場合は、実験実行の実行コンテキストを定義する必要があります。実行コンテキストは以下で構成されます。
#
# * スクリプト向けの Python 環境。スクリプトで使用するすべての Python パッケージを含める必要があります。
# * スクリプトが実行されるコンピューティング先。実験実行を開始するローカル ワークステーション、またはオンデマンドで提供されるトレーニング クラスターなどのリモート コンピューター先になります。
#
# このノートブックでは、実験の *環境* について学びます。
#
# ## Azure Machine Learning SDK をインストールする
#
# Azure Machine Learning SDK は頻繁に更新されます。以下のセルを実行し、ノートブック ウィジェットをサポートする追加パッケージとともに最新のリリースにアップグレードします。
# !pip install --upgrade azureml-sdk azureml-widgets
# ## ワークスペースに接続する
#
# 最新バージョンの SDK がインストールされているため、ワークスペースに接続できます。
#
# > **注**: Azure サブスクリプションでまだ認証済みのセッションを確立していない場合は、リンクをクリックして認証コードを入力し、Azure にサインインして認証するよう指示されます。
# +
import azureml.core
from azureml.core import Workspace
# 保存した構成ファイルからワークスペースを読み込む
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
# -
# ## 実験用データを準備する
#
# このノートブックでは、糖尿病患者の詳細を含むデータセットを使用します。次のセルを実行してこのデータセットを作成します (すでに存在する場合は、コードによって既存のバージョンを検索します)
# +
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # 糖尿病 CSV ファイルを /data にアップロードする
target_path='diabetes-data/', # データストアのフォルダー パスに入れる
overwrite=True, # 同じ名前の既存のファイルを置き換える
show_progress=True)
#データストア上のパスから表形式のデータセットを作成する (しばらく時間がかかる場合があります)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# 表形式のデータセットを登録する
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
# -
# ## トレーニング スクリプトを作成する
#
# 次の 2 つのセルを実行して作成します。
# 1. 新しい実験用のフォルダー
# 2. **SCIkit-learn を使用** してモデルをトレーニングし、**matplotlib** を使用して ROC 曲線をプロットするトレーニング スクリプト ファイル。
# +
import os
# 実験ファイル用フォルダーを作成する
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
# +
# %%writefile $experiment_folder/diabetes_training.py
# ライブラリをインポートする
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# スクリプト引数を取得する
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# 正規化ハイパーパラメーターを設定する
reg = args.reg_rate
# 実験実行コンテキストを取得する
run = Run.get_context()
# 糖尿病データを読み込む (入力データセットとして渡される)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# 特徴とラベルを分離する
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# データをトレーニング セットとテスト セットに分割する
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# ロジスティック回帰モデルのトレーニング
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# 正確さを計算する
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# AUC を計算する
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# ROC 曲線をプロットする
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# 対角 50% ラインをプロットする
plt.plot([0, 1], [0, 1], 'k--')
# モデルによって達成された FPR と TPR をプロットする
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# 出力フォルダーに保存されたファイルは、自動的に実験レコードにアップロードされます
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
# -
# ## 環境を定義する
#
# Azure Machine Learning で実験として Python スクリプトを実行すると、Conda 環境が作成され、スクリプトの実行コンテキストが定義されます。Azure Machine Learning には、多くの共通パッケージを含む既定の環境を提供します。これには、実験実行の操作に必要なライブラリを含む **azureml-defaults** パッケージ、**Pandas** や **numpy** などの一般なパッケージが含まれます。
#
# また、**Conda** または **PIP** を使用して独自の環境を定義し、パッケージを追加して、実験が必要なすべてのライブラリにアクセスできるようにすることもできます。
#
# > **注**: Conda 依存関係を最初にインストールした後、pip 依存関係をインストールします。**pip** パッケージでは pip 依存関係をインストールする必要があるため、これを Conda 依存関係にも含めるようお勧めします (忘れた場合でも Azure ML がインストールしますが、ログに警告が表示されます!)
# +
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# 実験用 Python 環境を作成する
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # 依存関係を Azure ML に管理させる
diabetes_env.docker.enabled = True # ドッカー コンテナーを使用する
# パッケージの依存関係のセットを作成する (必要に応じて Conda または PIP)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# 環境に依存関係を追加する
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
# -
# これで、環境を使用し、スクリプトを実験として実行することができます。
#
# 次のコードでは、作成した環境を ScriptRunConfig に割り当て、実験を送信します。実験の実行中に、ウィジェットおよび **azureml_logs/60_control_log.txt** 出力ログで実行の詳細を確認すると、Conda 環境が構築されていることがわかります。
# +
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# トレーニング データセットを取得する
diabetes_ds = ws.datasets.get("diabetes dataset")
# スクリプト構成を作成する
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # 正則化率パラメーター
'--input-data', diabetes_ds.as_named_input('training_data')], # データセットへの参照
environment=diabetes_env)
# 実験を送信する
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
# -
# 実験では、必要なすべてのパッケージを含む環境が正常に使用されました - Azure Machine Learning Studio で実行された実験のメトリックと出力を表示するか、以下のコード (**Scikit-learn** を使用してトレーニングされたモデルや **matplotlib** を使用して生成された ROC チャート イメージを含む) を実行して表示できます。
# 指標の記録を取得する
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
# ## 環境を登録する
#
# 必要なパッケージを使用して環境を定義する手間が省けたので、ワークスペースに登録できます。
# 環境を登録する
diabetes_env.register(workspace=ws)
# 環境は、最初に作成したときに割り当てた名前 (この場合、*diabetes-experiment-env*) で登録されていることに注意してください。
#
# 環境が登録されている場合、同じ要件を持つすべてのスクリプトに再利用できます。たとえば、別のアルゴリズムを使用して糖尿病モデルをトレーニングするフォルダーとスクリプトを作成してみましょう。
# +
import os
# 実験ファイル用フォルダーを作成する
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
# +
# %%writefile $experiment_folder/diabetes_training.py
# ライブラリをインポートする
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# スクリプト引数を取得する
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# 実験実行コンテキストを取得する
run = Run.get_context()
# 糖尿病データを読み込む (入力データセットとして渡される)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# 特徴とラベルを分離する
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# データをトレーニング セットとテスト セットに分割する
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# デシジョン ツリー モデルをトレーニングする
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# 正確さを計算する
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# AUC を計算する
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# ROC 曲線をプロットする
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# 対角 50% ラインをプロットする
plt.plot([0, 1], [0, 1], 'k--')
# モデルによって達成された FPR と TPR をプロットする
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# 出力フォルダーに保存されたファイルは、自動的に実験レコードにアップロードされます
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
# -
# これで、登録された環境を取得し、代替トレーニング スクリプトを実行する新しい実験でこれを使用できます (デシジョン ツリー分類子は正規化パラメーターを必要としないため、ここでは使われていません)。
# +
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# 登録済みの環境を取得する
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# トレーニング データセットを取得する
diabetes_ds = ws.datasets.get("diabetes dataset")
# スクリプト構成を作成する
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # データセットへの参照
environment=registered_env)
# 実験を送信する
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
# -
# 今回は、一致する環境が前回の実行からキャッシュされており、ローカル コンピューティングで再作成する必要がないため、実験の実行速度が速くなります。ただし、異なるコンピューティング ターゲットでも、同じ環境が作成および使用され、実験スクリプトの実行コンテキストの一貫性が確保されます。
#
# 実験のメトリックと出力を見てみましょう。
# 指標の記録を取得する
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
# ## 登録済み環境を表示する
#
# 独自の環境を登録するだけでなく、一般的な実験タイプに対して事前構築され「選別された」環境を活用できます。次のコードは、登録されているすべての環境を一覧表示します。
# +
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
# -
# すべての選別された環境には、***AzureML-*** で始まる名前が付いています (このプレフィックスは、独自の環境では使用できません)。
#
# 選別された環境を詳しく調べ、それぞれのパッケージに含まれているパッケージを確認しましょう。
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
# > **詳細情報**:
# >
# > - Azure Machine Learning での環境の詳細については、[Azure Machine Learning でソフトウェア環境を作成して使用する](https://docs.microsoft.com/azure/machine-learning/how-to-use-environments)を参照してください。
| 05A - Working with Environments.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# # Complex Networks - 5
# #### 1) Implemente a simulação de crescimento de rede por anexação aleatória: comece com uma rede inicial e a cada passo acrescente um novo nó, com grau k (k fixo ou extraído de um sorteio) que deve se ligar a k nós previamente existentes com probabilidade uniforme.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
import warnings
warnings.filterwarnings('ignore')
# +
# Questão 1)
# K = 1
def evolution_network(G,I):
for i in range(I):
existing_nodes = np.unique(G)
new_node = max(existing_nodes) + 1
size = len(existing_nodes) - 1
u = np.random.randint(size)
to_connect = existing_nodes[u]
G = np.append(G,[[new_node,to_connect]],axis = 0)
return G
# Select K
def evolution_network_select_k(G,I,k):
for i in range(I):
existing_nodes = np.unique(G)
new_node = max(existing_nodes) + 1
size = len(existing_nodes) - 1
for j in range(k-1):
u = np.random.randint(size)
to_connect = existing_nodes[u]
G = np.append(G,[[new_node,to_connect]],axis = 0)
return G
# Random K
def evolution_network_random_k(G,I):
for i in range(I):
existing_nodes = np.unique(G)
new_node = max(existing_nodes) + 1
size = len(existing_nodes) - 1
k = np.random.randint(size)
for j in range(k):
u = np.random.randint(size)
to_connect = existing_nodes[u]
G = np.append(G,[[new_node,to_connect]],axis = 0)
return G
# Network Histogram
def network_histogram(G):
a = G[:,1]
G = np.delete(G,a)
G = np.append(G,a)
plt.hist(G)
plt.show()
def network_histogram_2(G):
a = G[:,1]
G = np.delete(G,a)
G = np.append(G,a)
sns.distplot(G, hist = False)
plt.show()
# -
# #### 2) Use o item anterior para criar uma rede com 5000 nós por anexação aleatória e faça um histograma dos graus. Verifique que uma distribuição exponencial se ajusta bem ao histograma.
network_histogram(G)
# #### 3) Justifique a aderência da distribuição exponencial no item anterior. Ou seja, mostre que o histograma converge para a função exponencial (histograma converge? Como??)
network_histogram_2(G)
# #### 4) Repita o exercício (1) mas com anexação preferencial (modelo de Price que generaliza o de Albert-Barabasi). Agora o novo nó que entra na rede tem probabilidade 𝛼 de se ligar aos anteriores por anexação aleatória e probabilidade 1 − 𝛼 de se ligar aos anteriores por anexação preferencial (isto é, a probabilidade do nó ser escolhido é proporcional ao seu grau). O valor de 𝛼 é um parâmetro da simulação (pode ser 0, para a anexação preferencial pura).
# +
# Questão 4)
def evolution_network_price_barabsi(G,I,k,alpha):
for i in range(I):
existing_nodes = np.unique(G)
new_node = max(existing_nodes) + 1
size = len(existing_nodes) - 1
m = np.random.binomial(1,alpha, size = 1)
m = np.asscalar(m)
if m == 0:
a = G[:,1]
b = G[:,0]
z = np.append(a,b)
for j in range(k-1):
u = np.random.choice(z)
to_connect = existing_nodes[u]
G = np.append(G,[[new_node,to_connect]],axis = 0)
else:
for j in range(k-1):
u = np.random.randint(size)
to_connect = existing_nodes[u]
G = np.append(G,[[new_node,to_connect]],axis = 0)
return G
# -
# #### 5) Use o item anterior para criar uma rede com 5000 nós por anexação preferencial e faça um histograma dos graus. Verifique que uma lei de potência se ajusta bem ao histograma.
G = evolution_network_price_barabsi(G,5000,3,0.3)
network_histogram(G)
# #### 6) Justifique a aderência da distribuição de pareto no item anterior. Ou seja, mostre que o histograma converge para uma lei de potência (histograma converge? Como??)
network_histogram_2(G)
| Complex Networks/Complex Networks - 5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Example usage
#
# To use `magmaviz` in a project:
# +
from magmaviz.boxplot import boxplot
from magmaviz.corrplot import corrplot
from magmaviz.histogram import histogram
from magmaviz.scatterplot import scatterplot
from vega_datasets import data
# -
# ### Toy Dataset
from vega_datasets import data
cars = data.cars()
cars.head(3)
# ### Boxplot
# We can create boxplots to view the distribution of a certain variable with one or more categories using the `boxplot()` function. This function will automatically name the columns based on the column names that are supplied, and has an option to facet.
boxplot(cars, 'Miles_per_Gallon', 'Origin')
# ### Correlation plot
# We can create a correlation plot based on all the numeric features in the dataframe by calling `corrplot()`. The values returned are the Pearson correlation coefficents. The color scheme is set to diverging for easy interpretation. There is an option to print the values as a list.
corrplot(cars, print_corr=False)
# ### Histogram
# To generate a histogram for a categorical feature with an aggregation function, we can call `histogram()`. The aggergation functions include average, count, distinct etc. The list of the accepted aggregation functions are supplied in the documentation.
histogram(cars, x='Horsepower', y='count()')
# ### Scatter plot
# A scatter plot based on two numeric features can be created using the `scatterplot()` funciton. The style of the points, labels of the plot and scale are customizable. The names of the x and y are automatically generated if not supplied. Customizations of the points include opacity, shape and size.
scatterplot(cars, 'Horsepower', 'Acceleration', c='Origin')
| docs/example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Foreign Born Population by Year of Entry
# ## Variables
# +
# %matplotlib inline
import os, requests, json, pandas as pd, matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import pandas.io.formats.format as pf
# display numbers (floats) with thousand separator
pd.options.display.float_format = '{:,.1f}'.format
# found this hack to format integers for display with thousand separator
# https://stackoverflow.com/questions/29663252/format-pandas-integers-for-display?answertab=active#tab-top
class IntArrayFormatter(pf.GenericArrayFormatter):
def _format_strings(self):
formatter = self.formatter or '{:,d}'.format
fmt_values = [formatter(x) for x in self.values]
return fmt_values
pf.IntArrayFormatter = IntArrayFormatter
# +
#Set variables
dir_path = os.getcwd()
out_path=os.path.join(dir_path,'output')
year='2018'
dsource='acs'
dsource2='acs5'
dname='subject'
state='36'
place='51000'
cbsa='35620'
keyfile='census_key.txt'
# -
col_list=['NAME',
'S0501_C01_001E','S0501_C01_001M',
'S0502_C01_001E','S0502_C01_001M',
'S0502_C02_001E','S0502_C02_001M',
'S0502_C02_004E','S0502_C02_004M',
'S0502_C02_005E','S0502_C02_005M',
'S0502_C02_006E','S0502_C02_006M',
'S0502_C02_007E','S0502_C02_007M',
'S0502_C02_008E','S0502_C02_008M',
'S0502_C02_009E','S0502_C02_009M',
'S0502_C02_010E','S0502_C02_010M',
]
cols=','.join(col_list)
cols
# +
vars_url=f'https://api.census.gov/data/{year}/{dsource}/{dsource2}/{dname}/variables.json'
response=requests.get(vars_url)
#"variables" is top key in the file - reference it to flatten the file so individual variables become keys
variables=response.json()['variables']
variables
# +
#Iterate through keys and get specific sub-values
varnames={}
for k,v in variables.items():
if k in cols:
varnames[k]=v['label']
sorted_vars = sorted(varnames.items())
sorted_vars
# -
# ## Retrieve Data
base_url = f'https://api.census.gov/data/{year}/{dsource}/{dsource2}/{dname}'
base_url
#Read api key in from file
with open(keyfile) as key:
api_key=key.read().strip()
# NYC
data_url = f'{base_url}?get={cols}&for=place:{place}&in=state:{state}&key={api_key}'
response=requests.get(data_url)
nycdata=response.json()
print('List has', len(nycdata), 'records')
df1=pd.DataFrame(nycdata[1:], columns=nycdata[0])
df1.set_index('NAME',inplace=True)
df1.drop(columns=['state','place'],inplace=True)
df1
# NYMA
data_url = f'{base_url}?get={cols}&for=metropolitan statistical area/micropolitan statistical area:{cbsa}&key={api_key}'
response=requests.get(data_url)
nymadata=response.json()
print('List has', len(nymadata), 'records')
df2=pd.DataFrame(nymadata[1:], columns=nymadata[0])
df2.set_index('NAME',inplace=True)
df2.drop(columns=['metropolitan statistical area/micropolitan statistical area'],inplace=True)
df2
# USA
data_url = f'{base_url}?get={cols}&for=us:1&key={api_key}'
response=requests.get(data_url)
usdata=response.json()
print('List has', len(usdata), 'records')
df3=pd.DataFrame(usdata[1:], columns=usdata[0])
df3.set_index('NAME',inplace=True)
df3.drop(columns=['us'],inplace=True)
df3
places = pd.concat([df1,df2,df3])
places = places.apply(pd.to_numeric)
places.index=['New York City','New York Metro Area','United States']
places
# ## Analysis and Charts
pct_fb=round((places['S0502_C01_001E']/places['S0501_C01_001E'])*100,1)
pct_fb
# +
regions=['S0502_C02_009E','S0502_C02_010E','S0502_C02_005E','S0502_C02_007E','S0502_C02_006E','S0502_C02_008E']
ax=places[regions].plot(kind='barh',stacked=True, figsize=(8,5), cmap='GnBu')
ax.invert_yaxis()
leg_names=['Latin America','Northern America','Europe','Africa','Asia','Oceania']
lgd=plt.legend(leg_names, loc=8,bbox_to_anchor=(0.5, -0.3), ncol=3, prop = {'size':13},frameon=False)
for p in ax.patches:
width, height = p.get_width(), p.get_height()
x, y = p.get_xy()
ax.text(x+width/2,
y+height/2,
'{:.0f}%'.format(width),
horizontalalignment='center',
verticalalignment='center',
fontweight='bold', fontsize=12)
ax.set_xlim(0,100)
ax.set_xlabel("Percent Total", fontsize=12)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
plt.tight_layout()
fig_for=ax.get_figure()
fig_for.savefig(os.path.join(out_path, 'images','foreign_origins.png'),bbox_inches='tight')
# -
| scripts/foreign_pop.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 01 Data Preparation
#
# To the pre-prepared COMPAS data we add predictions for name gender and origin made as made by NamSor API.
# +
# >>> Import Libraries
print("Importing necessary libraries... ")
import openapi_client #NamSor, see https://github.com/namsor/namsor-python-sdk2
from openapi_client.rest import ApiException
import pandas as pd
print("Libraries imported.")
# +
# >>> Import COMPAS data set
print("Importing COMPAS data set... ")
df = pd.read_csv("data/compas_for_namsor.csv")
print("Data set imported. It is has {} entries and looks like this:".format(df.shape[0]))
df.head()
# +
# >>> Preparing for API use
# Get private API Key for NamSor API v2 (contained in txt file)
print("Getting private key... ")
key = ''
try:
with open("key.txt", "r") as file:
key = file.read()
if(len(key) < 0):
raise FileNotFoundError()
except (FileNotFoundError):
print("Could not find private key. Please make sure you have an API key that you stored as key.txt in the root folder.")
print("Got private key.")
# +
print("Setting up NamSor API v2 connection settings...")
# Configure API key authorization: api_key
configuration = openapi_client.Configuration()
configuration.api_key['X-API-KEY'] = key
# create an instance of the personal API class
pers_api_instance = openapi_client.PersonalApi(openapi_client.ApiClient(configuration))
print("Connection set.")
# +
# >>> Classifying names with NamSor API
# Formatting a df of names
print('Formatting names dataframe...')
names_df = df[['entity_id', 'first', 'last']]
print('Names dataframe formatted. It looks like this: ')
print(names_df.head())
# -
def to_first_last_name_geo_in(row) :
''' This function turns a tuple of values [id, first_name, last_name] into a to_first_last_name_geo_in object'''
# https://github.com/namsor/namsor-python-sdk2/blob/master/docs/FirstLastNameGeoIn.md
if(not row[0] or not row[1] or not row[2]):
print("Entered invalid data to be turned into to_first_last_name_geo_in")
return
return openapi_client.FirstLastNameGeoIn(id=row[0],
first_name=row[1],
last_name=row[2],
country_iso2='us') # http://www.vas.com/Tnotes/Country%20Codes.htm
# +
# Formatting a list of batches from the names df so names can be fed to the API batch-wise
print('Creating list of name-batches...')
names_stack = list() # this will be a list of name-batches generated from the df
limit = len(names_df.index)
start = 0
end = -1
batch_size = 1000 #1000 is the API limit given by NamSor
while(end < limit):
start = end + 1
try_end = start + batch_size
if(try_end <= limit):
end = try_end
else:
end = limit
# each list item will fit openapi_client.BatchFirstLastNameGeoIn
current_df_batch = names_df[start:end]
# https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe-in-pandas/55557758#55557758
list_first_last_name_geo_in = [to_first_last_name_geo_in(row) for row in current_df_batch[['entity_id', 'first', 'last']].to_numpy()]
names_stack.append(list_first_last_name_geo_in)
print('List of batches created.')
print('Will need to make {} calls.'.format(len(names_stack)))
# -
def get_batch(list_first_last_name_geo_in):
return openapi_client.BatchFirstLastNameGeoIn(personal_names=list_first_last_name_geo_in)
def predict_gender_batch(list_first_last_name_geo_in):
api_response = pers_api_instance.gender_geo_batch(batch_first_last_name_geo_in=batch_first_last_name_geo_in)# call api
return api_response.personal_names
def predict_ethnicity_batch(list_first_last_name_geo_in):
# "Output is W_NL (white, non latino), HL (hispano latino), A (asian, non latino), B_NL (black, non latino)."
api_response = pers_api_instance.us_race_ethnicity_batch(batch_first_last_name_geo_in=batch_first_last_name_geo_in)# call api
return api_response.personal_names
# +
# Sending in one batch at a time and saving the result answer by answer.
print("Sending batches to the API...")
result_gender = []
result_ethnicity = []
current = 0
limit = len(names_stack)
while(current < limit): # I assume len(result_gender) == len(result_ethnicity)
print(current)
batch_first_last_name_geo_in = get_batch(names_stack[current])
try:
result_gender.extend(predict_gender_batch(batch_first_last_name_geo_in))
result_ethnicity.extend(predict_ethnicity_batch(batch_first_last_name_geo_in))
except ApiException as e:
print("Exception when calling PersonalApi: {}".format(e))
if(len(result_gender) != (batch_size * current + len(names_stack[current])) or
len(result_ethnicity) != (batch_size * current + len(names_stack[current]))):
print("Some names got lost when the exception at stack {} occurred. Please try again.".format(current))
break
if(len(result_gender) == (batch_size * current + len(names_stack[current]))):
print("No names got lost for gender predictions. Trying again with stack size {}...".format(len(names_stack)))
if(len(result_ethnicity) == (batch_size * current + len(names_stack[current]))):
print("No names got lost for ethnicity predictions. Trying again with stack size {}...".format(len(names_stack)))
current -= 1
continue
current += 1
print("All batches analyzed.")
print(result_gender[:5])
print(result_ethnicity[:5])
# +
# >>> TODO: Save results to dataframe
df.reset_index(inplace=True)
df.set_index('entity_id', inplace=True)
# Convert results (list of openapi_client.models.personal_name_gendered_out.PersonalNameGenderedOut) to (list of dictionaries)
print('Filling the results into the names dataframe...')
for i in range(len(result_gender)):
oapi_el = result_gender[i]
current_id = int(oapi_el.id)
df.loc[current_id, 'sex_pred'] = oapi_el.likely_gender
df.loc[current_id, 'sex_pred_prob'] = oapi_el.probability_calibrated
oapi_el = result_ethnicity[i]
df.loc[current_id, 'race_pred'] = oapi_el.race_ethnicity
df.loc[current_id, 'race_pred_prob'] = oapi_el.probability_calibrated
print('Dataframe completed with API results. Here are some results: {}'.format(df.head()))
# -
# Saving results to 'names_cat.csv'
print("Saving compas dataframe with predictions for gender and ethnicity to CSV... ")
df.to_csv("data/compas_with_predictions.csv")
print("CSV saved!")
| 01-Data-Preparation-02-Data-Completion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deep Net in Keras with TensorBoard
# In this notebook, we adapt our [deep net](https://github.com/the-deep-learners/deep-learning-illustrated/blob/master/notebooks/deep_net_in_keras.ipynb) code to enable TensorBoard logging.
# [](https://colab.research.google.com/github/the-deep-learners/deep-learning-illustrated/blob/master/notebooks/deep_net_in_keras_with_tensorboard.ipynb)
# #### Load dependencies
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD
from keras.callbacks import TensorBoard # new!
# #### Load data
(X_train, y_train), (X_valid, y_valid) = mnist.load_data()
# #### Preprocess data
X_train = X_train.reshape(60000, 784).astype('float32')
X_valid = X_valid.reshape(10000, 784).astype('float32')
X_train /= 255
X_valid /= 255
n_classes = 10
y_train = keras.utils.to_categorical(y_train, n_classes)
y_valid = keras.utils.to_categorical(y_valid, n_classes)
# #### Design neural network architecture
# +
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(BatchNormalization())
model.add(Dense(64, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(64, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
# -
model.summary()
# #### Configure model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# #### Set TensorBoard logging directory
tensorboard = TensorBoard('logs/deep-net')
# #### Train!
model.fit(X_train, y_train,
batch_size=128,
epochs=20, verbose=1,
validation_data=(X_valid, y_valid),
callbacks=[tensorboard])
| notebooks/deep_net_in_keras_with_tensorboard.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Coding Exercise #0503
# ### 1. Classification with KNN:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import seaborn as sns
import warnings
from sklearn.model_selection import train_test_split,GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics, preprocessing
warnings.filterwarnings(action='ignore') # Turn off the warnings.
# %matplotlib inline
# #### 1.1. Read in data:
# The data and explanation can be found [here](https://www.kaggle.com/c/titanic/data) (requires sign in).
# +
# Go to the directory where the data file is located.
# os.chdir(r'~~') # Please, replace the path with your own.
# -
df = pd.read_csv('data_titanic.csv', header='infer')
df.shape
df.head(3)
# #### 1.2. Missing value processing:
# Check for the missing values.
df.isnull().sum(axis=0)
# Fill the missing values in the Age variable.
n = df.shape[0]
Age = [] # A temporary list.
for i in range(n):
if np.isnan(df.Age[i]):
if ('Mr' in df.Name[i]) or ('Mrs' in df.Name[i]) :
Age.append(30) # If Mr. or Mrs. in the name, then fill with 30.
else:
Age.append(10) # Likely a child. So, fill with 10.
else:
Age.append(df.Age[i])
df.Age = pd.Series(Age)
# We will drop some columns.
df = df.drop(columns = ['PassengerId','Name','Ticket','Fare','Cabin'])
df.head(3)
# Delete the rest of missing values.
df=df.dropna(axis=0)
df.shape
df.shape
# #### 1.3. Exploratory data analysis:
# The frequency table of Survived.
sns.countplot('Survived',data=df)
plt.show()
# Survival rate by Age category.
df['AgeCategory'] = pd.qcut(df.Age,4) # Using quantiles cut into 4 intervals.
sns.barplot(x='AgeCategory',y='Survived', ci=None, data=df)
plt.show()
# Survival rate by SibSp category.
sns.barplot(x='SibSp', y='Survived', ci=None, data=df)
plt.show()
# Survival rate by Parch.
sns.barplot(x='Parch', y='Survived', ci=None, data=df)
plt.show()
# Survival rate by Pclass.
sns.barplot(x='Pclass', y='Survived', ci=None, data=df)
plt.show()
# Survival rate by Embarked.
sns.barplot(x='Embarked', y='Survived', ci=None, data=df)
plt.show()
# Survival rate by Sex.
sns.barplot(x='Sex', y='Survived', ci=None, data=df)
plt.show()
# #### 1.4. Feature engineering:
# Convert into dummy variables and then remove the original variables.
df = pd.get_dummies(df.AgeCategory, drop_first=True,prefix='Age').join(df.drop(columns=['Age','AgeCategory']))
df = pd.get_dummies(df.Pclass, drop_first=True,prefix='Pclass').join(df.drop(columns=['Pclass']))
df = pd.get_dummies(df.SibSp, drop_first=True,prefix='SibSp').join(df.drop(columns=['SibSp']))
df = pd.get_dummies(df.Parch, drop_first=True,prefix='Parch').join(df.drop(columns=['Parch']))
df = pd.get_dummies(df.Sex, drop_first=True,prefix='Sex').join(df.drop(columns=['Sex']))
df = pd.get_dummies(df.Embarked, drop_first=True,prefix='Embarked').join(df.drop(columns=['Embarked']))
df.head(5)
# Save to an external file.
df.to_csv('data_titanic_2.csv',index=False)
# #### 1.5. KNN train and test:
X = df.drop(columns=['Survived'])
Y = df.Survived
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=1234)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
# KNN with n_neighbours = 5
knn5 = KNeighborsClassifier(n_neighbors=5)
knn5.fit(X_train, Y_train);
Y_pred = knn5.predict(X_test)
print(metrics.confusion_matrix(Y_test,Y_pred))
print("------------------------")
print( "Accuracy : " + str(np.round(metrics.accuracy_score(Y_test,Y_pred),3)))
# KNN with n_neighbours = 100
knn100 = KNeighborsClassifier(n_neighbors=100)
knn100.fit(X_train, Y_train);
Y_pred = knn100.predict(X_test)
print(metrics.confusion_matrix(Y_test,Y_pred))
print("------------------------")
print( "Accuracy : " + str(np.round(metrics.accuracy_score(Y_test,Y_pred),3)))
# #### 1.6. KNN bias-Variance tradeoff as function of *k*:
accs = []
k_grid = range(1,100,1)
for k in k_grid:
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
accs.append(metrics.accuracy_score(Y_test,Y_pred))
# Visualize.
plt.scatter(k_grid,accs,c='red',marker='o',s=10,alpha=0.6)
plt.xlabel('k')
plt.ylabel('Accuracy')
plt.title('Accuracy vs k')
plt.show()
# #### 1.7. KNN hyperparameter optimization:
# Parameter grid.
k_grid = np.arange(1,51,1)
parameters = {'n_neighbors':k_grid}
# Optimize the k.
gridCV = GridSearchCV(KNeighborsClassifier(), parameters, cv=10, n_jobs = -1) # "n_jobs = -1" means "use all the CPU cores".
gridCV.fit(X_train, Y_train)
best_k = gridCV.best_params_['n_neighbors']
print("Best k : " + str(best_k))
# Test with the best k.
KNN_best = KNeighborsClassifier(n_neighbors=best_k)
KNN_best.fit(X_train, Y_train)
Y_pred = KNN_best.predict(X_test)
print( "Best Accuracy : " + str(np.round(metrics.accuracy_score(Y_test,Y_pred),3)))
| SIC_AI_Coding_Exercises/SIC_AI_Chapter_06_Coding_Exercises/ex_0503.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # H2O workflow
# ## Imports
# +
import sys
import os
sys.path.append(os.path.split(os.path.split(os.getcwd())[0])[0])
config_filepath = os.path.join(os.getcwd(),"config/fit_config_h2o.json")
notebook_filepath = os.path.join(os.getcwd(),"fit.ipynb")
import uuid
import uuid
import json
import datetime
import getpass
from mercury_ml.common import tasks
from mercury_ml.common import utils
from mercury_ml.common import containers as common_containers
from mercury_ml.h2o import containers as h2o_containers
# +
#For testing purposes only!
if os.path.isdir("./example_results"):
import shutil
shutil.rmtree("./example_results")
# -
# ## Helpers
#
# These functions will help with the flow of this particular notebook
# +
def print_data_bunch(data_bunch):
for data_set_name, data_set in data_bunch.__dict__.items():
print("{} <{}>".format(data_set_name, type(data_set).__name__))
for data_wrapper_name, data_wrapper in data_set.__dict__.items():
print(" {} <{}>".format(data_wrapper_name, type(data_wrapper).__name__))
print()
def maybe_transform(data_bunch, pre_execution_parameters):
if pre_execution_parameters:
return data_bunch.transform(**pre_execution_parameters)
else:
return data_bunch
def print_dict(d):
print(json.dumps(d, indent=2))
def get_installed_packages():
import pip
try:
from pip._internal.operations import freeze
except ImportError: # pip < 10.0
from pip.operations import freeze
packages = []
for p in freeze.freeze():
packages.append(p)
return packages
# -
# ## Config
# #### Load config
config = utils.load_referenced_json_config(config_filepath)
print_dict(config)
# #### Set model_id
session_id = str(uuid.uuid4().hex)
print(session_id)
# #### Update config
#
# The function `utils.recursively_update_config(config, string_formatting_dict)` allows us to use string formatting to replace placeholder strings with acctual values.
#
# for example:
#
# ```python
# >>> config = {"some_value": "some_string_{some_placeholder}"}
# >>> string_formatting_dict = {"some_placeholder": "ABC"}
# >>> utils.recursively_update_config(config, string_formatting_dict)
# >>> print(config)
# {"some_value": "some_string_ABC}"}
# ```
#
#
# First update `config["meta_info"]`
utils.recursively_update_config(config["meta_info"], {
"session_id": session_id,
"model_purpose": config["meta_info"]["model_purpose"],
"config_filepath": config_filepath,
"notebook_filepath": notebook_filepath
})
# Then use `config["meta_info"]` to update the rest.
utils.recursively_update_config(config, config["meta_info"])
# ## Session
#
# Create a small dictionary with the session information. This will later be stored as a dictionary artifact with all the key run infomration
session = {
"time_stamp": datetime.datetime.utcnow().isoformat()[:-3] + "Z",
"run_by": getpass.getuser(),
"meta_info": config["meta_info"],
"installed_packages": get_installed_packages()
}
print("Session info")
print(json.dumps(session, indent=2))
# ## Initialization
#
# These are the functions or classes we will be using in this workflow. We get / instatiate them all at the beginning using parameters under `config["initialization"]`.
#
# Here we use mainly use `getattr` to fetch them via the `containers` module based on a string input in the config file. Providers could however also be fetched directly. The following three methods are all equivalent:
#
# ```python
# # 1. (what we are using in this notebook)
# from mercury_ml.common import containers as common_containers
# source_reader=getattr(common_containers.SourceReaders, "read_pandas_data_set")
#
# # 2.
# from mercury_ml.common import containers as common_containers
# source_reader=common_containers.SourceReaders.read_pandas_data_set
#
# # 3.
# from mercury_ml.common.providers.source_reading import read_pandas_data_set
# source_reader=read_pandas_data_set
# ```
#
# ### Helpers
#
# These helper functions will create instantiate class providers (`create_and_log`) or fetch function providers (`get_and_log`) based on the parameters provided
# +
def create_and_log(container, class_name, params):
provider = getattr(container, class_name)(**params)
print("{}.{}".format(container.__name__, class_name))
print("params: ", json.dumps(params, indent=2))
return provider
def get_and_log(container, function_name):
provider = getattr(container, function_name)
print("{}.{}".format(container.__name__, function_name))
return provider
# -
# ### Common
#
# These are providers that are universally relevant, regardless of which Machine Learning engine is used.
# a function for storing dictionary artifacts to local disk
store_artifact_locally = get_and_log(common_containers.LocalArtifactStorers,
config["init"]["store_artifact_locally"]["name"])
# a function for storing data-frame-like artifacts to local disk
store_prediction_artifact_locally = get_and_log(common_containers.LocalArtifactStorers,
config["init"]["store_prediction_artifact_locally"]["name"])
# a function for copy artifacts from local disk to a remote store
copy_from_local_to_remote = get_and_log(common_containers.ArtifactCopiers, config["init"]["copy_from_local_to_remote"]["name"])
# a function for reading source data. When called it will return an instance of type DataBunch
read_source_data_set = get_and_log(common_containers.SourceReaders, config["init"]["read_source_data"]["name"])
# a dictionary of functions that calculate custom metrics
custom_metrics_dict = {
custom_metric_name: get_and_log(common_containers.CustomMetrics, custom_metric_name) for custom_metric_name in config["init"]["custom_metrics"]["names"]
}
# a dictionary of functions that calculate custom label metrics
custom_label_metrics_dict = {
custom_label_metric_name: get_and_log(common_containers.CustomLabelMetrics, custom_label_metric_name) for custom_label_metric_name in config["init"]["custom_label_metrics"]["names"]
}
# ### H2O
# a function to initiate the h2o (or h2o sparkling) session
initiate_session = get_and_log(h2o_containers.SessionInitiators, config["init"]["initiate_session"]["name"])
# fetch a built-in h2o model
model = get_and_log(h2o_containers.ModelDefinitions,
config["init"]["model_definition"]["name"])(**config["init"]["model_definition"]["params"])
# a function that fits an h2o model
fit = get_and_log(h2o_containers.ModelFitters, config["init"]["fit"]["name"])
# a dictionary of functions that save h2o models in various formats
save_model_dict = {
save_model_function_name: get_and_log(h2o_containers.ModelSavers, save_model_function_name) for save_model_function_name in config["init"]["save_model"]["names"]
}
# a function that generates metrics from an h2o model
evaluate = get_and_log(h2o_containers.ModelEvaluators, config["init"]["evaluate"]["name"])
# a function that generates metrics from an h2o model
evaluate_threshold_metrics = get_and_log(h2o_containers.ModelEvaluators, config["init"]["evaluate_threshold_metrics"]["name"])
# a function that produces predictions using an h2o model
predict = get_and_log(h2o_containers.PredictionFunctions, config["init"]["predict"]["name"])
# ## Execution
#
# Here we use the providers defined above to execute various tasks
# ### Save (formatted) config
tasks.store_artifacts(store_artifact_locally, copy_from_local_to_remote, config,
**config["exec"]["save_formatted_config"]["params"])
print("Config stored with following parameters")
print_dict(config["exec"]["save_formatted_config"]["params"])
# ### Save Session
# ##### Save session info
tasks.store_artifacts(store_artifact_locally, copy_from_local_to_remote, session,
**config["exec"]["save_session"]["params"])
print("Session dictionary stored with following parameters")
print_dict(config["exec"]["save_session"]["params"])
# ##### Save session artifacts
for artifact_dict in config["exec"]["save_session_artifacts"]["artifacts"]:
artifact_dir=os.path.dirname(artifact_dict["artifact_path"])
artifact_filename=os.path.basename(artifact_dict["artifact_path"])
# save to local artifact store
common_containers.ArtifactCopiers.copy_from_disk_to_disk(
source_dir=artifact_dir,
target_dir=artifact_dict["local_dir"],
filename=artifact_filename,
overwrite=False,
delete_source=False)
# copy to remote artifact store
copy_from_local_to_remote(source_dir=artifact_dict["local_dir"],
target_dir=artifact_dict["remote_dir"],
filename=artifact_filename,
overwrite=False,
delete_source=False)
print("Session artifacts stored with following parameters")
print_dict(config["exec"]["save_session_artifacts"])
# ### Start H2O
initiate_session(**config["exec"]["initiate_session"]["params"])
# ### Get source data
data_bunch_source = tasks.read_train_valid_test_data_bunch(read_source_data_set,**config["exec"]["read_source_data"]["params"] )
print("Source data read using following parameters: \n")
print_dict(config["exec"]["read_source_data"]["params"])
print("Read data_bunch consists of: \n")
print_data_bunch(data_bunch_source)
# ### Fit model
# ##### Transform data
# +
data_bunch_fit = maybe_transform(data_bunch_source, config["exec"]["fit"].get("pre_execution_transformation"))
print("Data transformed with following parameters: \n")
print_dict(config["exec"]["fit"].get("pre_execution_transformation"))
# -
print("Transformed data_bunch consists of: \n")
print_data_bunch(data_bunch_fit)
# ##### Perform fitting
model = fit(model = model,
data_bunch = data_bunch_fit,
**config["exec"]["fit"]["params"])
# ### Save model
for model_format, save_model in save_model_dict.items():
tasks.store_model(save_model=save_model,
model=model,
copy_from_local_to_remote = copy_from_local_to_remote,
**config["exec"]["save_model"][model_format]
)
print("Model saved with following paramters: \n")
print_dict(config["exec"]["save_model"])
# ### Evaluate metrics
# ##### Transform data
# +
data_bunch_metrics = maybe_transform(data_bunch_fit, config["exec"]["evaluate"].get("pre_execution_transformation"))
print("Data transformed with following parameters: \n")
print_dict(config["exec"]["evaluate"].get("pre_execution_transformation"))
# -
print("Transformed data_bunch consists of: \n")
print_data_bunch(data_bunch_metrics)
# ##### Calculate metrics
metrics = {}
for data_set_name in config["exec"]["evaluate"]["data_set_names"]:
data_set = getattr(data_bunch_metrics, data_set_name)
metrics[data_set_name] = evaluate(model, data_set, data_set_name, **config["exec"]["evaluate"]["params"])
print("Resulting metrics: \n")
print_dict(metrics)
# ##### Calculate metrics
threshold_metrics = {}
for data_set_name in config["exec"]["evaluate"]["data_set_names"]:
data_set = getattr(data_bunch_metrics, data_set_name)
threshold_metrics[data_set_name] = evaluate_threshold_metrics(model, data_set, data_set_name,
**config["exec"]["evaluate_threshold_metrics"]["params"])
print("Resulting metrics: \n")
print_dict(threshold_metrics)
# ### Save metrics
for data_set_name, params in config["exec"]["save_metrics"]["data_sets"].items():
tasks.store_artifacts(store_artifact_locally, copy_from_local_to_remote, metrics[data_set_name], **params)
for data_set_name, params in config["exec"]["save_threshold_metrics"]["data_sets"].items():
tasks.store_artifacts(store_artifact_locally, copy_from_local_to_remote, metrics[data_set_name], **params)
# ### Predict
# ##### Transform data
# +
data_bunch_predict = maybe_transform(data_bunch_metrics, config["exec"]["predict"].get("pre_execution_transformation"))
print("Data transformed with following parameters: \n")
print_dict(config["exec"]["predict"].get("pre_execution_transformation"))
# -
print("Transformed data_bunch consists of: \n")
print_data_bunch(data_bunch_predict)
# ##### Perform prediction
for data_set_name in config["exec"]["predict"]["data_set_names"]:
data_set = getattr(data_bunch_predict, data_set_name)
data_set.predictions = predict(model=model, data_set=data_set, **config["exec"]["predict"]["params"])
print("Data predicted with following parameters: \n")
print_dict(config["exec"]["predict"].get("params"))
data_bunch_predict.test.predictions.underlying
# ### Evaluate custom metrics
# ##### Transform data
data_bunch_custom_metrics = maybe_transform(data_bunch_predict,
config["exec"]["evaluate_custom_metrics"].get("pre_execution_transformation"))
print("Data transformed with following parameters: \n")
print_dict(config["exec"]["evaluate_custom_metrics"].get("pre_execution_transformation"))
print("Transformed data_bunch consists of: \n")
print_data_bunch(data_bunch_custom_metrics)
# ##### Calculate custom metrics
#
custom_metrics = {}
for data_set_name in config["exec"]["evaluate_custom_metrics"]["data_set_names"]:
data_set = getattr(data_bunch_custom_metrics, data_set_name)
custom_metrics[data_set_name] = tasks.evaluate_metrics(data_set, custom_metrics_dict)
print("Resulting custom metrics: \n")
print_dict(custom_metrics)
# ##### Calculate custom label metrics
custom_label_metrics = {}
for data_set_name in config["exec"]["evaluate_custom_label_metrics"]["data_set_names"]:
data_set = getattr(data_bunch_custom_metrics, data_set_name)
custom_label_metrics[data_set_name] = tasks.evaluate_label_metrics(data_set, custom_label_metrics_dict)
print("Resulting custom label metrics: \n")
print_dict(custom_label_metrics)
for data_set_name, params in config["exec"]["save_custom_metrics"]["data_sets"].items():
tasks.store_artifacts(store_artifact_locally, copy_from_local_to_remote,
custom_metrics[data_set_name], **params)
print("Custom metrics saved with following parameters: \n")
print_dict(config["exec"]["save_custom_metrics"])
for data_set_name, params in config["exec"]["save_custom_label_metrics"]["data_sets"].items():
tasks.store_artifacts(store_artifact_locally, copy_from_local_to_remote,
custom_label_metrics[data_set_name], **params)
print("Custom label metrics saved with following parameters: \n")
print_dict(config["exec"]["save_custom_label_metrics"])
# ### Prepare predictions for storage
# ##### Transform data
data_bunch_prediction_preparation = maybe_transform(data_bunch_predict,
config["exec"]["prepare_predictions_for_storage"].get("pre_execution_transformation"))
print("Transformed data_bunch consists of: \n")
print_data_bunch(data_bunch_prediction_preparation)
# ##### Prepare predictions and targets
for data_set_name in config["exec"]["prepare_predictions_for_storage"]["data_set_names"]:
data_set = getattr(data_bunch_prediction_preparation, data_set_name)
data_set.add_data_wrapper_via_concatenate(**config["exec"]["prepare_predictions_for_storage"]["params"]["predictions"])
data_set.add_data_wrapper_via_concatenate(**config["exec"]["prepare_predictions_for_storage"]["params"]["targets"])
print_data_bunch(data_bunch_prediction_preparation)
# ### Save predictions
# ##### Transform data
data_bunch_prediction_storage = maybe_transform(data_bunch_prediction_preparation,
config["exec"]["save_predictions"].get("pre_execution_transformation"))
print("Transformed data_bunch consists of: \n")
print_data_bunch(data_bunch_prediction_storage)
# ##### Save predictions
for data_set_name, data_set_params in config["exec"]["save_predictions"]["data_sets"].items():
data_set = getattr(data_bunch_prediction_storage, data_set_name)
data_wrapper = getattr(data_set, data_set_params["data_wrapper_name"])
data_to_store = data_wrapper.underlying
tasks.store_artifacts(store_prediction_artifact_locally, copy_from_local_to_remote,
data_to_store, **data_set_params["params"])
print("Predictions saved with following parameters: \n")
print_dict(config["exec"]["save_predictions"])
# ##### Save targets
for data_set_name, data_set_params in config["exec"]["save_targets"]["data_sets"].items():
data_set = getattr(data_bunch_prediction_storage, data_set_name)
data_wrapper = getattr(data_set, data_set_params["data_wrapper_name"])
data_to_store = data_wrapper.underlying
tasks.store_artifacts(store_prediction_artifact_locally, copy_from_local_to_remote,
data_to_store, **data_set_params["params"])
print("Targets saved with following parameters: \n")
print_dict(config["exec"]["save_targets"])
| examples/h2o/fit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Use `datetime.strptime()` to extract date information
# +
from datetime import datetime
date = '13-Sep-39'
day = datetime.strptime(date, '%d-%b-%y').day
month = datetime.strptime(date, '%d-%b-%y').month
year = datetime.strptime(date, '%d-%b-%y').year
print("The date is {}th, the month number is {} and the year is {}".format(day, month, year))
| python/general/Datetime.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1
import pandas as pd
df = pd.read_csv('/ihme/homes/edwin100/notebooks/repos/hw1/data/Fremont_Bridge_Hourly_Bicycle_Counts_by_Month_October_2012_to_present.csv')
df.head()
# # 2
# +
df['total'] = df.apply(lambda row: row['Fremont Bridge East Sidewalk'] + row['Fremont Bridge West Sidewalk'], axis=1)
df['hour_of_day'] = df['Date'].str.slice(start=11, stop=13) + df['Date'].str.slice(start=20)
df['year'] = df['Date'].str.slice(start=6, stop=10).astype(int)
# -
# # 3
df_2016 = df[df['year']==2016]
df_2016.head()
# # 4
# +
import matplotlib.pyplot as plt
# %matplotlib inline
plt.figure(figsize=(20, 8))
plt.scatter(df_2016['hour_of_day'], df_2016['total'])
plt.title('Total bicycle count by hour of day')
plt.xlabel('Hour of day')
plt.ylabel('Total bicycle count')
# To reverse x axis
ax = plt.gca()
ax.invert_xaxis()
plt.show()
# -
df_2016.describe()
# # 5
# Busiest hour of the day on average is observed to be 5 PM.
df_2016.groupby(['hour_of_day']).agg({'total':'mean'}).sort_values(by = 'total', ascending=False).index.values[0]
| analysis/hw1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Using generalized tracers and power spectra
# This example showcases how to use the generalized tracers and 2D power spectra implemented in CCL
import numpy as np
import pylab as plt
import pyccl as ccl
# %matplotlib inline
# ## Preliminaries
# Let's just begin by setting up a cosmology and a couple of redshift distributions and bias functions
# +
# Cosmology
cosmo = ccl.Cosmology(Omega_c=0.27, Omega_b=0.045, h=0.67, A_s=2.1e-9, n_s=0.96)
# Redshift-dependent functions
z = np.linspace(0,1.2,1024)
# Redshift distributions
nz1 = np.exp(-((z-0.5)/0.05)**2/2)
nz2 = np.exp(-((z-0.65)/0.05)**2/2)
# Bias
bz = 0.95/ccl.growth_factor(cosmo,1./(1+z))
# Magnification bias
sz = np.ones_like(z)
# Intrinsic alignment amplitude
az = -0.004 * np.ones_like(z)
# -
# ## Standard tracers
# There are a set of standard tracers with built-in constructors.
# First, there is number counts tracers (a.k.a. galaxy clustering). These include the standard contribution from the matter overdensity multiplied by a linear, scale-independent bias, redshift-space distortions and magnification. You can turn off any of them.
# This tracer will only include the density contribution
gc_d = ccl.NumberCountsTracer(cosmo, has_rsd=False, dndz=(z,nz1), bias=(z,bz), mag_bias=None)
# If you pass bias=None, then the tracer will not have a density term. E.g. this one is an RSD-only tracer.
gc_r = ccl.NumberCountsTracer(cosmo, has_rsd=True, dndz=(z,nz1), bias=None, mag_bias=None)
# Now a magnification-only tracer. To turn on magnification just pass a non-None mag_bias.
gc_m = ccl.NumberCountsTracer(cosmo, has_rsd=False, dndz=(z,nz1), bias=None, mag_bias=(z,sz))
# You can of course create a tracer that includes all of these terms
gc_a = ccl.NumberCountsTracer(cosmo, has_rsd=True, dndz=(z,nz1), bias=(z,bz), mag_bias=(z,sz))
# There are also standard weak lensing tracers with two terms: the cosmic shear term due to lensing and intrinsic alignments within the L-NLA model with a scale-independent IA amplitude.
# This tracer will only include the lensing shear contribution (by setting the ia_bias to None)
wl_s = ccl.WeakLensingTracer(cosmo, dndz=(z,nz2), has_shear=True, ia_bias=None)
# This tracer will only include IAs
wl_i = ccl.WeakLensingTracer(cosmo, dndz=(z,nz2), has_shear=False, ia_bias=(z,az))
# And you can have the full monty
wl_a = ccl.WeakLensingTracer(cosmo, dndz=(z,nz2), has_shear=True, ia_bias=(z,az))
# Finally, we also have CMB lensing tracers, defined by a single source redshift.
# We put the source at z=1100, but you can put it at any other redshift if you want to!
# It won't be CMB lensing though.
cmbl = ccl.CMBLensingTracer(cosmo, z_source=1100.)
# ## Custom tracers
#
# In general, tracers are things you can cross-correlate and get a power spectrum from. In the most general case, the angular power spectrum between two tracers is given by
#
# \begin{equation}
# C^{\alpha\beta}_\ell=\frac{2}{\pi}\int d\chi_1\,d\chi_2\,dk\,k^2 P_{\alpha\beta}(k,\chi_1,\chi_2)\,\Delta^\alpha_\ell(k,\chi_1)\,\Delta^\beta_\ell(k,\chi_2).
# \end{equation},
#
# where $P_{\alpha\beta}$ is a generalized power spectrum (see below), and $\Delta^\alpha_\ell(k,\chi)$ is a sum over different contributions associated to tracer $\alpha$, where every contribution takes the form:
# \begin{equation}
# \Delta^\alpha_\ell(k,\chi)=f^\alpha_\ell\,W_\alpha(\chi)\,T_\alpha(k,\chi)\,j^{(n_\alpha)}_\ell(k\chi).
# \end{equation}
#
# Here, $f^\alpha_\ell$ is an **$\ell$-dependent prefactor**, usually associated with angular derivatives, $W_\alpha(\chi)$ is the **radial kernel**, dependent only on redshift/distance, $T_\alpha(k,\chi)$ is the **transfer function**, dependent on both $k$ and $z/\chi$, and $j^{(n)}_\ell(x)$ is a generalized versions of the **spherical Bessel functions**, associated with radial derivatives or inverse Laplacians.
#
# Generalized tracers can be created as instances of the `Tracer` class. This is done in two steps:
# 1. First, generate an empty tracer:
# 2. Then, add a contribution to this tracer. A contribution is defined by the 4 parameters above:
# - The **$\ell$-dependent prefactor** is defined by the argument `der_angles`. This can be 0, 1 or 2. 0 means no prefactor, 1 means a prefactor $\ell(\ell+1)$ and 2 means a prefactor $\sqrt{(\ell+2)!/(\ell-2)!}$.
# - The form of the **spherical Bessel function** is determined by the argument `der_bessel`, which can be -1, 0, 1 or 2. For 0, 1 or 2, this determines the derivative of the Bessel function that this contribution uses. For `der_bessel=-1`, $j^{(-1)}_\ell(x)=j_\ell(x)/x$.
# - The **radial kernel** is determined by two arrays corresponding to $\chi$ and $W(\chi)$, wrapped into a tuple and passed as `kernel`. If `kernel=None`, CCL assumes that the radial kernel is 1 everywhere for this tracer.
# - The **transfer function** can be passed in two different forms. If your transfer function is factorizable, then use two parameters: `transfer_k` and `transfer_a`, corresponding to the $k$-dependent and redshift-dependent factors respectively. `transfer_k` should be a tuple of two arrays $(\ln(k),K(k))$, and `transfer_a` should be $(a,A(a))$, where $a$ is the scale factor (which should be monotonically increasing!). The total transfer function is then given by $T(a,k)=K(k) A(a)$. If your transfer function is not factorizable, then use the argument `transfer_ka`. This should be a tuple of three arrays $(a,\ln(k),T(a,k))$, where $a$ is an array of scale factors in ascending order, $\ln(k)$ is an array containing the logarithms of wavenumbers, and $T(a,k)$ is a 2D array containing the values of the transfer function at the corresponding values of $a$ and $k$. If $a$ and $\ln(k)$ have sizes $n_a$ and $n_k$ respectively, then $T(a,k)$ should have shape $(n_a,n_k)$. If `transfer_ka`, `transfer_a` or `transfer_k` are `None`, CCL assumes that the corresponding quantities are 1 everywhere.
#
# Then, keep adding contributions until you're done.
#
# Let's try this out by building a number-counts tracer that contains a linear bias, RSDs and magnification. We could do this easily with the built-in constructors above, but it's always satisfying when you build something with your own hands.
# First, initialize empty tracer
gc_custom = ccl.Tracer()
# Now we need to generate a radial kernel and a transfer function.
# To generate the radial kernel we can use a convenience function that exists in CCL:
kernel = ccl.get_density_kernel(cosmo, (z,nz1))
# Let's plot it out of curiosity:
plt.plot(kernel[0],kernel[1])
plt.xlabel('$\\chi\\,[{\\rm Mpc}]$',fontsize=14)
plt.ylabel('$W(\\chi)$',fontsize=14)
plt.show()
# We also need a transfer function.
# Because this tracer has a simple linear bias, the transfer function
# is only z-dependent and not k-dependent. So it's factorizable.
# The scale factor values need to be increasing, so we'll reverse
# The order of all arrays:
sf = (1./(1+z))[::-1]
transfer_a = (sf,bz[::-1])
# Now we're ready to add the density contribution to the tracer:
gc_custom.add_tracer(cosmo, kernel=kernel, transfer_a=transfer_a)
# Now let's add RSDs.
# In this case the transfer function is just given by the growth rate.
# Also, RSDs use the second derivative of the Bessel function
transfer_a_rsd = (sf, -ccl.growth_rate(cosmo,sf))
gc_custom.add_tracer(cosmo, kernel=kernel, transfer_a=transfer_a_rsd, der_bessel=2)
# And now magnification.
# Again, there is a built-in function to compute the radial kernel for
# lensing observables.
# For magnification, der_bessel=-1 and der_angles=1.
# The transfer function is just 1 everywhere
chis, w = ccl.get_lensing_kernel(cosmo, (z,nz1), mag_bias=(z,sz))
kernel_m = (chis, -2*w)
gc_custom.add_tracer(cosmo, kernel=kernel_m, der_bessel=-1, der_angles=1)
# +
# Now let's check that both tracers, the one we just
# created and the original one created using the built-in constructors
# give exactly the same result. To do that, let's compute their
# angular power spectra
ells = np.geomspace(2,1000,20)
cl_gc_a = ccl.angular_cl(cosmo, gc_a, gc_a, ells)
cl_gc_custom = ccl.angular_cl(cosmo, gc_custom, gc_custom, ells)
# Let's plot the result
plt.plot(ells, 1E4*cl_gc_a, 'r-', label='built-in tracer')
plt.plot(ells, 1E4*cl_gc_custom, 'k--', label='custom tracer')
plt.xscale('log')
plt.xlabel('$\\ell$', fontsize=14)
plt.ylabel('$10^3\\times C_\\ell$', fontsize=14)
plt.legend(loc='upper right', fontsize=12, frameon=False)
plt.show()
# -
# OK, but so far we haven't done anything that we couldn't have done with the built-in tracers. Let's have some fun!
#
# How about adding a contribution to the custom tracer corresponding to the effect of non-Gaussianity. Non-Gaussianity generates a scale-dependent bias on biased tracers that becomes relevant on large scales. Using Eq. 9 of arXiv:0710.4560, the bias receives an additive term of the form:
# \begin{equation}
# \Delta b = (b-1)f_{\rm NL}\frac{3\,\delta_c\,\Omega_M\,H_0^2}{D(a)\,k^2}.
# \end{equation}
# So we can include this effect as an additional contribution to the custom tracer that looks like a standard galaxy clustering term with this extra bias. Since the corresponding transfer function is still factorizable, this should be pretty easy:
# +
# Let's create an array of k values and the corresponding
# k-dependent part of the transfer function
ks = np.logspace(-5,2,512)
transfer_k_fnl = (np.log(ks),1./ks**2)
# Now the z-dependent part of the transfer function
fNL = 10
delta_c = 1.686
H0 = (cosmo.cosmo.params.h/ccl.physical_constants.CLIGHT_HMPC)
Omega_M = cosmo.cosmo.params.Omega_c+cosmo.cosmo.params.Omega_b
transfer_a_fnl = (sf, (bz[::-1]-1)*3*fNL*delta_c*Omega_M*H0**2/ccl.growth_factor(cosmo,sf))
# Now let's add that contribution
gc_custom.add_tracer(cosmo, kernel=kernel, transfer_a=transfer_a_fnl,
transfer_k=transfer_k_fnl)
# And let's check the effect in the power spectrum:
cl_gc_custom_fnl = ccl.angular_cl(cosmo, gc_custom, gc_custom, ells)
# Let's plot the result
plt.plot(ells, 1E4*cl_gc_custom, 'r-', label='$f_{\\rm NL}=0$')
plt.plot(ells, 1E4*cl_gc_custom_fnl, 'k--', label='$f_{\\rm NL}$=%d'%(int(fNL)))
plt.xscale('log')
plt.xlabel('$\\ell$', fontsize=14)
plt.ylabel('$10^3\\times C_\\ell$', fontsize=14)
plt.legend(loc='upper right', fontsize=12, frameon=False)
plt.show()
# -
# ## Generalized power spectra
#
# By default, when you call `ccl.angular_cl`, CCL will use the non-linear matter power spectrum in the equation for $C_\ell$ above to compute the angular power spectrum. You can however, pass a more general `Pk2D` object. These objects encapsulate generic power spectra that depend simultaneously on $k$ and $a$. They are versatile and can be created in different ways, but for our purposes here, we will generate them in the same way that we generated the tracer transfer functions (i.e. through arrays of scale factors and wavenumbers). For more information, check out the documentation for this class.
#
# To illustrate the use of `Pk2D` objects, let's read off the matter power spectrum from CCL, include a high-$k$ Gaussian cutoff, and generate a `Pk2D` object from that.
# +
# OK, let's first read off the matter power spectrum:
lpk_array = np.log(np.array([ccl.nonlin_matter_power(cosmo,ks,a) for a in sf]))
# Now let's impose a cutoff at a scale k_cut corresponding to roughly ~10 Mpc
k_cut = 0.4
lpk_array -= (ks/k_cut)**2
# Create a Pk2D object
pk_cut = ccl.Pk2D(a_arr=sf, lk_arr=np.log(ks), pk_arr=lpk_array, is_logp=True)
# -
# OK, now let's check out the impact of the cutoff on the angular power spectrum.
# +
# Compute power spectra with and without cutoff
cl_gc_ncut = ccl.angular_cl(cosmo, gc_d, gc_d, ells)
cl_gc_ycut = ccl.angular_cl(cosmo, gc_d, gc_d, ells, p_of_k_a=pk_cut)
# Plot stuff
plt.plot(ells, 1E4*cl_gc_ncut, 'r-', label='No cut-off')
plt.plot(ells, 1E4*cl_gc_ycut, 'k--', label='Cut-off at $k_* = %.1lf\,{\\rm Mpc}^{-1}$'%(k_cut))
plt.xscale('log')
plt.yscale('log')
plt.xlabel('$\\ell$', fontsize=14)
plt.ylabel('$10^3\\times C_\\ell$', fontsize=14)
plt.legend(loc='lower left', fontsize=12, frameon=False)
plt.show()
# -
| examples/GeneralizedTracers.ipynb |
// -*- coding: utf-8 -*-
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cpp
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: C++14
// language: C++14
// name: xcpp14
// ---
// ## Predicting Salary using Linear Regression
//
// ### Objective
// * We have to predict the salary of an employee given how many years of experience they have.
//
// ### Dataset
// * Salary_Data.csv has 2 columns — “Years of Experience” (feature) and “Salary” (target) for 30 employees in a company
//
// ### Approach
// * So in this example, we will train a Linear Regression model to learn the correlation between the number of years of experience of each employee and their respective salary.
// * Once the model is trained, we will be able to do some sample predictions.
!wget -q https://datasets.mlpack.org/Salary_Data.csv
// +
// Import necessary library header.
#include <mlpack/xeus-cling.hpp>
#include <mlpack/core.hpp>
#include <mlpack/core/data/split_data.hpp>
#include <mlpack/methods/linear_regression/linear_regression.hpp>
#include <cmath>
// +
#define WITHOUT_NUMPY 1
#include "matplotlibcpp.h"
#include "xwidgets/ximage.hpp"
namespace plt = matplotlibcpp;
// -
using namespace mlpack;
using namespace mlpack::regression;
// +
// Load the dataset into armadillo matrix.
arma::mat inputs;
data::Load("Salary_Data.csv", inputs);
// +
// Drop the first row as they represent header.
inputs.shed_col(0);
// +
// Display the first 5 rows of the input data.
std::cout << std::setw(18) << "Years Of Experience" << std::setw(10) << "Salary" << std::endl;
std::cout << inputs.submat(0, 0, inputs.n_rows-1, 5).t() << std::endl;
// +
// Plot the input data.
std::vector<double> x = arma::conv_to<std::vector<double>>::from(inputs.row(0));
std::vector<double> y = arma::conv_to<std::vector<double>>::from(inputs.row(1));
plt::figure_size(800, 800);
plt::scatter(x, y, 12, {{"color","coral"}});
plt::xlabel("Years of Experience");
plt::ylabel("Salary in $");
plt::title("Experience vs. Salary");
plt::save("./scatter.png");
auto img = xw::image_from_file("scatter.png").finalize();
img
// +
// Split the data into features (X) and target (y) variables
// targets are the last row.
arma::Row<size_t> targets = arma::conv_to<arma::Row<size_t>>::from(inputs.row(inputs.n_rows - 1));
// +
// Labels are dropped from the originally loaded data to be used as features.
inputs.shed_row(inputs.n_rows - 1);
// -
// ### Train Test Split
// The dataset has to be split into a training set and a test set.
// This can be done using the `data::Split()` api from mlpack.
// Here the dataset has 30 observations and the `testRatio` is taken as 40% of the total observations.
// This indicates the test set should have 40% * 30 = 12 observations and training test should have 18 observations respectively.
// +
// Split the dataset into train and test sets using mlpack.
arma::mat Xtrain;
arma::mat Xtest;
arma::Row<size_t> Ytrain;
arma::Row<size_t> Ytest;
data::Split(inputs, targets, Xtrain, Xtest, Ytrain, Ytest, 0.4);
// +
// Convert armadillo Rows into rowvec. (Required by mlpacks' LinearRegression API in this format).
arma::rowvec yTrain = arma::conv_to<arma::rowvec>::from(Ytrain);
arma::rowvec yTest = arma::conv_to<arma::rowvec>::from(Ytest);
// -
// ## Linear Model
//
// Regression analysis is the most widely used method of prediction. Linear regression is used when the dataset has a linear correlation and as the name suggests,
// simple linear regression has one independent variable (predictor) and one dependent variable(response).
//
// The simple linear regression equation is represented as $y = a+bx$ where $x$ is the explanatory variable, $y$ is the dependent variable, $b$ is coefficient and $a$ is the intercept
//
// To perform linear regression we'll be using `LinearRegression()` api from mlpack.
// +
// Create and Train Linear Regression model.
regression::LinearRegression lr(Xtrain, yTrain, 0.5);
// +
// Make predictions for test data points.
arma::rowvec yPreds;
lr.Predict(Xtest, yPreds);
// +
// Convert armadillo vectors and matrices to vector for plotting purpose.
std::vector<double> XtestPlot = arma::conv_to<std::vector<double>>::from(Xtest);
std::vector<double> yTestPlot = arma::conv_to<std::vector<double>>::from(yTest);
std::vector<double> yPredsPlot = arma::conv_to<std::vector<double>>::from(yPreds);
// +
// Visualize Predicted datapoints.
plt::figure_size(800, 800);
plt::scatter(XtestPlot, yTestPlot, 12, {{"color", "coral"}});
plt::plot(XtestPlot,yPredsPlot);
plt::xlabel("Years of Experience");
plt::ylabel("Salary in $");
plt::title("Predicted Experience vs. Salary");
plt::save("./scatter1.png");
auto img = xw::image_from_file("scatter1.png").finalize();
img
// -
// Test data is visualized with `XtestPlot` and `yPredsPlot`, the coral points indicates the data points and the blue line indicates the regression line or best fit line.
// ## Evaluation Metrics for Regression model
//
// In the Previous cell we have visualized our model performance by plotting the best fit line. Now we will use various evaluation metrics to understand how well our model has performed.
//
// * Mean Absolute Error (MAE) is the sum of absolute differences between actual and predicted values, without considering the direction.
// $$ MAE = \frac{\sum_{i=1}^n\lvert y_{i} - \hat{y_{i}}\rvert} {n} $$
// * Mean Squared Error (MSE) is calculated as the mean or average of the squared differences between predicted and expected target values in a dataset, a lower value is better
// $$ MSE = \frac {1}{n} \sum_{i=1}^n (y_{i} - \hat{y_{i}})^2 $$
// * Root Mean Squared Error (RMSE), Square root of MSE yields root mean square error (RMSE) it indicates the spread of the residual errors. It is always positive, and a lower value indicates better performance.
// $$ RMSE = \sqrt{\frac {1}{n} \sum_{i=1}^n (y_{i} - \hat{y_{i}})^2} $$
// +
// Model evaluation metrics.
std::cout << "Mean Absolute Error: " << arma::mean(arma::abs(yPreds - yTest)) << std::endl;
std::cout << "Mean Squared Error: " << arma::mean(arma::pow(yPreds - yTest,2)) << std::endl;
std::cout << "Root Mean Squared Error: " << sqrt(arma::mean(arma::pow(yPreds - yTest,2))) << std::endl;
// -
// From the above metrics we can notice that our model MAE is ~5K, which is relatively small compared to our average salary of $76003, from this we can conclude our model is resonably good fit.
| salary_prediction_with_linear_regression/salary-prediction-linear-regression-cpp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/HighCWu/anime_biggan_toy/blob/main/colab/paddle_anime_biggan_for_discriminator_converter.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="AaUYHr7V0tlG"
# ## Save weights
# + id="GnZqvQSZYhY4"
# !cp drive/My\ Drive/anime-biggan-256px-run39-607250 ./ -r
# + id="_qgqKavumqOc"
import os
os.makedirs('images', exist_ok=True)
filelist = [
'512px/0999/999999.jpg',
'512px/0999/999.jpg',
'512px/0999/998999.jpg',
'512px/0999/997999.jpg'
]
for i in range(4):
path = filelist[-i-1]
print(f'Rsync image from rsync://172.16.17.32:873/danbooru2019/{path} to directory "images"')
# !rsync rsync://172.16.17.32:873/danbooru2019/$path ./images
# + id="gKdG2e3YvIn5"
import glob
import numpy as np
from PIL import Image
imgs_path = glob.glob('./images/*.jpg')
imgs = []
for path in imgs_path:
img = (np.asarray(Image.open(path).crop([127,127,127+256,127+256]))[None,...]/255.0).astype('float32')
imgs.append(img)
imgs = np.concatenate(imgs, 0)
# + id="LKdD7uz59HJ3"
import os
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import tensorflow_hub as hub
module_path = os.path.join('anime-biggan-256px-run39-607250', "tfhub")
tf.reset_default_graph()
module = hub.Module(module_path,
name="disc_module", tags={"disc", "bsNone"})
print('Loaded BigGAN module from:', module_path)
initializer = tf.global_variables_initializer()
sess = tf.Session()
sess.run(initializer)
batch_size = 4
images = tf.placeholder(shape=[batch_size, 256, 256, 3], dtype=tf.float32) # noise sample
labels = tf.random.uniform([batch_size], maxval=1000, dtype=tf.int32, seed=0)
inputs = dict(images=images, labels=labels)
prediction = module(inputs, as_dict=True)["prediction"]
# + id="3qX6oSq661Sq"
for tensor in [tensor for op in sess.graph.get_operations() for tensor in op.values()]:
if 'disc_module_apply_default/discriminator' in tensor.name:
print(tensor.name, tensor.shape)
# + id="psuukclucWZL"
var_list = []
for var in tf.global_variables():
val = sess.run(var)
var_list.append([var.name, val])
for weights in var_list:
print(weights[0], weights[1].shape)
import pickle
f = open('tf_discriminator.pkl', 'wb')
pickle.dump(var_list, f)
f.close()
# + id="eCeRAAolojmm"
tensors_name_con = [
'conv1/add:0', 'conv2/add:0', 'conv_shortcut/add:0',
'conv2d_theta/Conv2D:0', 'conv2d_phi/Conv2D:0', 'conv2d_g/Conv2D:0', 'conv2d_attn_g/Conv2D:0',
'final_fc/add:0', 'embedding_fc/MatMul_4:0'
]
import collections
tensor_dict = collections.OrderedDict()
tensor_dict['images'] = images
tensor_dict['labels'] = labels
for tensor in [tensor for op in sess.graph.get_operations() for tensor in op.values()]:
if 'disc_module_apply_default/discriminator' in tensor.name:
for name_con in tensors_name_con:
if name_con in tensor.name:
tensor_dict[tensor.name] = tensor
break
tensor_dict['prediction'] = prediction
# for name, tensor in tensor_dict.items():
# print(name, tensor)
ret = sess.run(tensor_dict, feed_dict={images:imgs})
for name, value in ret.items():
print(name, value.shape)
import pickle
f = open('tf_tensor_samples.pkl', 'wb')
pickle.dump(ret, f)
f.close()
# + [markdown] id="H3eagKqD0xP7"
# ## Convert weights
# You may want to restart the kernel to release gpu memory after generating some samples use TF.
# + id="OlxRCG7jS_Tu"
# !pip install paddlepaddle-gpu==1.8.2.post107
# + id="CHtuKUzN1HwD"
import numpy as np
import paddle.fluid as fluid
from paddle.fluid import layers, dygraph as dg
from paddle.fluid.initializer import Normal, Constant, Uniform
import collections
tensors_name_con = [ 'images','labels',
'conv1/add:0', 'conv2/add:0', 'conv_shortcut/add:0',
'conv2d_theta/Conv2D:0', 'conv2d_phi/Conv2D:0', 'conv2d_g/Conv2D:0', 'conv2d_attn_g/Conv2D:0',
'final_fc/add:0', 'embedding_fc/MatMul_4:0',
'prediction'
]
gt = collections.OrderedDict()
for n in tensors_name_con:
gt[n] = []
def unpool(value):
"""Unpooling operation.
N-dimensional version of the unpooling operation from
https://www.robots.ox.ac.uk/~vgg/rg/papers/Dosovitskiy_Learning_to_Generate_2015_CVPR_paper.pdf
Taken from: https://github.com/tensorflow/tensorflow/issues/2169
Args:
value: a Tensor of shape [b, d0, d1, ..., dn, ch]
name: name of the op
Returns:
A Tensor of shape [b, 2*d0, 2*d1, ..., 2*dn, ch]
"""
value = layers.transpose(value, [0,2,3,1])
sh = value.shape
dim = len(sh[1:-1])
out = (layers.reshape(value, [-1] + sh[-dim:]))
for i in range(dim, 0, -1):
out = layers.concat([out, layers.zeros_like(out)], i)
out_size = [-1] + [s * 2 for s in sh[1:-1]] + [sh[-1]]
out = layers.reshape(out, out_size)
out = layers.transpose(out, [0,3,1,2])
return out
class ReLU(dg.Layer):
def forward(self, x):
return layers.relu(x)
class SoftMax(dg.Layer):
def __init__(self, **kwargs):
super().__init__()
self.kwargs = kwargs
def forward(self, x):
return layers.softmax(x, **self.kwargs)
class BatchNorm(dg.BatchNorm):
def __init__(self, *args, **kwargs):
if 'affine' in kwargs:
affine = kwargs.pop('affine')
else:
affine = True
super().__init__(*args, **kwargs)
if not affine:
weight = (self.weight * 0 + 1).detach()
bias = (self.bias * 0).detach()
del self._parameters['bias']
del self._parameters['weight']
self.weight = weight
self.bias = bias
self.initialized = False
self.accumulating = False
self.accumulated_mean = self.create_parameter(shape=[args[0]], default_initializer=Constant(0.0))
self.accumulated_var = self.create_parameter(shape=[args[0]], default_initializer=Constant(0.0))
self.accumulated_counter = self.create_parameter(shape=[1], default_initializer=Constant(1e-12))
self.accumulated_mean.trainable = False
self.accumulated_var.trainable = False
self.accumulated_counter.trainable = False
self.affine = affine
def forward(self, inputs, *args, **kwargs):
if not self.initialized:
self.check_accumulation()
self.set_initialized(True)
if self.accumulating:
self.eval()
with dg.no_grad():
axes = [0] + ([] if len(inputs.shape) == 2 else list(range(2,len(inputs.shape))))
_mean = layers.reduce_mean(inputs, axes, keep_dim=True)
mean = layers.reduce_mean(inputs, axes, keep_dim=False)
var = layers.reduce_mean((inputs-_mean)**2, axes)
self.accumulated_mean.set_value(self.accumulated_mean + mean)
self.accumulated_var.set_value(self.accumulated_var + var)
self.accumulated_counter.set_value(self.accumulated_counter + 1)
_mean = self._mean*1.0
_variance = self.variance*1.0
self._mean.set_value(self.accumulated_mean / self.accumulated_counter)
self._variance.set_value(self.accumulated_var / self.accumulated_counter)
out = super().forward(inputs, *args, **kwargs)
self._mean.set_value(_mean)
self._variance.set_value(_variance)
return out
out = super().forward(inputs, *args, **kwargs)
return out
def check_accumulation(self):
if self.accumulated_counter.numpy().mean() > 1-1e-12:
self._mean.set_value(self.accumulated_mean / self.accumulated_counter)
self._variance.set_value(self.accumulated_var / self.accumulated_counter)
return True
return False
def clear_accumulated(self):
self.accumulated_mean.set_value(self.accumulated_mean*0.0)
self.accumulated_var.set_value(self.accumulated_var*0.0)
self.accumulated_counter.set_value(self.accumulated_counter*0.0+1e-2)
def set_accumulating(self, status=True):
if status:
self.accumulating = True
else:
self.accumulating = False
def set_initialized(self, status=False):
if not status:
self.initialized = False
else:
self.initialized = True
def train(self):
super().train()
if self.affine:
self.weight.stop_gradient = False
self.bias.stop_gradient = False
else:
self.weight.stop_gradient = True
self.bias.stop_gradient = True
self._use_global_stats = False
def eval(self):
super().eval()
self.weight.stop_gradient = True
self.bias.stop_gradient = True
self._use_global_stats = True
class SpectralNorm(dg.Layer):
def __init__(self, module, name='weight', power_iterations=2):
super().__init__()
self.module = module
self.name = name
self.power_iterations = power_iterations
if not self._made_params():
self._make_params()
def _update_u(self):
w = self.weight
u = self.weight_u
if len(w.shape) == 4:
_w = layers.transpose(w, [2,3,1,0])
_w = layers.reshape(_w, [-1, _w.shape[-1]])
_w_t_shape = _w.shape
else:
_w = layers.reshape(w, [-1, w.shape[-1]])
_w = layers.reshape(_w, [-1, _w.shape[-1]])
singular_value = "left" if _w.shape[0] <= _w.shape[1] else "right"
norm_dim = 0 if _w.shape[0] <= _w.shape[1] else 1
for _ in range(self.power_iterations):
if singular_value == "left":
v = layers.l2_normalize(layers.matmul(_w, u, transpose_x=True), axis=norm_dim)
u = layers.l2_normalize(layers.matmul(_w, v), axis=norm_dim)
else:
v = layers.l2_normalize(layers.matmul(u, _w, transpose_y=True), axis=norm_dim)
u = layers.l2_normalize(layers.matmul(v, _w), axis=norm_dim)
if singular_value == "left":
sigma = layers.matmul(layers.matmul(u, _w, transpose_x=True), v)
else:
sigma = layers.matmul(layers.matmul(v, _w), u, transpose_y=True)
_w = w / sigma
if w.shape[0] == 4:
_w = layers.transpose(layers.reshape(_w, _w_t_shape), [3,2,0,1])
else:
_w = layers.reshape(_w, w.shape)
setattr(self.module, self.name, _w)
self.weight_u.set_value(u)
def _made_params(self):
try:
self.weight
self.weight_u
return True
except AttributeError:
return False
def _make_params(self):
# paddle linear weight is similar with tf's, and conv weight is similar with pytorch's.
w = getattr(self.module, self.name)
if len(w.shape) == 4:
_w = layers.transpose(w, [2,3,1,0])
_w = layers.reshape(_w, [-1, _w.shape[-1]])
else:
_w = layers.reshape(w, [-1, w.shape[-1]])
singular_value = "left" if _w.shape[0] <= _w.shape[1] else "right"
norm_dim = 0 if _w.shape[0] <= _w.shape[1] else 1
u_shape = (_w.shape[0], 1) if singular_value == "left" else (1, _w.shape[-1])
u = self.create_parameter(shape=u_shape, default_initializer=Normal(0, 1))
u.stop_gradient = True
u.set_value(layers.l2_normalize(u, axis=norm_dim))
del self.module._parameters[self.name]
self.add_parameter("weight", w)
self.add_parameter("weight_u", u)
def forward(self, *args, **kwargs):
self._update_u()
return self.module.forward(*args, **kwargs)
class SelfAttention(dg.Layer):
def __init__(self, in_dim, activation=layers.relu):
super().__init__()
self.chanel_in = in_dim
self.activation = activation
self.theta = SpectralNorm(dg.Conv2D(in_dim, in_dim // 8, 1, bias_attr=False))
self.phi = SpectralNorm(dg.Conv2D(in_dim, in_dim // 8, 1, bias_attr=False))
self.pool = dg.Pool2D(2, 'max', 2)
self.g = SpectralNorm(dg.Conv2D(in_dim, in_dim // 2, 1, bias_attr=False))
self.o_conv = SpectralNorm(dg.Conv2D(in_dim // 2, in_dim, 1, bias_attr=False))
self.gamma = self.create_parameter([1,], default_initializer=Constant(0.0))
self.softmax = SoftMax(axis=-1)
def forward(self, x):
m_batchsize, C, width, height = x.shape
N = height * width
theta = self.theta(x)
gt['conv2d_theta/Conv2D:0'].append(layers.transpose(theta,[0,2,3,1]))
phi = self.phi(x)
gt['conv2d_phi/Conv2D:0'].append(layers.transpose(phi,[0,2,3,1]))
phi = self.pool(phi)
phi = layers.reshape(phi,(m_batchsize, -1, N // 4))
theta = layers.reshape(theta,(m_batchsize, -1, N))
theta = layers.transpose(theta,(0, 2, 1))
attention = self.softmax(layers.bmm(theta, phi))
g = self.g(x)
gt['conv2d_g/Conv2D:0'].append(layers.transpose(g,[0,2,3,1]))
g = layers.reshape(self.pool(g),(m_batchsize, -1, N // 4))
attn_g = layers.reshape(layers.bmm(g, layers.transpose(attention,(0, 2, 1))),(m_batchsize, -1, width, height))
out = self.o_conv(attn_g)
gt['conv2d_attn_g/Conv2D:0'].append(layers.transpose(out,[0,2,3,1]))
return self.gamma * out + x
class ConditionalBatchNorm(dg.Layer):
def __init__(self, num_features, num_classes, epsilon=1e-5, momentum=0.1):
super().__init__()
self.bn_in_cond = BatchNorm(num_features, affine=False, epsilon=epsilon, momentum=momentum)
self.gamma_embed = SpectralNorm(dg.Linear(num_classes, num_features, bias_attr=False))
self.beta_embed = SpectralNorm(dg.Linear(num_classes, num_features, bias_attr=False))
def forward(self, x, y):
out = self.bn_in_cond(x)
gamma = self.gamma_embed(y)
# gamma = gamma + 1
beta = self.beta_embed(y)
out = layers.reshape(gamma, (0, 0, 1, 1)) * out + layers.reshape(beta, (0, 0, 1, 1))
return out
class ResBlock(dg.Layer):
def __init__(
self,
in_channel,
out_channel,
kernel_size=[3, 3],
padding=1,
stride=1,
n_class=None,
conditional=True,
activation=layers.relu,
upsample=True,
downsample=False,
z_dim=128,
use_attention=False,
skip_proj=None
):
super().__init__()
if conditional:
self.cond_norm1 = ConditionalBatchNorm(in_channel, z_dim)
self.conv0 = SpectralNorm(
dg.Conv2D(in_channel, out_channel, kernel_size, stride, padding)
)
if conditional:
self.cond_norm2 = ConditionalBatchNorm(out_channel, z_dim)
self.conv1 = SpectralNorm(
dg.Conv2D(out_channel, out_channel, kernel_size, stride, padding)
)
self.skip_proj = False
if skip_proj is not True and (upsample or downsample):
self.conv_sc = SpectralNorm(dg.Conv2D(in_channel, out_channel, 1, 1, 0))
self.skip_proj = True
if use_attention:
self.attention = SelfAttention(out_channel)
self.upsample = upsample
self.downsample = downsample
self.activation = activation
self.conditional = conditional
self.use_attention = use_attention
def forward(self, input, condition=None):
out = input
if self.conditional:
out = self.cond_norm1(out, condition)
out = self.activation(out)
if self.upsample:
out = unpool(out) # out = layers.interpolate(out, scale=2)
out = self.conv0(out)
gt['conv1/add:0'].append(layers.transpose(out,[0,2,3,1]))
if self.conditional:
out = self.cond_norm2(out, condition)
out = self.activation(out)
out = self.conv1(out)
gt['conv2/add:0'].append(layers.transpose(out,[0,2,3,1]))
if self.downsample:
out = layers.pool2d(out, 2, pool_type='avg', pool_stride=2)
if self.skip_proj:
skip = input
if self.upsample:
skip = unpool(skip) # skip = layers.interpolate(skip, scale=2, resample='NEAREST')
skip = self.conv_sc(skip)
gt['conv_shortcut/add:0'].append(layers.transpose(skip,[0,2,3,1]))
if self.downsample:
skip = layers.pool2d(skip, 2, pool_type='avg', pool_stride=2)
out = out + skip
else:
skip = input
if self.use_attention:
out = self.attention(out)
return out
class Discriminator(dg.Layer):
def __init__(self, n_class=1000, chn=96, blocks_with_attention="B2", resolution=256):
super().__init__()
def DBlock(in_channel, out_channel, downsample=True, use_attention=False, skip_proj=None):
return ResBlock(in_channel, out_channel, conditional=False, upsample=False,
downsample=downsample, use_attention=use_attention, skip_proj=skip_proj)
self.chn = chn
self.colors = 3
self.resolution = resolution
self.blocks_with_attention = set(blocks_with_attention.split(","))
self.blocks_with_attention.discard('')
dblock = []
in_channels, out_channels = self.get_in_out_channels()
self.sa_ids = [int(s.split('B')[-1]) for s in self.blocks_with_attention]
for i, (nc_in, nc_out) in enumerate(zip(in_channels[:-1], out_channels[:-1])):
dblock.append(DBlock(nc_in, nc_out, downsample=True,
use_attention=(i+1) in self.sa_ids, skip_proj=nc_in==nc_out))
dblock.append(DBlock(in_channels[-1], out_channels[-1], downsample=False,
use_attention=len(out_channels) in self.sa_ids, skip_proj=in_channels[-1]==out_channels[-1]))
self.blocks = dg.LayerList(dblock)
self.final_fc = SpectralNorm(dg.Linear(16 * chn, 1))
self.embed_y = dg.Embedding(size=[n_class, 16 * chn], is_sparse=False, param_attr=Uniform(-0.1,0.1))
self.embed_y = SpectralNorm(self.embed_y)
def get_in_out_channels(self):
colors = self.colors
resolution = self.resolution
if resolution == 1024:
channel_multipliers = [1, 1, 1, 2, 4, 8, 8, 16, 16]
elif resolution == 512:
channel_multipliers = [1, 1, 2, 4, 8, 8, 16, 16]
elif resolution == 256:
channel_multipliers = [1, 2, 4, 8, 8, 16, 16]
elif resolution == 128:
channel_multipliers = [1, 2, 4, 8, 16, 16]
elif resolution == 64:
channel_multipliers = [2, 4, 8, 16, 16]
elif resolution == 32:
channel_multipliers = [2, 2, 2, 2]
else:
raise ValueError("Unsupported resolution: {}".format(resolution))
out_channels = [self.chn * c for c in channel_multipliers]
in_channels = [colors] + out_channels[:-1]
return in_channels, out_channels
def forward(self, input, class_id):
for key, item in gt.items():
item.clear()
gt['images'].append(layers.transpose(input, [0,2,3,1]))
out = input
for i, dblock in enumerate(self.blocks):
out = dblock(out)
out = layers.relu(out)
out = layers.reduce_sum(out, [2,3])
out_linear = self.final_fc(out)
gt['final_fc/add:0'].append(out_linear)
gt['labels'].append(class_id)
class_emb = self.embed_y(class_id)
gt['embedding_fc/MatMul_4:0'].append(class_emb)
prod = layers.reduce_sum((class_emb * out), 1, keep_dim=True)
gt['prediction'].append(layers.sigmoid(out_linear + prod))
return layers.sigmoid(out_linear + prod)
# + id="bsf2xTVBN-Py"
place = fluid.CUDAPlace(fluid.dygraph.ParallelEnv().dev_id)
fluid.enable_dygraph(place)
img = layers.uniform_random(shape=[4,3,256,256],min=0,max=1)
y = layers.randint(0,1000,shape=[4])
y_hot = layers.one_hot(layers.unsqueeze(y,[1]), depth=1000)
print(img.shape)
d_256 = Discriminator(n_class=1000, chn=96, blocks_with_attention="B2", resolution=256)
pred = d_256(img, y)
print(pred.shape)
# + id="C5YodLFu-y63"
import pickle
f = open('tf_discriminator.pkl', 'rb')
tf_weights = pickle.load(f)
f.close()
# def tf_filter(x):
# if 'accu/update_accus:0' in x[0]:
# return False
# return True
# tf_weights = filter(tf_filter, tf_weights)
def pd_filter(x):
# if 'weight_v' in x[0] or '._mean' in x[0] or '._variance' in x[0] \
# or'.bn_in_cond.weight' in x[0] or '.bn_in_cond.bias' in x[0]:
# return False
return True
_pd_params = list(filter(pd_filter, d_256.named_parameters()))
pd_params = []
for i, params in enumerate(_pd_params):
b_continue = False
for j in range(6):
if 'attention.gamma' in _pd_params[i-j][0]:
pd_params.append(_pd_params[i+1])
b_continue = True
if b_continue:
continue
if 'attention.gamma' in _pd_params[i-6][0]:
pd_params.append(_pd_params[i-6])
continue
# if 'output_layer.0.weight' in params[0]:
# pd_params.append(_pd_params[i+1])
# continue
# if 'output_layer.0.bias' in params[0]:
# pd_params.append(_pd_params[i-1])
# continue
pd_params.append(params)
# _pd_params = pd_params
# pd_params = [param for param in _pd_params]
# pd_params[-8] = _pd_params[-6]
# pd_params[-7] = _pd_params[-5]
# pd_params[-6] = _pd_params[-4]
# pd_params[-5] = _pd_params[-8]
# pd_params[-4] = _pd_params[-7]
for i, (tf_weight, pd_param) in enumerate(zip(tf_weights, pd_params)):
if len(pd_param[1].shape) == 4:
weight = tf_weight[1].transpose([3, 2, 0, 1])
else:
weight = tf_weight[1].reshape(pd_param[1].shape)
pd_param[1].set_value(weight)
print(tf_weight[0], tf_weight[1].shape, pd_param[0], pd_param[1].shape)
# + id="ONJui9skEy7b"
import pickle
f = open('tf_tensor_samples.pkl', 'rb')
tf_tensors = pickle.load(f)
f.close()
gtt = collections.OrderedDict()
for n in tensors_name_con:
gtt[n] = []
for key, tensor in tf_tensors.items():
for n in tensors_name_con:
if n in key:
gtt[n].append([key, tensor])
break
# for _layers in d_256.named_sublayers():
# class_name = _layers[1].__class__.__name__
# if 'BatchNorm' == class_name:
# _layers[1].set_initialized(False)
d_256.eval()
x = dg.to_variable(tf_tensors['images'].astype('float32').transpose([0,3,1,2])) # layers.random_uniform(shape=[4,3,256,256],min=0,max=1)
y = dg.to_variable(tf_tensors['labels'].astype('int64')) # layers.randint(0,1000,shape=[2])
img = d_256(x, y)
# + id="u1GEMHaC-zsi"
for (_, item1), (key2, item2) in zip(gtt.items(), gt.items()):
for (key1, t1), t2 in zip(item1, item2):
print(key1, key2, t1.shape, t2.shape, (np.abs(t1 - t2.numpy())).mean())
# + id="fc1hfTDF_Fdw"
save_path = './anime-biggan-256px-run39-607250.discriminator'
dg.save_dygraph(d_256.state_dict(), save_path)
# !cp ./anime-biggan-256px-run39-607250.discriminator.pdparams ./drive/My\ Drive/
| colab/paddle_anime_biggan_for_discriminator_converter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %run "..\Startup_py3.py"
sys.path.append(r"C:\Users\puzheng\Documents")
import ImageAnalysis3 as ia
# %matplotlib notebook
from ImageAnalysis3 import *
print(os.getpid())
# -
# # 0. required packages for h5py
import h5py
from ImageAnalysis3.classes import _allowed_kwds
import ast
# # 1. Create field-of-view class
# +
reload(ia)
reload(classes)
reload(classes.batch_functions)
reload(classes.field_of_view)
reload(io_tools.load)
reload(visual_tools)
reload(ia.correction_tools)
reload(ia.correction_tools.alignment)
reload(ia.spot_tools.matching)
reload(ia.segmentation_tools.chromosome)
reload(ia.spot_tools.fitting)
fov_param = {'data_folder':r'\\10.245.74.158\Chromatin_NAS_6\20200707-IMR90_SI16-5kb',
'save_folder':r'W:\Pu_Temp\20200707_IMR90_5kb_SI13',
'experiment_type': 'DNA',
'num_threads': 12,
'correction_folder':r'\\10.245.74.158\Chromatin_NAS_0\Corrections\20200803-Corrections_3color',
'shared_parameters':{
'single_im_size':[30,2048,2048],
'corr_channels':['750','647','561'],
'num_empty_frames': 0,
'corr_hot_pixel':True,
'corr_Z_shift':False,
'min_num_seeds':200,
'max_num_seeds': 400,
'spot_seeding_th':200,
'normalize_intensity_local':True,
'normalize_intensity_background':False,
},
}
fov = classes.field_of_view.Field_of_View(fov_param, _fov_id=3,
_color_info_kwargs={
'_color_filename':'Color_Usage',
},
_prioritize_saved_attrs=False,
)
#fov._load_correction_profiles()
# -
# # 2. Process image into candidate spots
# +
reload(io_tools.load)
reload(spot_tools.fitting)
reload(correction_tools.chromatic)
reload(classes.batch_functions)
# process image into spots
id_list, spot_list = fov._process_image_to_spots('unique',
#_sel_ids=np.arange(41,47),
_load_common_reference=True,
_load_with_multiple=False,
_save_images=True,
_warp_images=False,
_overwrite_drift=False,
_overwrite_image=False,
_overwrite_spot=False,
_verbose=True)
# -
# # 3. Find chromosomes
# ## 3.1 load chromosome image
chrom_im = fov._load_chromosome_image(_type='forward', _overwrite=False)
visual_tools.imshow_mark_3d_v2([chrom_im])
# ## 3.2 find candidate chromosomes
chrom_coords = fov._find_candidate_chromosomes_by_segmentation(_binary_per_th=99.5,
_overwrite=True)
# ## 3.3 select among candidate chromosomes
# +
fov._load_from_file('unique')
chrom_coords = fov._select_chromosome_by_candidate_spots(_good_chr_loss_th=0.3,
_cand_spot_intensity_th=1,
_save=True,
_overwrite=True)
# -
# ### visualize chromosomes selections
# +
# %matplotlib notebook
# %matplotlib notebook
## visualize
coord_dict = {'coords':[np.flipud(_coord) for _coord in fov.chrom_coords],
'class_ids':list(np.zeros(len(fov.chrom_coords),dtype=np.int)),
}
visual_tools.imshow_mark_3d_v2([fov.chrom_im],
given_dic=coord_dict,
save_file=None,
)
# -
# ## select spots based on chromosomes
fov._load_from_file('unique')
# +
intensity_th = 0.25
from ImageAnalysis3.spot_tools.picking import assign_spots_to_chromosomes
kept_spots_list = []
for _spots in fov.unique_spots_list:
kept_spots_list.append(_spots[_spots[:,0] > intensity_th])
# finalize candidate spots
cand_chr_spots_list = [[] for _ct in fov.chrom_coords]
for _spots in kept_spots_list:
_cands_list = assign_spots_to_chromosomes(_spots, fov.chrom_coords)
for _i, _cands in enumerate(_cands_list):
cand_chr_spots_list[_i].append(_cands)
print(f"kept chromosomes: {len(fov.chrom_coords)}")
# +
reload(spot_tools.picking)
from ImageAnalysis3.spot_tools.picking import convert_spots_to_hzxys
dna_cand_hzxys_list = [convert_spots_to_hzxys(_spots, fov.shared_parameters['distance_zxy'])
for _spots in cand_chr_spots_list]
dna_reg_ids = fov.unique_ids
# -
# select_hzxys close to the chromosome center
dist_th = 3000 # upper limit is 5000nm
sel_dna_cand_hzxys_list = []
for _cand_hzxys, _chrom_coord in zip(dna_cand_hzxys_list, fov.chrom_coords):
_sel_cands_list = []
for _cands in _cand_hzxys:
if len(_cands) == 0:
_sel_cands_list.append([])
else:
_dists = np.linalg.norm(_cands[:,1:4] - _chrom_coord*np.array([200,108,108]), axis=1)
_sel_cands_list.append(_cands[_dists < dist_th])
# append
sel_dna_cand_hzxys_list.append(_sel_cands_list)
# ### EM pick spots
# +
reload(ia.spot_tools.picking)
# load functions
from ImageAnalysis3.spot_tools.picking import Pick_spots_by_intensity, EM_pick_scores_in_population, generate_reference_from_population,evaluate_differences
# %matplotlib inline
niter= 10
nkeep = len(sel_dna_cand_hzxys_list)
num_threads = 12
# initialize
init_dna_hzxys = Pick_spots_by_intensity(sel_dna_cand_hzxys_list[:nkeep])
# set save list
sel_dna_hzxys_list, sel_dna_scores_list, all_dna_scores_list = [init_dna_hzxys], [], []
for _iter in range(niter):
print(f"- iter:{_iter}")
# generate reference
ref_ct_dists, ref_local_dists, ref_ints = generate_reference_from_population(
sel_dna_hzxys_list[-1], dna_reg_ids,
sel_dna_hzxys_list[-1][:nkeep], dna_reg_ids,
num_threads=num_threads,
collapse_regions=True,
)
plt.figure(figsize=(4,2))
plt.hist(np.ravel(ref_ints), bins=np.arange(0,20,0.5))
plt.figure(figsize=(4,2))
plt.hist(np.ravel(ref_ct_dists), bins=np.arange(0,5000,100))
plt.figure(figsize=(4,2))
plt.hist(np.ravel(ref_local_dists), bins=np.arange(0,5000,100))
plt.show()
# scoring
sel_hzxys, sel_scores, all_scores = EM_pick_scores_in_population(
sel_dna_cand_hzxys_list[:nkeep], dna_reg_ids, sel_dna_hzxys_list[-1],
ref_ct_dists, ref_local_dists, ref_ints,
sel_dna_hzxys_list[-1], dna_reg_ids, num_threads=num_threads,
)
update_rate = evaluate_differences(sel_hzxys, sel_dna_hzxys_list[-1])
print(f"-- region kept: {update_rate:.4f}")
sel_dna_hzxys_list.append(sel_hzxys)
sel_dna_scores_list.append(sel_scores)
all_dna_scores_list.append(all_scores)
if update_rate > 0.995:
break
# -
plt.figure()
plt.hist(np.log(sel_dna_scores_list[-1][5]), 40)
plt.show()
# +
sel_iter = -1
final_dna_hzxys_list = []
distmap_list = []
score_th = np.exp(-8)
bad_spot_percentage = 0.6
for _hzxys, _scores in zip(sel_dna_hzxys_list[sel_iter], sel_dna_scores_list[sel_iter]):
_kept_hzxys = np.array(_hzxys).copy()
_kept_hzxys[_scores < score_th] = np.nan
if np.mean(np.isnan(_kept_hzxys).sum(1)>0)<bad_spot_percentage:
final_dna_hzxys_list.append(_kept_hzxys)
distmap_list.append(squareform(pdist(_kept_hzxys[:,1:4])))
from scipy.spatial.distance import pdist, squareform
distmap_list = np.array(distmap_list)
median_distmap = np.nanmedian(distmap_list, axis=0)
# -
loss_rates = np.mean(np.sum(np.isnan(final_dna_hzxys_list), axis=2)>0, axis=0)
print(np.mean(loss_rates))
fig, ax = plt.subplots(figsize=(4,2),dpi=200)
ax.plot(loss_rates, '.-')
ax.set_xticks(np.arange(0,len(fov.unique_ids),20))
plt.show()
kept_inds = np.where(loss_rates<0.3)[0]
# +
cy7_zs = np.array(final_dna_hzxys_list)[:,fdf8:f53e:61e4::18,1]
cy5_zs = np.array(final_dna_hzxys_list)[:,fd00:c2b6:b24b:be67:2827:688d:e6a1:6a3b,1]
cy3_zs = np.array(final_dna_hzxys_list)[:,fc00:e968:6179::de52:7100,1]
plt.figure(dpi=100)
plt.hist(np.ravel(cy7_zs), bins=np.arange(0,6000, 200),
alpha=0.5, color='r', label='750')
plt.hist(np.ravel(cy5_zs), bins=np.arange(0,6000, 200),
alpha=0.5, color='y', label='647')
plt.hist(np.ravel(cy3_zs), bins=np.arange(0,6000, 200),
alpha=0.5, color='g', label='561')
plt.legend()
plt.show()
# +
cy7_ints = np.array(final_dna_hzxys_list)[:,0::3,0]
cy5_ints = np.array(final_dna_hzxys_list)[:,1::3,0]
cy3_ints = np.array(final_dna_hzxys_list)[:,fc00:e968:6179::de52:7100,0]
plt.figure(dpi=100)
plt.hist(np.ravel(cy7_ints), bins=np.arange(0,20, 0.5),
alpha=0.5, color='r', label='750')
plt.hist(np.ravel(cy5_ints), bins=np.arange(0,20, 0.5),
alpha=0.5, color='y', label='647')
plt.hist(np.ravel(cy3_ints), bins=np.arange(0,20, 0.5),
alpha=0.5, color='g', label='561')
plt.legend()
plt.show()
# -
fig, ax = plt.subplots(figsize=(4,3),dpi=200)
ax = ia.figure_tools.distmap.plot_distance_map(#median_distmap,
median_distmap[2::3,2::3],
color_limits=[0,400],
ax=ax,
ticks=np.arange(0,len(fov.unique_ids),20),
figure_dpi=200)
ax.set_title(f"SI13-5kb IMR90, n={len(distmap_list)}", fontsize=7.5)
plt.gcf().subplots_adjust(bottom=0.1)
plt.show()
fig, ax = plt.subplots(figsize=(4,3),dpi=200)
ax = ia.figure_tools.distmap.plot_distance_map(#median_distmap,
median_distmap[kept_inds][:,kept_inds],
color_limits=[0,350],
ax=ax,
ticks=np.arange(0,len(fov.unique_ids),20),
figure_dpi=200)
ax.set_title(f"SI13-5kb IMR90, n={len(distmap_list)}", fontsize=7.5)
plt.gcf().subplots_adjust(bottom=0.1)
plt.show()
# ######
# ## visualize single example
# +
# %matplotlib inline
chrom_id = 0
valid_inds = np.where(np.isnan(final_dna_hzxys_list[chrom_id]).sum(1) == 0)[0]
fig, ax = plt.subplots(figsize=(4,3),dpi=200)
ax = ia.figure_tools.distmap.plot_distance_map(#distmap_list[chrom_id],
distmap_list[chrom_id][valid_inds][:,valid_inds],
color_limits=[0,400],
ax=ax,
ticks=np.arange(0,150,20),
figure_dpi=200)
ax.set_title(f"proB bone marrow IgH+/+ chrom: {chrom_id}", fontsize=7.5)
plt.gcf().subplots_adjust(bottom=0.1)
plt.show()
reload(figure_tools.image)
ax3d = figure_tools.image.chromosome_structure_3d_rendering(#final_dna_hzxys_list[chrom_id][:,1:],
final_dna_hzxys_list[chrom_id][valid_inds, 1:],
marker_edge_line_width=0,
reference_bar_length=100, image_radius=200,
line_width=0.5, figure_dpi=300, depthshade=True)
plt.show()
# -
| 5kb_DNA_analysis/20200707_single_fov_SI13-5kb_IMR90_3color_Crick.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Principles of Automatic Control
#
# > This is a course on principles of automatic control using classical control theory. The course uses Python to implement some of the concepts described in class.
# <table style='margin: 0 auto' rules=none>
# <tr>
# <td> <img src="img/0.philos-and-sensors.png" alt="0.robot" style="height: 300px;"/> </td>
# <td> <img src="img/0.iNeptune-pMarineViewer-sim3.png" alt="0.iNeptune-pMarineViewer-sim3" style="height: 300px;"/> </td>
# </tr>
# </table>
#
# <table style='margin: 0 auto' rules=none>
# <tr>
# <td> <img src="img/0.pathplan_philos_07-21_1312_combined.png" alt="0.pathplan_philos_07-21_1312_combined" style="width: 750px;"/> </td>
# </tr>
# </table>
# ## Install
# The notebooks run with python 3.9 and use the following python libraries:
# - sympy
# - python control
# - numpy
# - pandas
# - matplotlib
# - opencv-python
#
# Notebook `01_Getting_started_with_Python_and_Jupyter_Notebook.ipynb` provides a short introduction on how to set up an anaconda environment to get you started.
# ## How to use
# Each notebook is thought to be independent from every other, so it is possible to run them in any order you prefer.
# ## Acknowledgements and references
# - Images above are from the paper _<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Coordinating Multiple Autonomies to Improve Mission Performance, OCEANS 2021 MTS/IEEE, October, 2021_
#
# - _Some of the images and content used in the notebooks have been based on resources available from [engineeringmedia.com](https://engineeringmedia.com/map-of-control). The website and the youtube videos are a fantastic resource on control systems._
#
# - The pendulum example is inspired by [Control tutorials for Matlab and Simulink](hhttps://ctms.engin.umich.edu/CTMS/index.php?aux=Activities_Pendulum)
#
# - Relevant textbooks used to prepare these notebooks are reported in `00_Syllabus.ipynb`.
# ## Additional resources
#
# - [Control systems academy](http://www.controlsystemsacademy.com/)
# - [Process Dynamics and Control in Python](https://apmonitor.com/pdc/index.php)
# - [<NAME> and <NAME>, Feedback Systems: An Introduction for Scientists and Engineers](http://www.cds.caltech.edu/~murray/amwiki/index.php/Main_Page)
# - [Lecture series on Control Engineering by Prof. <NAME>](https://www.youtube.com/playlist?list=PLghJObT_RyfLmKRT86TquJhG6QuiHZ6Pi)
# - [Designing Lead and Lag Compensators in Matlab and Simulink](https://ctms.engin.umich.edu/CTMS/index.php?aux=Extras_Leadlag)
# --------------------
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Science in Medicine using Python
#
# ### Author: Dr <NAME>
# Today's code has been inspired and modified from these books' code examples
# <img src="./images/cover.jpg" alt="Drawing" style="width: 300px;"/>
# <img src="./images/Geron_book_cover.png" alt="Drawing" style="width: 300px;"/>
# <img src="./images/raschka_book.png" alt="Drawing" style="width: 300px;"/>
# We will use these data science packages from the Python data stack
# + hide_input=false
import sys
print("Python version:", sys.version)
import pandas as pd
print("pandas version:", pd.__version__)
import numpy as np
print("NumPy version:", np.__version__)
import scipy as sp
print("SciPy version:", sp.__version__)
import matplotlib
print("matplotlib version:", matplotlib.__version__)
import matplotlib.pyplot as plt
import sklearn
print("scikit-learn version:", sklearn.__version__)
# + [markdown] hide_input=false
# ## Introduction
#
# ### What is machine learning
#
# - Learning from data
#
# ### Why Machine Learning?
#
# - Relationship between data are too complex to write explitic rule based algorithm
#
# - Relation between data is not obvious to humans
#
#
# #### Problems Machine Learning Can Solve
#
# ##### Low level
#
# -- Supervised learning
#
# - Classification
# - Regression
#
# -- Unsupervised learning
#
# - Clustering
# - Dimensionality reduction
# - Outlier removal
#
# -- Reinforment learning
#
# - Solving complex tasks
#
# ##### High level
#
# - image recognition / classification / object identification
# - text recognition / translation
# - generating images / text / art
# - playing computer games, chess, GO etc.
#
#
# #### Problems Machine Learning Can Solve
#
# - If the information is not in the data
#
#
#
# #### Knowing Your Task and Knowing Your Data
#
# - Domain knowledge is extremely important
# - Feature selection engineering is also very important ("garbage in garbage out")
# -
# ### Essential Libraries and Tools
# #### Jupyter Notebook
# Data used for machine learning are typically numeric data with two or more dimensions.
# #### pandas
# + uuid="ad1b06f7-e03a-4938-9d59-5bb40e848553"
import pandas as pd
# create a simple dataset of people
data = {'Name': ["John", "Anna", "Peter", "Linda"],
'Location' : ["New York", "Paris", "Berlin", "London"],
'Age' : [24, 13, 53, 33]}
data_pandas = pd.DataFrame(data)
data_pandas
# -
# Select all rows that have an age column greater than 30
data_pandas[data_pandas['Age'] > 30]
# #### NumPy
# +
# Two dimensional data can be represented as list of lists but they are inefficient.
x = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12],]
x
# -
try:
x + 1
except:
print('This will throw and error')
x * 2
# + uuid="e2b8e959-75f0-4fa9-a878-5ab024f89223"
x = np.array(x)
x
# -
x.shape
x.ndim
x + 1
x * 2
# #### SciPy
# Create a 2D NumPy array with a diagonal of ones, and zeros everywhere else
eye = np.eye(4)
eye
# Convert the NumPy array to a SciPy sparse matrix in CSR format
# Only the nonzero entries are stored
sparse_matrix = sp.sparse.csr_matrix(eye)
print(sparse_matrix)
# #### matplotlib
# + uuid="30faf136-0ef7-4762-bd82-3795eea323d0"
# Generate a sequence of numbers from -10 to 10 with 100 steps in between
x = np.linspace(-10, 10, 100)
x
# -
# Create a second array using sine
y = np.sin(x)
y
# The plot function makes a line chart of one array against another
plt.plot(x, y, marker="o")
# ### The 101 of machine learning: Classifying Iris Species
# <img src="./images/iris_petal_sepal.png" alt="Drawing" style="width: 300px;"/>
# #### Meet the Data
# +
# Many famous datasets are directly avaialable from scikit learn
from sklearn.datasets import load_iris
iris_dataset = load_iris()
# -
iris_dataset.keys()
print(iris_dataset['DESCR'])
type(iris_dataset['data'])
iris_dataset['data'].shape
iris_dataset['data'].ndim
iris_dataset['data'][:5]
iris_dataset['feature_names']
type(iris_dataset['target'])
iris_dataset['target'].shape
iris_dataset['target']
iris_dataset['target_names']
# #### Measuring Success: Training and Testing Data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
iris_dataset['data'], iris_dataset['target'], random_state=0)
# +
# train_test_split?
# -
X_train.shape
y_train.shape
X_test.shape
y_test.shape
# #### First Things First: Look at Your Data
# Create dataframe from data in X_train
# Label the columns using the strings in iris_dataset.feature_names
iris_dataframe = pd.DataFrame(X_train, columns=iris_dataset['feature_names'])
iris_dataframe.head()
# Create a scatter matrix from the dataframe, color by y_train
pd.plotting.scatter_matrix(iris_dataframe, c=y_train, figsize=(12,12),
marker='o', hist_kwds={'bins': 20}, s=60, alpha=0.8,);
# #### Building Your First Model: k-Nearest Neighbors
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn
knn.fit(X_train, y_train)
# #### Making Predictions
X_new = np.array([[5, 2.9, 1, 0.2]])
X_new.shape
prediction = knn.predict(X_new)
prediction
iris_dataset['target_names'][prediction]
# +
# knn?
# -
# #### Evaluating the Model
y_pred = knn.predict(X_test)
y_pred
y_pred == y_test
np.mean(y_pred == y_test)
knn.score(X_test, y_test)
# ### Summary and Outlook
# +
X_train, X_test, y_train, y_test = train_test_split(
iris_dataset['data'], iris_dataset['target'], random_state=0)
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
knn.score(X_test, y_test)
# -
# ### Typical regression analysis
from sklearn.datasets import load_boston
boston = load_boston()
boston.keys()
print(boston.DESCR)
boston['feature_names']
type(boston['data'])
boston['data'].shape
boston['data'].ndim
boston['data'][:5]
type(boston['target'])
boston['target'].shape
boston['target'][:50]
# +
from sklearn.linear_model import LinearRegression
X_train, X_test, y_train, y_test = train_test_split(boston['data'], boston['target'], random_state=42)
lr = LinearRegression().fit(X_train, y_train)
lr
# +
# lr?
# -
lr.coef_ , lr.intercept_
lr.score(X_train, y_train)
lr.score(X_test, y_test)
lr.score(X_train, y_train)
| Lecture_10.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
#Lil_data
Lil_data.keys()
Lil_data['artists'].keys()
Lil_artists = Lil_data['artists']['items']
# ## 1.Searching and Printing a List of 50 'Lil' Musicians
#
# With "<NAME>" and "<NAME>" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score.
# +
#With "<NAME>" and "<NAME>" there are a lot of "Lil" musicians. Do a search and print a list of 50
#that are playable in the USA (or the country of your choice), along with their popularity score.
count =0
for artist in Lil_artists:
count += 1
print(count,".", artist['name'],"has the popularity of", artist['popularity'])
# -
# ## 2 Genres Most Represented in the Search Results
#
# What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
# +
# What genres are most represented in the search results? Edit your previous printout to also display a list of their genres
#in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
#Tip: "how to join a list Python" might be a helpful search
# if len(artist['genres']) == 0 )
# print ("no genres")
# else:
# genres = ", ".join(artist['genres'])
genre_list = []
genre_loop = Lil_data['artists']['items']
for item in genre_loop:
#print(item['genres'])
item_gen = item['genres']
for i in item_gen:
genre_list.append(i)
#print(sorted(genre_list))
#COUNTING the most
genre_counter = {}
for word in genre_list:
if word in genre_counter:
genre_counter[word] += 1
else:
genre_counter[word] = 1
popular_genre = sorted(genre_counter, key = genre_counter.get, reverse = True)
top_genre = popular_genre[:1]
print("The genre most represented is", top_genre)
#COUNTING the most with count to confirm
from collections import Counter
count = Counter(genre_list)
most_count = count.most_common(1)
print("The genre most represented and the count are", most_count)
print("-----------------------------------------------------")
for artist in Lil_artists:
num_genres = 'no genres listed'
if len(artist['genres']) > 0:
num_genres= str.join(',', (artist['genres']))
print(artist['name'],"has the popularity of", artist['popularity'], ", and has", num_genres, "under genres")
# -
# ## More Spotify - LIL' GRAPHICS ##
#
# Use Excel, Illustrator or something like https://infogr.am/ to make a graphic about the Lil's, or the Lil's vs. the Biggies.
# Just a simple bar graph of their various popularities sounds good to me.
# Link to the Line Graph of Lil's Popularity chart
#
# [Lil Popularity Graph](https://infogr.am/b4473739-b764-4a24-896f-c7878796d826)
#
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
#Lil_data
# ## The Second Highest Popular Artist
#
# Use a for loop to determine who <NAME> has the highest popularity rating. Is it the same artist who has the largest number of followers?
# +
#Use a for loop to determine who <NAME> has the highest popularity rating.
#Is it the same artist who has the largest number of followers?
name_highest = ""
name_follow =""
second_high_pop = 0
highest_pop = 0
high_follow = 0
for artist in Lil_artists:
if (highest_pop < artist['popularity']) & (artist['name'] != "<NAME>"):
#second_high_pop = highest_pop
#name_second = artist['name']
highest_pop = artist['popularity']
name_highest = artist['name']
if (high_follow < artist['followers']['total']):
high_follow = artist ['followers']['total']
name_follow = artist['name']
#print(artist['followers']['total'])
print(name_highest, "has the second highest popularity, which is", highest_pop)
print(name_follow, "has the highest number of followers:", high_follow)
#print("the second highest popularity is", second_high_pop)
# -
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
#Lil_data
# ## 4. List of Lil's Popular Than Lil' Kim
# +
Lil_artists = Lil_data['artists']['items']
#Print a list of Lil's that are more popular than <NAME>.
count = 0
for artist in Lil_artists:
if artist['popularity'] > 62:
count+=1
print(count, artist['name'],"has the popularity of", artist['popularity'])
#else:
#print(artist['name'], "is less popular with a score of", artist['popularity'])
# -
# ## 5.Two Favorite Lils and Their Top Tracks
response = requests.get("https://api.spotify.com/v1/search?query=Lil&type=artist&limit=2&country=US")
data = response.json()
for artist in Lil_artists:
#print(artist['name'],artist['id'])
if artist['name'] == "<NAME>":
wayne = artist['id']
print(artist['name'], "id is",wayne)
if artist['name'] == "<NAME>":
yachty = artist['id']
print(artist['name'], "id is", yachty)
# +
#Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
#Tip: You're going to be making two separate requests, be sure you DO NOT save them into the same variable.
response = requests.get("https://api.spotify.com/v1/artists/" +wayne+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
print("<NAME>ne's top tracks are: ")
for track in tracks:
print("-", track['name'])
print("-----------------------------------------------")
response = requests.get("https://api.spotify.com/v1/artists/" +yachty+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
print("Lil Yachty 's top tracks are: ")
for track in tracks:
print("-", track['name'])
# -
# ## 6. Average Popularity of My Fav Musicians (Above) for Their explicit songs vs. their non-explicit songs
# Will the world explode if a musicians swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?
response = requests.get("https://api.spotify.com/v1/artists/" +yachty+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
#print(tracks)
#for track in tracks:
#print(track.keys())
# +
#Get an average popularity for their explicit songs vs. their non-explicit songs.
#How many minutes of explicit songs do they have? Non-explicit?
# How explicit is Lils?
response = requests.get("https://api.spotify.com/v1/artists/" +yachty+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
# counter for tracks for explicit and clean
track_count = 0
clean_count = 0
#counter to find avg popularity
popular_exp = 0
popular_clean = 0
#counter for avg time in minutes are below:
timer = 0
data_timer = 0
timer_clean = 0
for track in tracks:
print("The track,", track['name'],", with the id",track['id'], "is", track['explicit'],"for explicit content, and has the popularity of", track['popularity'])
track_id = track['id']
time_ms = track['duration_ms']
if True:
track_count = track_count + 1
popular_exp = popular_exp + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_track = response.json()
print("and has the duration of", data_track['duration_ms'], "milli seconds.")
timer = timer + time_ms
timer_minutes = ((timer / (1000*60)) % 60)
if not track['explicit']:
clean_count = clean_count + 1
popular_clean = popular_clean + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_tracks = response.json()
timer_clean = timer_clean + time_ms
timer_minutes_clean = ((data_timer / (1000*60)) % 60)
print(", and has the duration of", timer_minutes_clean, "minutes")
print("------------------------------------")
avg_pop = popular_exp / track_count
print("I have found", track_count, "tracks, and has the average popularity of", avg_pop, "and has the average duration of", timer_minutes,"minutes and", clean_count, "are clean")
#print("Overall, I discovered", track_count, "tracks")
#print("And", clean_count, "were non-explicit")
#print("Which means", , " percent were clean for Lil Wayne")
# +
#Get an average popularity for their explicit songs vs. their non-explicit songs.
#How many minutes of explicit songs do they have? Non-explicit?
# How explicit is Lils?
response = requests.get("https://api.spotify.com/v1/artists/" +wayne+ "/top-tracks?country=US")
data = response.json()
# counter for tracks for explicit and clean
track_count = 0
clean_count = 0
#counter to find avg popularity
popular_exp = 0
popular_clean = 0
#counter for avg time in minutes are below:
timer = 0
#data_timer = 0
timer_clean = 0
for track in tracks:
print("The track,", track['name'],", with the id",track['id'], "is", track['explicit'],"for explicit content, and has the popularity of", track['popularity'])
track_id = track['id']
time_ms = data_track['duration_ms']
if True:
track_count = track_count + 1
popular_exp = popular_exp + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_track = response.json()
print("and has the duration of", data_track['duration_ms'], "milli seconds.")
timer = timer + time_ms
timer_minutes = ((timer / (1000*60)) % 60)
if not track['explicit']:
clean_count = clean_count + 1
popular_clean = popular_clean + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_tracks = response.json()
timer_clean = timer_clean + time_ms
timer_minutes_clean = ((data_timer / (1000*60)) % 60)
print(", and has the duration of", timer_minutes_clean, "minutes")
print("------------------------------------")
avg_pop = popular_exp / track_count
print("I have found", track_count, "tracks, and has the average popularity of", avg_pop, "and has the average duration of", timer_minutes,"minutes and", clean_count, "are clean")
#print("Overall, I discovered", track_count, "tracks")
#print("And", clean_count, "were non-explicit")
#print("Which means", , " percent were clean for Lil Wayne")
# -
# ## 7a. Number of Biggies and Lils
#
# Since we're talking about Lils, what about Biggies? How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
# +
#How many total "Biggie" artists are there? How many total "Lil"s?
#If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
biggie_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&country=US')
biggie_data = biggie_response.json()
biggie_artists = biggie_data['artists']['total']
print("Total number of Biggie artists are", biggie_artists)
lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&country=US')
lil_data = lil_response.json()
lil_artists = lil_data['artists']['total']
print("Total number of Lil artists are", lil_artists)
# -
# ## 7b. Time to Download All Information on Lil and Biggies
# +
#If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
limit_download = 50
biggie_artists = biggie_data['artists']['total']
Lil_artist = Lil_data['artists']['total']
#1n 5 sec = 50
#in 1 sec = 50 / 5 req = 10 no, for 1 no, 1/10 sec
# for 4501 = 4501/10 sec
# for 49 49/ 10 sec
big_count = biggie_artists/10
lil_count = Lil_artist / 10
print("It would take", big_count, "seconds for Biggies, where as it would take", lil_count,"seconds for Lils" )
# -
# ## 8. Highest Average Popular Lils and Biggies Out of The Top 50
# +
#Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average?
biggie_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50&country=US')
biggie_data = biggie_response.json()
biggie_artists = biggie_data['artists']['items']
big_count_pop = 0
for artist in biggie_artists:
#count_pop = artist['popularity']
big_count_pop = big_count_pop + artist['popularity']
print("Biggie has a total popularity of ", big_count_pop)
big_pop = big_count_pop / 49
print("Biggie is on an average", big_pop,"popular")
#Lil
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
Lil_artists = Lil_data['artists']['items']
lil_count_pop = 0
for artist in Lil_artists:
count_pop_lil = artist['popularity']
lil_count_pop = lil_count_pop + count_pop_lil
lil_pop = lil_count_pop / 50
print("Lil is on an average", lil_pop,"popular")
# -
| homework05/Homework05_Spotify_radhika.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# # Continuous Linear Programming
# ## Definition
# Continuous Linear Programming (CLP) spans a wide set of OR problems that can be modeled based on the following assumptions:
#
# - Unknown variables are continuous: The result variable and the decision variables can take any real value. Also, all the different coefficients can take any real value.
# - Objectives and constraints are linear expressions: The relationships between the variables are expressed as linear expressions.
#
# Clearly, CLP establishes a rather simplified model of real world problems. Many assumptions are needed to represent real problems as a set of linear expressions. Yet, CLP is a very powerful tool in OR and it can provide valuable insights to decision making in different fields. The main reason is that, once these assumptions are taken, linear algebra provides valuable tools to find an optimal solution and analyse it.
#
# The main types of problems that will be covered in the CLP exercises belong to two problem types:
#
# - Blending problems: In blending problems, the objective is to find the optimal combination of components in a set of products that either minimise costs or maximise profits
# - Production mix: Production mix problems help us determine the optimal allocation of resources in the production of good or services to either minimise costs or maximise profits
#
# ## Set up
# As mentioned above, in CLP the result unknown variable, noted as $z$, can take any real value:
#
# $z \in \mathbb{R}$
#
# The value of this unknown $z$ is a function of another set of unknowns, which are called decision variables and that represent our decisions. These decision variables are noted as $x_j$, where $j$ is an integer (sub)index that goes from 1 to $n$:
#
# $z = \operatorname{f}(x_1, x_2, ..., x_n)$
#
# $x_1, x_2, ..., x_n \in \mathbb{R}$
#
# Therefore, we will have $n$ different decision variables, that are going to be continuous. And, in extension, z is also going to be continuous. So now we know from where does the C in CLP comes from. Next, in CLP, the function f is a linear function. What we want is to maximise or minimise z, which as we said is a linear function and therefore can be expressed as the sum of the product of the decision variables multiplied by a set of coefficients, which are noted as $c_1$ to $c_n$.
#
# $\max or \min z = \operatorname{f}(x_1, x_2, ..., x_n) = c_1·x_1+c_2·x_2+...+c_n·x_n$
#
# And our optimisation function is subject to a set of constraints, which are also linear functions of the decision variables, that is, the sum product of the decision variables times a set of coefficients noted as $a_{ij}$ must be less than, greater than, or equal to another coefficient noted as $b_j$.
#
# $s.t. \\
# a_{11}·x_1+a_{12}·x_2+...+a_{1n}·x_n \leq b_1 \\
# a_{21}·x_1+a_{22}·x_2+...+a_{2n}·x_n \leq b_2 \\
# ... \\
# a_{m1}·x_1+a_{m2}·x_2+...+a_{mn}·x_n \leq b_m$
#
#
# Note that coefficients a have two sub-indexes, the second is equal to the index of the corresponding decision variable, and the first is equal to the sub-index of b.
# These new index $i$ ranges from 1 to $m$ and therefore, our optimisation function is subject to a set of $m$ constraints.
# We will refer to the expressions to the left of the inequality as the Left Hand Side (LHS), and to the other side of the relationship as the Right Hand Side (RHS).
#
# Now, note that we can use the sum operator to represent the objective function in a more compressed form, that is, the LHS is equal to the sum of the product of cj times xj for j equal to 1 up to n. Note that this is can also be represented as the dot product of two vectors, the vector $x$ with the decision variables and the vector $c$ of the coefficients.
#
# $\max or \min z = c_1·x_1+c_2·x_2+...+c_n·x_n = \sum_{j=1}^{n}{c_j·x_j}=c·x$
#
# $x = [x_1, x_2, ..., x_n]^T \\
# c = [c_1, c_2, ..., c_n]^T$
#
# We can do the same transformations to the left hand sides of the constraints, expressing them as the sum product of the decision variables times the a coefficients. And, we may as well express all the left hand sides of the constraints as the product of a matrix A that contains all the different coefficients times the decision variable vector.
#
# $\max or \min z = \sum_{j=1}^{n}{c_j·x_j} \\
# s.t.
# \sum_{j=1}^{n}{a_{1j}·x_j}\leq b_1 \\
# \sum_{j=1}^{n}{a_{2j}·x_j}\leq b_2 \\
# ...
# \sum_{j=1}^{n}{a_{mj}·x_j}\leq b_m$
#
# or
# $\max or \min z = c·x \\
# s.t. \\
# A·x \leq b \\
# A = \begin{bmatrix}
# a_{11} & a_{12} & ... & a_{1n}\\
# a_{21} & a_{22} & ... & a_{2n}\\
# ...\\
# a_{m1} & a_{m2} & ... & a_{mn}\\
# \end{bmatrix}$
#
# $b = [b_1, b_2, ..., b_m]$
#
# These alternative compact forms allow us to deal with problems with an arbitrary number of decision variables and an arbitrary number of constraints.
#
#
| docs/source/CLP/tutorials/CLP intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.8 64-bit (''base'': conda)'
# language: python
# name: python36864bitbaseconda8143bf76a2004fda854d7e537f0cc215
# ---
# +
# import MongoDB Client & necessary packages
from pymongo import MongoClient
import pprint
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# Requests sends and recieves HTTP requests
import requests
# Beautiful Soup parses HTML documents in python
from bs4 import BeautifulSoup
import json
import time
# -
# Search Page Number 1 - 200 = good to go
api_url = 'https://www.goodreads.com/search?page=1&q=Computer+Science&qid=iHFbTUVsHL&search_type=books&tab=books&utf8=%E2%9C%93'
r = requests.get(api_url)
r.status_code
# simple print raw text
import pprint
pprint.pprint(r.text[:1000])
# parses HTML documents in python
soup = BeautifulSoup(r.text, 'html.parser')
# ### Understanding the HTML strcuture and obtaining a random book info
#
# We are still in the search main page. We will obtain book info, including the sublink.
x = soup.find('table').find_all('tr')[17]
soup.find('table').find_all('tr')
# +
pprint.pprint(x.find('a',{'class':'bookTitle'}).span.text)
pprint.pprint(x.find('a',{'class':'authorName'}).span.text)
pprint.pprint(x.find('span',{'class':'greyText smallText uitext'}).span.text.strip())
pprint.pprint(x.find('a',{'class':'greyText'}).text)
# book sub-link
get_url = x.find('a',{'class':'bookTitle'})
url = get_url.get('href')
pprint.pprint('https://www.goodreads.com{}'.format(url))
# obtains other info for book
y = x.find('span',{'class':'greyText smallText uitext'}).text.replace('\n','')
y = ' '.join(y.split())
y = y.split(' — ')
for i in y:
pprint.pprint(i)
# -
# create columns strcuture to store data
columns = {'book_title':None,
'author_name':None,
'avg_rating':None,
'rating_count':None,
'publish year':None,
'edition':None,
'link':None}
# ### Web Scraping Part 1 - Obtain book info from Search Bar Page
#
# You should always click on the second page in the search page. If you in the default page, the search page is without number and you CANNOT do web scraping.
#
# For example:
# * [Computer Science Default Page](https://www.goodreads.com/search?utf8=%E2%9C%93&q=Computer+Science&search_type=books) - without page number
# * [Computer Science Page 2](https://www.goodreads.com/search?page=2&q=Computer+Science&qid=A44Zff48B9&search_type=books&tab=books&utf8=%E2%9C%93) - has ***page=2&q=Computer+Science*** in the link
#
# * [Mathematics Default Page](https://www.goodreads.com/search?utf8=%E2%9C%93&q=Mathematics&search_type=books)
# * [Mathematics Page 2](https://www.goodreads.com/search?page=2&q=Mathematics&qid=tHGardUG6k&search_type=books&tab=books&utf8=%E2%9C%93) - has ***page=2&q=Mathematics***
#
# Then you can use search bar and replace the link inside the code to do simple web scraping
#
# You should:
# * replace the url
# * change ```for i range(1, 2)``` to ```for i range(1,101)``` 101 means 100 search pages, total 2000 books related to your subject
data = []
for i in range(1, 2):
# replace your url with genre or subject that you are interested in
api_url = 'https://www.goodreads.com/search?page={}&q=Computer+Science&qid=iHFbTUVsHL&search_type=books&tab=books&utf8=%E2%9C%93'.format(i)
r = requests.get(api_url)
soup = BeautifulSoup(r.text, 'html.parser')
# sleep time to 1 second
time.sleep(1)
# obtain clean data
for x in soup.find('table').find_all('tr'):
columns = {}
columns['book_title'] = x.find('a',{'class':'bookTitle'}).span.text
columns['author_name'] = x.find('a',{'class':'authorName'}).span.text
y = x.find('span',{'class':'greyText smallText uitext'}).text.replace('\n','')
y = ' '.join(y.split())
y = y.split(' — ')
if len(y) == 4:
columns['avg_rating'] = y[0]
columns['rating_count'] = y[1]
columns['publish year'] = y[2]
columns['edition'] = y[3]
else:
columns['avg_rating'] = y[0]
columns['rating_count'] = y[1]
columns['publish year'] = None
columns['edition'] = y[2]
get_url = x.find('a',{'class':'bookTitle'})
url = get_url.get('href')
columns['link'] = 'https://www.goodreads.com{}'.format(url)
data.append(columns)
len(data)
# 20 books on the first search page
for i in data:
print(i)
# convert to dataframe
new_data = pd.DataFrame(data)
new_data.head()
# create book MongoDB
client = MongoClient('localhost', 27017)
db = client['book']
book_info = db['book_info']
# insert your data
book_info.insert_many(data)
# see how many book stored in your database
book_info.count()
# print first 5 sublinks of books
for x in book_info.find({},{'_id':False}).limit(5):
print(x['link'])
# ### Web Scraping Part 2 - Obtain book info from Sublink Page
# After we have obtain the sublink of each book, we can loop through each link and obtain more detailed info for each book.
#
# We will also repeat the step to understand the HTML structure of the sublink to better obtain the book info
# store sublinks into a list
temp_list = []
for x in book_info.find({},{'_id':False}):
temp_list.append(x['link'])
temp_list[:5]
# one sublink
api_url = 'https://www.goodreads.com/book/show/533070.Computer_Science?from_search=true&from_srp=true&qid=iHFbTUVsHL&rank=21'
r = requests.get(api_url)
r.status_code
soup = BeautifulSoup(r.text, 'html.parser')
# title
soup.find('h1',{'class':'gr-h1 gr-h1--serif'}).text.strip()
# author - may more than 1
for x in soup.find_all('div',{'class':'authorName__container'}):
print(x.find('a',{'class':"authorName"}).span.text)
# use regular expression to find info
import re
# rating
float(soup.find('span',{'itemprop':"ratingValue"}).text.strip())
# total rating
soup.find('meta',{'itemprop':"ratingCount"}).text.replace('\n','').strip()
# rating count Number ONLY
rateCount = 0
for i in (re.findall(r'\d+', soup.find('meta',{'itemprop':"ratingCount"}).text.replace('\n','').strip())):
rateCount = int(i)
print(rateCount)
# review count Number ONLY
reviewCount = 0
for i in (re.findall(r'\d+', soup.find('meta',{'itemprop':"reviewCount"}).text.replace('\n','').strip())):
reviewCount = int(i)
print(reviewCount)
# format
soup.find('span',{'itemprop':"bookFormat"}).text.replace('\n','').strip()
# number of pages
soup.find('span',{'itemprop':"numberOfPages"}).text.replace('\n','').strip()
# number of pages Number ONLY
numberOfPages = 0
for i in (re.findall(r'\d+', soup.find('span',{'itemprop':"numberOfPages"}).text.replace('\n','').strip())):
numberOfPages = int(i)
numberOfPages
# language
soup.find('div',{'itemprop':"inLanguage"}).text
# ISBN
ISBN = ''
for idx, val in enumerate(soup.find('div',{'id':"bookDataBox"}).find_all('div',{'class':'clearFloats'})):
val = val.text.replace('\n','').strip()
val = val.replace(' ','')
if 'ISBN' in val:
ISBN += val
print(ISBN)
# data strcuture
second_columns = {'book_title':None,
'author_name':None,
'avg_rating':None,
'rating_count':None,
'review_count':None,
'format':None,
'number_of_page':None,
'language':None,
'ISBN':None}
# start new database book detail
db = client['book']
book_detail = db['book_detail']
# ### Web Scraping Part 2 - Continue...
# If you have tried to understand the HTML structure, you may realize that there are some issues.
# Some include:
# * book may have more than two authors
# * book format not provided
# * number of page not provided
# * language of book not provided
# * ISBN not provided
#
# Therefore, I have included many for try-except block to prevent the for loop crash
#
# You should change `temp_list[:5]` to `temp_list` in the beginning of the `for loop` if you want to obtain all the books!
book_data = []
for x in temp_list[:5]:
# url
api_url = '{}'.format(x)
r = requests.get(api_url)
soup = BeautifulSoup(r.text, 'html.parser')
# time sleep
time.sleep(6)
# calling second_columns
second_columns = {}
# title
second_columns['book_title'] = soup.find('h1',{'class':'gr-h1 gr-h1--serif'}).text.strip()
# author
if len(soup.find_all('div',{'class':'authorName__container'})) >= 2:
second_columns['author_name'] = []
for x in soup.find_all('div',{'class':'authorName__container'}):
second_columns['author_name'].append(x.find('a',{'class':"authorName"}).span.text)
else:
for x in soup.find_all('div',{'class':'authorName__container'}):
second_columns['author_name'] = x.find('a',{'class':"authorName"}).span.text
# avg rating
second_columns['avg_rating'] = float(soup.find('span',{'itemprop':"ratingValue"}).text.strip())
rateCount = 0
for i in (re.findall(r'\d+', soup.find('meta',{'itemprop':"ratingCount"}).text.replace('\n','').strip())):
rateCount = int(i)
second_columns['rating_count'] = rateCount
# Below: this line of code is to obtain rating with words such as 311 ratings
# second_columns['rating_count'] = soup.find('meta',{'itemprop':"ratingCount"}).text.replace('\n','').strip()
# review count
reviewCount = 0
for i in (re.findall(r'\d+', soup.find('meta',{'itemprop':"reviewCount"}).text.replace('\n','').strip())):
reviewCount = int(i)
second_columns['review_count'] = reviewCount
# Below: this line of code is to obtain rating with words such as 23 reviews
# second_columns['review_count'] = soup.find('meta',{'itemprop':"reviewCount"}).text.replace('\n','').strip()
# format
try:
second_columns['format'] = soup.find('span',{'itemprop':"bookFormat"}).text.replace('\n','').strip()
except:
second_columns['format'] = None
# number of pages
try:
numberOfPages = 0
for i in (re.findall(r'\d+', soup.find('span',{'itemprop':"numberOfPages"}).text.replace('\n','').strip())):
numberOfPages = int(i)
second_columns['number_of_page'] = numberOfPages
# soup.find('span',{'itemprop':"numberOfPages"}).text.replace('\n','').strip()
except:
second_columns['number_of_page'] = None
# language
try:
second_columns['language'] = soup.find('div',{'itemprop':"inLanguage"}).text
except:
second_columns['language'] = None
# ISBN
ISBN = ''
for idx, val in enumerate(soup.find('div',{'id':"bookDataBox"}).find_all('div',{'class':'clearFloats'})):
val = val.text.replace('\n','').strip()
val = val.replace(' ','')
if 'ISBN' in val:
ISBN += val
second_columns['ISBN'] = ISBN
# store data in MongoDB
book_data.append(second_columns)
# do a search if a book exist
if len(list(book_detail.find({'book_title':soup.find('h1',{'class':'gr-h1 gr-h1--serif'}).text.strip()}))):
continue
else:
book_detail.insert_one(second_columns)
# search a book in your mongoDB - related to above for loop block
if len(list(book_detail.find({'book_title':'Computer Science With Python (textbook XII)'}))) > 0:
print('book exists')
else:
print("book doesn't exist")
book_data[:5]
book_detail.estimated_document_count()
list(book_detail.find({},{'_id':False}).limit(10))
df = pd.DataFrame(book_detail.find({},{'_id':False}))
df.head()
df.tail(5)
df.info()
df.describe()
| Goodreads Web Scraping Notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
sns.set(style='whitegrid')
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
import sis
import sis_visualizations as visualizations
from rationale_objects import Rationale, BeerReview, BeerReviewContainer, SIS_RATIONALE_KEY
from IPython.core.display import display, HTML
# -
brc_asp0 = BeerReviewContainer.load_data('../rationale_results/beer_reviews/asp0')
brc_asp1 = BeerReviewContainer.load_data('../rationale_results/beer_reviews/asp1')
brc_asp2 = BeerReviewContainer.load_data('../rationale_results/beer_reviews/asp2')
# +
# Want cases where only 1 SIS and example predicted as interesting for all aspects
asp0_idxs = [review.i for review in brc_asp0.get_all_reviews() \
if len(review.get_rationales(SIS_RATIONALE_KEY)) == 1]
asp1_idxs = [review.i for review in brc_asp1.get_all_reviews() \
if len(review.get_rationales(SIS_RATIONALE_KEY)) == 1]
asp2_idxs = [review.i for review in brc_asp2.get_all_reviews() \
if len(review.get_rationales(SIS_RATIONALE_KEY)) == 1]
# -
common_idxs = sorted(list(set(asp0_idxs).intersection(set(asp1_idxs).intersection(set(asp2_idxs)))))
# +
palette = sns.color_palette('Dark2')
def html_for_i(i):
asp0_sis = brc_asp0.get_review(i).get_rationales(SIS_RATIONALE_KEY)[0]
asp1_sis = brc_asp1.get_review(i).get_rationales(SIS_RATIONALE_KEY)[0]
asp2_sis = brc_asp2.get_review(i).get_rationales(SIS_RATIONALE_KEY)[0]
html = visualizations.highlight_multi_rationale(brc_asp0.get_review(i), [asp0_sis, asp1_sis, asp2_sis],
brc_asp0.index_to_token, underline_annots=False,
color_palette=palette)
return html
# -
for i in common_idxs:
try:
html = html_for_i(i)
print('i: ', i)
display(HTML(html))
except AssertionError:
continue # rationales weren't disjoint
# +
legend_html = visualizations.make_legend(3, labels=['Appearance', 'Aroma', 'Palate'], color_palette=palette)
html = html_for_i(664)
display(HTML(html+legend_html))
# -
| notebooks/SIS on Beer Reviews - Multi-aspect Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#default_exp train_adjacent
# -
#export
from rsna_retro.imports import *
from rsna_retro.metadata import *
from rsna_retro.preprocess import *
from rsna_retro.train import *
from rsna_retro.train3d import *
column='SeriesInstanceUID'
df_series = Meta.df_any
df_series
fn = get_pil_fn(path_jpg256)
sop = 'ID_0969176c0'
idx = df_series.index.get_loc(sop)
img = fn(sop)
img
# +
# sid = df_series.loc[sop].SeriesInstanceUID
# prev_row = df_series.iloc[idx-1]
# prev_item = df_series.index[idx-1]
# prev_row
# tt = ToTensor()
# # if prev_row.SeriesInstanceUID == sid:
# prev = tt(fn(prev_item))
# -
list(range(1,2))
#export
class TfmSlice:
def __init__(self, df, path, windowed=False, num_adj=1):
self.fn = get_pil_fn(path)
self.tt = ToTensor()
self.df = df
self.windowed = windowed
self.num_adj = num_adj
def get_adj(self, idx, x_mid, sid_mid):
if idx < 0 or idx >= self.df.shape[0] \
or self.df.iloc[idx].SeriesInstanceUID != sid_mid:
return torch.zeros_like(x_mid)
adj_item = self.df.index[idx]
return self.tt(self.fn(adj_item))
def __call__(self, item):
idx = self.df.index.get_loc(item)
sid = self.df.loc[item].SeriesInstanceUID
x = self.tt(self.fn(item))
x_prevs, x_nexts = [], []
for i in range(1, self.num_adj+1):
x_prevs.append(self.get_adj(idx-i, x, sid)[:1])
x_nexts.append(self.get_adj(idx+i, x, sid)[:1])
x = x if self.windowed else x[:1]
return TensorCTScan(torch.cat([*x_prevs, x, *x_nexts]))
tfm = TfmSlice(Meta.df_comb, path_jpg256, windowed=True)
tfm(sop).shape
tfm = TfmSlice(Meta.df_comb, path_jpg256, num_adj=2)
tfm(sop).shape
dsets = Datasets(Meta.df_comb.index, [[tfm],[fn2label,EncodedMultiCategorize(htypes)]], splits=Meta.splits_stg1)
dsets[10][0].shape
x,y = dsets[10]
f, ax = plt.subplots(1,5,figsize=(20,20))
for idx,c in enumerate(x):
ax[idx].imshow(c)
# +
#export
mean_5c = [mean[0], *mean, mean[0]]
std_5c = [std[0], *std, std[0]]
mean_adj = [mean[0]]*3
std_adj = [std[0]]*3
# -
# ## Train
#export
def get_adj_data(bs, sz, splits, img_dir=path_jpg256, windowed=False, num_adj=1, df=Meta.df_comb, test=False):
tfm = TfmSlice(df, img_dir, windowed=windowed, num_adj=num_adj)
num_c = (1+2*num_adj)
m,s = (mean_5c, std_5c) if windowed else ([mean[0]]*num_c, [std[0]]*num_c)
return get_data_gen(L(list(df.index)), bs=bs, img_tfm=tfm, sz=sz, splits=splits,
mean=m, std=s, test=test)
dls = get_adj_data(512, 128, Meta.splits_stg1, windowed=False)
xb, yb = dls.one_batch()
xb.shape, yb.shape
learn = get_learner(dls, partial(xresnet18, c_in=5))
do_fit(learn, 1, 4e-2)
# ## Test
#export
def get_adj_test_data(bs=512, sz=256, tst_dir='tst_jpg', windowed=False, num_adj=1, df=Meta.df_tst):
tst_fns = df.index.values
tst_splits = [L.range(tst_fns), L.range(tst_fns)]
tst_dbch = get_adj_data(bs, sz, tst_splits, path/tst_dir, df=df, windowed=windowed, num_adj=num_adj, test=True)
# tst_dbch = get_data_gen(tst_fns, bs=bs, img_tfm=get_pil_fn(path/tst_dir), sz=sz, splits=tst_splits, test=True)
tst_dbch.c = 6
return tst_dbch
dls_tst = get_adj_test_data(512, 128)
xb, = dls_tst.one_batch()
xb.shape
# ## Export
#hide
from nbdev.export import notebook2script
notebook2script()
| 05_train_adjacent.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Curve fitting in python
# ## <NAME> - 2015
# An introduction to various curve fitting routines useful for physics work.
#
# The first cell is used to import additional features so they are available in our notebook. `matplotlib` provides plotting functions and `numpy` provides math and array functions.
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
# Next we define `x` as a linear space with 100 points that range from 0 to 10.
x = np.linspace(0,10,100)
# `y` is mock data that we create by linear function with a slope of 1.45. We also add a small amount of random data to simulate noise as if this were a measured quantity.
y = 1.45 * x + 1.3*np.random.random(len(x))
plt.plot(x,y,".")
# The data is pretty clearly linear, but we can fit a line to determine the slope. A 1st order polynomial is a line, so we use `polyfit`:
# execute the fit on the data; a 1-dim fit (line)
fit = np.polyfit(x, y, 1,full=True)
# The fit is stored in a variable called `fit` which has several elements. We can print them out with nice labels using the following cell:
print("coeffients:", fit[0])
print("residuals:", fit[1])
print("rank:", fit[2])
print("singular_values:", fit[3])
print("rcond:", fit[4])
# The main thing we want is the list of coefficients. These are the values in the polynomial that was a best fit. We can create a function (called `f`) that is the best fit polynomial. Then it is easy to plot both together and see that the fit is reasonable.
f = np.poly1d(fit[0]) # create a function using the fit parameters
plt.plot(x,y)
plt.plot(x,f(x))
# ## General function fitting
# ### For more than just polynomials
# > "When choosing a fit, Polynomial is almost always the wrong answer"
#
# Often there is a better model that describes the data. In most cases this is a known function; something like a power law or an exponential. In these cases, there are two options:
# 1. Convert the variables so that a plot will be linear (i.e. plot the `log` of your data, or the square root, or the square etc.). This is highly effective becuase a linear fit is always (yes always) more accurate than a fit of another function.
# 2. Perform a nonlinear fit to the function that models your data. We'll illustrate this below and show how even a "decent" fit gives several % error.
#
# First, we import the functions that do nonlinear fitting:
from scipy.optimize import curve_fit
# Then define a function that we expect models our system. In this case, exponential decay with an offset.
def func(x, a, b, c):
return a * np.exp(-b * x) + c
# Create a pure (i.e. exact) set of data with some parameters, and then simulate some data of the same system (by adding random noise).
y = func(x, 2.5, 0.6, 0.5)
ydata = y + 0.2 * np.random.normal(size=len(x))
# Now carry out the fit. `curve_fit` returns two outputs, the fit parameters, and the covariance matrix. We won't use the covariance matrix yet, but it's good practice to save it into a variable.
parameters, covariance = curve_fit(func, x, ydata)
parameters #the fit results for a, b, c
# We can see the parameters are a reasonable match to the pure function we created above. Next, we want to create a "best fit" data set but using the parameters in the model function `func`. The "splat" operator is handy for this, it unpacks the `parameters` array into function arguments `a`, `b`, and `c`.
yfit = func(x, *parameters)
# the splat operator unpacks an array into function arguments
plt.plot(x,ydata,".")
plt.plot(x,yfit)
plt.plot(x,y)
# Looks pretty good as far as fits go. Let's check out the error:
plt.plot(x,((yfit-y)/y)*100)
plt.title("Fit error %")
# To further illustrate the variation in this fit, repeat all the cells (to get new random noise in the data) and you'll see the fit changes. Sometimes, the error is as large as 10%. Compare this to a linear fit of log data and I bet you see much less variation in the fit!
# ## Modeling by rescaling data
# ### The "fit a line to anything" approach
# > "With a small enough data set, you can always fit it to a line"
ylog = np.log(ydata[:25] - ydata[-1])
plt.plot(x[:25],ylog,".")
fitlog = np.polyfit(x[:25], ylog[:25], 1,full=True)
fitlog
ylog.shape
flog = np.poly1d(fitlog[0])
plt.plot(x[:25],ylog)
plt.plot(x[:25],flog(x[:25]))
# Now to finally back out the exponential from the linear fit:
ylogfit = np.exp(flog(x))
plt.plot(x,ylogfit+ydata[-1])
plt.plot(x,ydata)
# Clearly the tail is a bit off, the next iteration is to average the tail end and use that as the y shift instead of using just the last point.
yshift = np.average(ydata[-20:])
yshift
ylog = np.log(ydata[:25] - yshift)
fitlog = np.polyfit(x[:25], ylog[:25], 1,full=True)
flog = np.poly1d(fitlog[0])
plt.plot(x[:25],ylog)
plt.plot(x[:25],flog(x[:25]))
ylogfit = np.exp(flog(x))
plt.plot(x,ylogfit+yshift)
plt.plot(x,ydata)
# Very nice.
| Curve Fitting/Curve fitting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import json
import pickle
import random
from collections import defaultdict, Counter
from indra.literature.adeft_tools import universal_extract_text
from indra.databases.hgnc_client import get_hgnc_name, get_hgnc_id
from adeft.discover import AdeftMiner
from adeft.gui import ground_with_gui
from adeft.modeling.label import AdeftLabeler
from adeft.modeling.classify import AdeftClassifier
from adeft.disambiguate import AdeftDisambiguator
from adeft_indra.ground.ground import AdeftGrounder
from adeft_indra.model_building.s3 import model_to_s3
from adeft_indra.model_building.escape import escape_filename
from adeft_indra.db.content import get_pmids_for_agent_text, get_pmids_for_entity, \
get_plaintexts_for_pmids
# -
adeft_grounder = AdeftGrounder()
shortforms = ['TS']
model_name = ':'.join(sorted(escape_filename(shortform) for shortform in shortforms))
results_path = os.path.abspath(os.path.join('../..', 'results', model_name))
miners = dict()
all_texts = {}
for shortform in shortforms:
pmids = get_pmids_for_agent_text(shortform)
if len(pmids) > 10000:
pmids = random.choices(pmids, k=10000)
text_dict = get_plaintexts_for_pmids(pmids, contains=shortforms)
text_dict = {pmid: text for pmid, text in text_dict.items() if len(text) > 5}
miners[shortform] = AdeftMiner(shortform)
miners[shortform].process_texts(text_dict.values())
all_texts.update(text_dict)
# +
longform_dict = {}
for shortform in shortforms:
longforms = miners[shortform].get_longforms(weight_decay_param=0.001)
longforms = [(longform, count, score) for longform, count, score in longforms
if count*score > 2]
longform_dict[shortform] = longforms
combined_longforms = Counter()
for longform_rows in longform_dict.values():
combined_longforms.update({longform: count for longform, count, score
in longform_rows})
grounding_map = {}
names = {}
for longform in combined_longforms:
groundings = adeft_grounder.ground(longform)
if groundings:
grounding = groundings[0]['grounding']
grounding_map[longform] = grounding
names[grounding] = groundings[0]['name']
longforms, counts = zip(*combined_longforms.most_common())
pos_labels = []
# -
list(zip(longforms, counts))
grounding_map, names, pos_labels = ground_with_gui(longforms, counts,
grounding_map=grounding_map,
names=names, pos_labels=pos_labels, no_browser=True, port=8890)
result = [grounding_map, names, pos_labels]
result
grounding_map, names, pos_labels = [{'solitary tract': 'ungrounded',
'suppressor t cells': 'ungrounded',
't suppressor': 'ungrounded',
'tactile stimulation': 'ungrounded',
'tafro syndrome': 'ungrounded',
'tail suspension': 'ungrounded',
'takotsubo syndrome': 'MESH:D054549',
'talaporfin sodium': 'ungrounded',
'tape stripping': 'ungrounded',
'tapioca starch': 'ungrounded',
'tardive syndrome': 'ungrounded',
'target sequence': 'ungrounded',
'target strand': 'ungrounded',
'tea saponins': 'ungrounded',
'technical skills': 'ungrounded',
'temperature sensitive': 'ungrounded',
'temperature shift': 'ungrounded',
'template strand': 'ungrounded',
'template switching': 'ungrounded',
'temporal summation': 'ungrounded',
'tensile strength': 'MESH:D013718',
'terminal spikelet': 'ungrounded',
'test stimulus': 'ungrounded',
'testosterone': 'CHEBI:CHEBI:17347',
'tetanic stimulation': 'ungrounded',
'the striatum': 'ungrounded',
'thermal sensation': 'ungrounded',
'thin section': 'ungrounded',
'thiostrepton': 'CHEBI:CHEBI:29693',
'thiosulfate': 'CHEBI:CHEBI:26977',
'threonine synthase': 'ungrounded',
'threshold selector': 'ungrounded',
'threshold shift': 'ungrounded',
'threshold suspend': 'ungrounded',
'thrombospondin': 'FPLX:THBS',
'thymidilate synthase': 'HGNC:12361',
'thymidine synthase': 'HGNC:12361',
'thymidylate synthase': 'HGNC:12361',
'thymidylate synthetase': 'HGNC:12361',
'thymostimulin': 'MESH:C031407',
'thyroid storm': 'MESH:D013958',
'thyrotoxic storm': 'MESH:D013958',
'timothy syndrome': 'DOID:DOID:0060173',
'tissue specific': 'ungrounded',
'tobacco smoke': 'MESH:D014028',
'tobacco smoke exposure': 'MESH:D014028',
'tocopherol succinate': 'CHEBI:CHEBI:135821',
'tocopheryl hemisuccinate': 'CHEBI:CHEBI:135821',
'tocopheryl succinate': 'CHEBI:CHEBI:135821',
'tokishakuyakusan': 'MESH:C413220',
'toona sinensis': 'ungrounded',
'tooth sensitive': 'ungrounded',
'top strand': 'ungrounded',
'total salvage': 'ungrounded',
'total saponins': 'CHEBI:CHEBI:26605',
'total score': 'ungrounded',
'total solids': 'ungrounded',
'total soy saponins': 'ungrounded',
'total starch': 'ungrounded',
'total sugar': 'MESH:D000073893',
'tourette s syndrome': 'MESH:D005879',
'tourette syndrome': 'MESH:D005879',
'toxicity stress': 'ungrounded',
'tpa + sb': 'CHEBI:CHEBI:30513',
'tracheal stenosis': 'MESH:D014135',
'tractus solitarius': 'ungrounded',
'trailing sif': 'ungrounded',
'training status': 'ungrounded',
'training stimuli': 'ungrounded',
'trans sialidase': 'IP:IPR008377',
'trans splicing': 'MESH:D020040',
'trans stilbene': 'CHEBI:CHEBI:36007',
'transcribed strand': 'ungrounded',
'transcription sites': 'ungrounded',
'transcription slippage': 'ungrounded',
'transferrin saturation': 'ungrounded',
'transition state': 'ungrounded',
'transmural stimulation': 'ungrounded',
'transseptal': 'ungrounded',
'transsulfuration': 'GO:GO:0019346',
'traumatic stress': 'ungrounded',
'treated with endoscopic surgery': 'ungrounded',
'triangularis sterni': 'ungrounded',
'triceps skinfold': 'ungrounded',
'triceps surae': 'ungrounded',
'triclosan': 'CHEBI:CHEBI:164200',
'trigeminovascular system': 'ungrounded',
'trimethoprim sulfamethoxazole': 'CHEBI:CHEBI:3770',
'tripartite synapses': 'ungrounded',
'troglitazone sulfate': 'MESH:C549564',
'trophectoderm stem': 'ungrounded',
'trophoblast stem': 'BTO:BTO:0004774',
'trophoblast stem cells': 'BTO:BTO:0004774',
'tropical sprue': 'MESH:D013182',
'tryptophan synthase': 'MESH:D014367',
'ts65dn': 'ungrounded',
'tuberous sclerosis': 'MESH:D014402',
'tuberous shape': 'ungrounded',
'tumor size': 'EFO:0004134',
'tumor sphere': 'ungrounded',
'tumor spheroids': 'ungrounded',
'tumor stiffness': 'ungrounded',
'tumor supernatant': 'ungrounded',
'tumor suppressing': 'ungrounded',
'tumor suppressor': 'MESH:D016147',
'turbulence slope': 'ungrounded',
'turner s syndrome': 'MESH:D014424',
'turner syndrome': 'MESH:D014424'},
{'MESH:D054549': 'Takotsubo Cardiomyopathy',
'MESH:D013718': 'Tensile Strength',
'CHEBI:CHEBI:17347': 'testosterone',
'CHEBI:CHEBI:29693': 'thiostrepton',
'CHEBI:CHEBI:26977': 'thiosulfate',
'FPLX:THBS': 'THBS',
'HGNC:12361': 'TYMS',
'MESH:C031407': 'thymostimulin',
'MESH:D013958': 'Thyroid Crisis',
'DOID:DOID:0060173': 'Timothy syndrome',
'MESH:D014028': 'Tobacco Smoke Pollution',
'CHEBI:CHEBI:135821': 'tocopherol succinate',
'MESH:C413220': 'toki-shakuyaku-san',
'CHEBI:CHEBI:26605': 'saponin',
'MESH:D000073893': 'Sugars',
'MESH:D005879': 'Tourette Syndrome',
'CHEBI:CHEBI:30513': 'antimony atom',
'MESH:D014135': 'Tracheal Stenosis',
'IP:IPR008377': 'Trypanosome sialidase',
'MESH:D020040': 'Trans-Splicing',
'CHEBI:CHEBI:36007': 'trans-stilbene',
'GO:GO:0019346': 'transsulfuration',
'CHEBI:CHEBI:164200': 'triclosan',
'CHEBI:CHEBI:3770': 'co-trimoxazole',
'MESH:C549564': 'troglitazone sulfate',
'BTO:BTO:0004774': 'trophoblast stem cell',
'BTO:BTO:0004774': 'trophoblast stem cell',
'MESH:D013182': 'Sprue, Tropical',
'MESH:D014367': 'Tryptophan Synthase',
'MESH:D014402': 'Tuberous Sclerosis',
'EFO:0004134': 'tumor size',
'MESH:D016147': 'Genes, Tumor Suppressor',
'MESH:D014424': 'Turner Syndrome'},
['BTO:BTO:0004774',
'CHEBI:CHEBI:135821',
'MESH:C413220',
'DOID:DOID:0060173',
'FPLX:THBS',
'HGNC:12361',
'IP:IPR008377',
'MESH:D005879',
'MESH:D054549',
'MESH:D014028',
'MESH:D014402',
'MESH:D014424']]
excluded_longforms = []
# +
grounding_dict = {shortform: {longform: grounding_map[longform]
for longform, _, _ in longforms if longform in grounding_map
and longform not in excluded_longforms}
for shortform, longforms in longform_dict.items()}
result = [grounding_dict, names, pos_labels]
if not os.path.exists(results_path):
os.mkdir(results_path)
with open(os.path.join(results_path, f'{model_name}_preliminary_grounding_info.json'), 'w') as f:
json.dump(result, f)
# -
additional_entities = {}
unambiguous_agent_texts = {}
# +
labeler = AdeftLabeler(grounding_dict)
corpus = labeler.build_from_texts((text, pmid) for pmid, text in all_texts.items())
agent_text_pmid_map = defaultdict(list)
for text, label, id_ in corpus:
agent_text_pmid_map[label].append(id_)
entity_pmid_map = {entity: set(get_pmids_for_entity(*entity.split(':', maxsplit=1),
major_topic=True))for entity in additional_entities}
# -
intersection1 = []
for entity1, pmids1 in entity_pmid_map.items():
for entity2, pmids2 in entity_pmid_map.items():
intersection1.append((entity1, entity2, len(pmids1 & pmids2)))
intersection2 = []
for entity1, pmids1 in agent_text_pmid_map.items():
for entity2, pmids2 in entity_pmid_map.items():
intersection2.append((entity1, entity2, len(set(pmids1) & pmids2)))
intersection1
intersection2
# +
all_used_pmids = set()
for entity, agent_texts in unambiguous_agent_texts.items():
used_pmids = set()
for agent_text in agent_texts:
pmids = set(get_pmids_for_agent_text(agent_text))
new_pmids = list(pmids - all_texts.keys() - used_pmids)
text_dict = get_plaintexts_for_pmids(new_pmids, contains=agent_texts)
corpus.extend([(text, entity, pmid) for pmid, text in text_dict.items()])
used_pmids.update(new_pmids)
all_used_pmids.update(used_pmids)
for entity, pmids in entity_pmid_map.items():
new_pmids = list(set(pmids) - all_texts.keys() - all_used_pmids)
if len(new_pmids) > 10000:
new_pmids = random.choices(new_pmids, k=10000)
text_dict = get_plaintexts_for_pmids(new_pmids, contains=['RTCA', 'RTCD1', 'RPC', 'RTC1', 'RTC'])
corpus.extend([(text, entity, pmid) for pmid, text in text_dict.items()])
# -
names.update(additional_entitie)
# +
# %%capture
classifier = AdeftClassifier(shortforms, pos_labels=pos_labels, random_state=1729)
param_grid = {'C': [100.0], 'max_features': [10000]}
texts, labels, pmids = zip(*corpus)
classifier.cv(texts, labels, param_grid, cv=5, n_jobs=5)
# -
classifier.stats
disamb = AdeftDisambiguator(classifier, grounding_dict, names)
disamb.dump(model_name, results_path)
print(disamb.info())
model_to_s3(disamb)
| model_notebooks/TS/model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import nltk
import numpy as np
import pandas as pd
import gensim
from gensim import corpora, models, similarities
#reading the excel file containing the dataset
df = pd.read_csv("../bible_data_set/bible_data_set.csv")
new = ['Matthew', 'Mark', 'Luke', 'John', 'Acts', 'Romans', '1 Corinthians',
'2 Corinthians', 'Galatians', 'Ephesians', 'Philippians', 'Colossians',
'1 Thessalonians', '2 Thessalonians', '1 Timothy', '2 Timothy', 'Titus', 'Philemon',
'Hebrews', 'James', '1 Peter', '2 Peter', '1 John', '2 John', '3 John', 'Jude',
'Revelation']
df['class'] = np.where(df['book'].isin(new), 1, 0)
df = df[['class','text']]
df.columns = ['label', 'data']
#word2vec
x=df['data'].values.tolist()
corpus = x
tok_corp= [nltk.word_tokenize(sent) for sent in corpus]
model = gensim.models.Word2Vec(tok_corp, min_count=1, size = 32, seed = 45)
#calculating the extra column with nouns in the respective sentence
noun = ['NN','NNS','NNP','NNPS']
df['noun'] = np.empty((len(df), 0)).tolist()
it=0
for i in tok_corp[0:1000]:
pos = nltk.pos_tag(i)
for n in pos:
if(n[1] in noun):
df.iloc[it]['noun']=df.iloc[it]['noun'].append(n[0])
it+=1
#saving the dataframe with nouns to csv
df.to_csv('../bible_data_set/data_with_noun.csv')
| Nouns_seggregation/noun_seggregation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to MulensModel
#
# How to create and plot a model and then add some data and fit for the source and blend fluxes.
#
# This example shows OGLE-2003-BLG-235/MOA-2003-BLG-53, the first microlensing planets. See Bond et al. 2004
# Import basic packages
import MulensModel
import matplotlib.pyplot as pl # MulensModel uses matplotlib for plotting.
# +
# Define a point lens model:
my_pspl_model = MulensModel.Model(t_0=2452848.06, u_0=0.133, t_E=61.5)
# Or a model with 2-bodies:
my_1S2L_model = MulensModel.Model(
t_0=2452848.06, u_0=0.133, t_E=61.5, rho=0.00096, q=0.0039, s=1.120,
alpha=43.8)
# -
# Plot those models:
my_pspl_model.plot_magnification(
t_range=[2452810, 2452890], subtract_2450000=True, color='red',
linestyle=':', label='PSPL')
my_1S2L_model.plot_magnification(
t_range=[2452810, 2452890], subtract_2450000=True, color='black',
label='1S2L')
pl.legend(loc='best')
pl.show()
# +
# Suppose you also had some data you want to import:
OGLE_data = MulensModel.MulensData(
file_name='../data/OB03235/OB03235_OGLE.tbl.txt', comments=['\\','|'])
MOA_data = MulensModel.MulensData(
file_name='../data/OB03235/OB03235_MOA.tbl.txt', phot_fmt='flux',
comments=['\\','|'])
# +
# Now suppose you wanted to combine the two together:
my_event = MulensModel.Event(
datasets=[OGLE_data, MOA_data], model=my_1S2L_model)
# And you wanted to plot the result:
my_event.plot_model(
t_range=[2452810,2452890], subtract_2450000=True, color='black')
my_event.plot_data(subtract_2450000=True, label_list=['OGLE','MOA'])
# MulensModel automatically fits for the source and blend flux for the
# given model.
# Customize the output
pl.legend(loc='best')
pl.title('OGLE-2003-BLG-235/MOA-2003-BLG-53')
pl.ylim(19., 16.5)
pl.xlim(2810,2890)
pl.show()
# -
# If you want to see how good the fit is, output the chi2:
print('Chi2 of the fit: {0:8.2f}'.format(my_event.get_chi2()))
# If you want to optimize the chi2, we leave it up to you to determine the best method for doing this.
| examples/.ipynb_checkpoints/MulensModelTutorial-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/NoCodeProgram/CodingTest/blob/main/mathBit/primeNumbers.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="djO9BHEGqjqM"
# Title : Prime Numbers
#
# Chapter : Math, Bit
#
# Link :
#
# ChapterLink :
#
# 문제: 주어진 숫자 n 보다 작은 모든 소수를 구하여라
# + colab={"base_uri": "https://localhost:8080/"} id="LBt9VQQWqilX" outputId="00f510fd-23d1-4850-d657-4604a199bf17"
def primeNumbers(n: int) -> int:
if n<=2:
return []
primes = []
for i in range (2,n):
prime = True
for j in range(2,i):
if i % j == 0:
prime = False
continue
if prime:
primes.append(i)
return primes
print(primeNumbers(50))
# + colab={"base_uri": "https://localhost:8080/"} id="U6RHRcVKNlUa" outputId="736aeca3-08c9-488c-abe5-866ff6cb1328"
# %timeit primeNumbers(10000)
# + colab={"base_uri": "https://localhost:8080/"} id="004uox2Uq7ah" outputId="ecb16ae9-f164-4bf8-efd3-89da8f7ef083"
import math
def primeNumbers2(n: int) -> int:
if n <= 2:
return []
numbers = [True]*n
numbers[0] = False
numbers[1] = False
for idx in range(2, int(math.sqrt(n)) + 1):
if numbers[idx] == True:
for i in range(idx*idx, n, idx):
numbers[i] = False
primes = []
for idx,prime in enumerate(numbers):
if prime==True:
primes.append(idx)
return primes
print(primeNumbers2(50))
# + id="q4Cm5co80I4z" colab={"base_uri": "https://localhost:8080/"} outputId="7df9f6f9-32dc-4af7-eed4-fbafd1232d04"
# %timeit primeNumbers2(10000)
| mathBit/primeNumbers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/tugbargn/Machine-Learning-/blob/main/Densenet.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="aoeupv1RVsIo"
# #DENSENET
#
# + [markdown] id="jnOfoPQCV6Yg"
# ###DENSENET 121
# + id="rSw7e3NxpDcS" colab={"base_uri": "https://localhost:8080/"} outputId="57dd4a39-7526-4898-9e8a-ee0bafb779ce"
import numpy as np
import pandas as pd
from keras.preprocessing import image
from PIL import Image
from scipy import misc
import seaborn as sns
from keras.models import Sequential
import tensorflow as tf
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from sklearn.model_selection import train_test_split
from keras.layers import Input, Dense, Convolution2D, MaxPooling2D, UpSampling2D, Softmax, ZeroPadding2D, GlobalAveragePooling2D,BatchNormalization, Flatten
from tensorflow.keras import datasets, layers, models
from tensorflow import keras
from tensorflow.keras import layers
from keras.layers.core import Flatten, Dense, Dropout
from tensorflow.keras.preprocessing import image
from keras.optimizers import Adam, SGD, Adamax, Adagrad
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.models import Model
from tensorflow.keras.applications.densenet import preprocess_input
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from keras.callbacks import ModelCheckpoint, EarlyStopping
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
vertical_flip=True,
horizontal_flip=True,
rotation_range=90,
width_shift_range=0.1,
height_shift_range=0.1,
validation_split=0.3)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
'/content/drive/MyDrive/Colab Notebooks/Train',
target_size=(224, 224),
batch_size=32,
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
'/content/drive/MyDrive/Colab Notebooks/Train',
target_size=(224, 224),
batch_size=32,
class_mode='categorical')
test_generator = test_datagen.flow_from_directory(
'/content/drive/MyDrive/Colab Notebooks/Train',
target_size=(224, 224),
batch_size=32,
class_mode='categorical')
validation_datagen = ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_directory(
'/content/drive/MyDrive/Colab Notebooks/Train',
target_size=(224, 224),
batch_size=32,
class_mode='categorical')
y_pred = model.predict(test_generator)
STEP_SIZE_TRAIN=train_generator.n//train_generator.batch_size
STEP_SIZE_VALID=validation_generator.n//validation_generator.batch_size
# + id="gEBnmsaYiI_z"
def build_densenet121():
densenet = DenseNet121(weights='imagenet', include_top=False)
input = Input(shape=(SIZE, SIZE, N_ch))
x = Conv2D(3, (3, 3), padding='same')(input)
x = densenet(x)
x = GlobalAveragePooling2D()(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(256, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
# multi output
output = Dense(2,activation = 'softmax', name='root')(x)
# model
model = Model(input,output)
optimizer = Adam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=0.1, decay=0.0)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
model.summary()
history = model.fit(train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
epochs=10,
validation_data=validation_generator,
validation_steps=STEP_SIZE_VALID)
return model
model=build_densenet121()
# + id="BcCTJdjnUEVZ"
model = DenseNet121(include_top= True, weights='imagenet', input_shape=(224,224,3))
x = model.output
predictions = Dense(2, activation='softmax')(x)
model = Model(inputs = model.input, outputs=predictions)
for layer in model.layers:
layer.trainable = False
model.compile(optimizer='Adam',loss='categorical_crossentropy',metrics=['accuracy'])
history = model.fit(train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
epochs=10,
validation_data=validation_generator,
validation_steps=STEP_SIZE_VALID)
# + id="2JlC_6DRP1KO"
plt.plot(history.history['accuracy'],color = 'red')
plt.plot(history.history['val_accuracy'],color = 'blue')
plt.title('Densenet 121 Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train','Validation'], loc = 'best')
plt.show()
plt.plot(history.history['loss'],color = 'red')
plt.plot(history.history['val_loss'],color = 'blue')
plt.title('Densenet 121 Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train','Validation'],loc = 'best')
plt.show()
# + [markdown] id="xgQkZsWwWGPt"
# ###DENSENET 201
#
# + id="6EKNrElNWJ9F"
# + [markdown] id="VJVonglVWLhf"
# ###DENSENET 201
#
# + id="aCAQlxtSWO8s"
# + [markdown] id="tbVYcIjnWQP5"
# ###DENSENET 161
# + id="rYJ24TCqWTJ8"
# + [markdown] id="s_2MZbrTWT2g"
# ###DENSENET 169
#
# + id="ub606R73WWty"
| Densenet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.4 64-bit (''base'': conda)'
# language: python
# name: python37464bitbasecondad572fd0cb7ab471fbfff7fdb64f304d8
# ---
# # Bolivian Musicians - Artist Segmentation
#
# ### Overview
# Bolivian musicians are very talented but unfortunately they're not given the attention they deserve. This data-based project aims to promote bolivian artists and make them more visible for locals and the world.
#
# The project consists in:
# - The construction of a dataset of bolivian artists by fetching public Spotify data.
#
# - The application of data clustering techniques to segment the collected data and gain
# artists description.
# - The implementation of a web app for data visualization and knowledge-based artist recommendations.
#
#
# ### Problem Statement
# Since I could not find any dataset of bolivian artists, I had to build one from scratch. As far as I know this is the first dataset of Bolivian artists. The details of the data collection are [here](https://github.com/leanguardia/bolivian-music#dataset-construction).
#
# By nature, music is very varied and therefore artists are even more. Artists can play different genres of music in different languages, with collaborations and sound influences. There are old, traditional and trending artists with different listener targets. For this matter, unsupervised learning is applied to perform automatic artists segmentation or clustering. The expected outcome is to gain deeper understanding of the data and use it the organize the recommendation system.
#
# This notebook contains:
# - Data Analysis
# - Artist Clustering
# - Results and Discussion
# - Conclusions and Recommendations
#
# ### Metrics
# - **Number of Partitions:** Different number of groups is tested with different subsets of the dataset and with extended features as well. Different interpretations of this values are discussed in the context of the business problem.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from ast import literal_eval
# %matplotlib inline
df = pd.read_csv('data/artists.csv')
df.shape
df.head()
# ## Analysis
# ### Data Exploration
df.info()
quantitative_cols = ['popularity', 'followers']
desc = df[quantitative_cols].describe()
desc
qualitative_cols = [col for col in df.columns if col not in quantitative_cols]
df[qualitative_cols].describe()
# #### Missing Values
df.isna().sum()
# **Note:** Despite the first 4 columns not having any missing values. The `genre` column should be treated differently becuase it represents a list. Counting missing values should be treated differently because this series is a string, so `[]` evaluates as _non-null_.
# Genres
no_genres = df[df.genres == '[]']
print(f"There are {no_genres.shape[0]} / {df.shape[0]} artists with no genres associated")
no_genres
# img_url
df[df.img_url.isnull()]
# It is not necessary to remove this elements because they represent artists existing in real life who might have not created an account in Spotify or simply did not upload any picture.
# #### Duplicated values
# Duplicates by id or name.
df[df.duplicated('artist_id') | df.duplicated('name')]
# **Note:** Duplicates were removed right after this dataset was created, the implementation is in the `clean_artists` function found in `artists_collector.py`.
# ### Data Visualization
# #### Popularity
df.sort_values('popularity', ascending=False).head(10)
bins = np.arange(0, 60, 5)
ax = df.popularity.plot.hist(bins=bins, rwidth=0.9, figsize=(10,5));
ax.set_xlabel('Popularity bins'); ax.set_title('Popularity histogram')
plt.xticks(bins, bins);
# Bimodal distribution skewed to the right. Half of the data has 16 popularity points or less.
ax = df.boxplot(column='popularity', vert=False, figsize=(16,3));
ax.annotate(df.popularity.median(), (df.popularity.median(), 0.85))
ax.annotate(desc.popularity['25%'], (desc.popularity['25%'], 0.85))
ax.annotate(desc.popularity['75%'], (desc.popularity['75%'], 0.85));
# No outliers found.
# #### Followers
# +
fig, ax = plt.subplots(1, 1, figsize=(15,6))
plt.xscale('log')
positive_followers = df.followers[df.followers > 0]
no_followers = df[df.followers == 0].shape[0]
positive_followers_log = np.log10(positive_followers)
bin_edges = 10 ** np.arange(0, positive_followers_log.max()+0.1, 0.1)
ax.hist(positive_followers, bins=bin_edges, rwidth=0.90)
plt.bar(0.1, no_followers, width = 1)
xticks = [1,4,10, 40, 100, 400, 1000, 4000, 10000, 40000, 100000]
plt.xticks(xticks, xticks)
yticks = np.arange(0, 16, 2)
plt.yticks(yticks, yticks)
plt.title("Followers Distribution"); plt.grid()
plt.ylabel("Frequency"); plt.xlabel("Followers bins (log10)");
# -
# Outliers
ax = df.boxplot(column='followers', vert=False, figsize=(16,3));
# In the case of `followers`, many artists are outliers.
# +
Q1 = desc.followers['25%']
Q3 = desc.followers['75%']
IQR = Q3 - Q1
followers_limit = Q3 + 1.5 * IQR
outliers = df[(df.followers >= followers_limit)].copy()
print("Outliers:", outliers.shape[0], '/', df.shape[0], "elements with more than", followers_limit, 'followers',
"(", round(outliers.shape[0] / df.shape[0], 2) * 100, "%).")
outliers.head()
# -
def remove_outliers(df):
Q1 = df.followers.quantile(.25)
Q3 = df.followers.quantile(.75)
IQR = Q3 - Q1
followers_limit = Q3 + 1.5 * IQR
return df[df.followers <= followers_limit].copy()
non_outliers = remove_outliers(df)
print(non_outliers.shape)
ax = non_outliers.followers.plot.hist(rwidth=0.9, bins=np.arange(0,4800+100,100), figsize=(14,5))
ax.set_title('Followers distribution with no outliers'); ax.set_xlabel("Followers bins"); plt.grid()
# #### Popularity and Followers
# +
ax = df.plot.scatter(x='popularity', y='followers', figsize=(14,8))
ax.spines['right'].set_visible(False); ax.spines['top'].set_visible(False); plt.grid()
popularity_thrs = 40
followers_thrs = 18000
top_artists = df[(df.popularity >= popularity_thrs) | (df.followers >= followers_thrs)]
for i, row in top_artists.iterrows():
ax.annotate(row['name'], (row['popularity']+0.5, row['followers']), rotation=15)
ax.set_title("Scattered Artists: Popularity vs Followers FULL");
# +
# Zoomed Plot
zoomed_artists = df[(df.popularity < popularity_thrs) & (df.followers < followers_thrs)]
ax = zoomed_artists.plot.scatter(x='popularity', y='followers', figsize=(18,16))
ax.spines['right'].set_visible(False); ax.spines['top'].set_visible(False); plt.grid()
ax.set_title("Scattered Artists: Popularity vs Followers ZOOMED");
mid_artists = zoomed_artists[(zoomed_artists.popularity >= 30) | (zoomed_artists.followers >= 1600)]
for i, row in mid_artists.iterrows():
ax.annotate(row['name'], (row['popularity']+0.3, row['followers']), rotation=15)
# -
# #### Genres
df.genres = df.genres.apply(literal_eval)
def plot_genre_dist(df):
exploded = df.genres.explode()
print("Number of genres:", len(exploded.unique()))
ax = exploded.value_counts(dropna=False).sort_values(ascending=True).plot.barh(figsize=(12,8))
ax.set_xlabel('Frequency'); ax.set_ylabel('Genre'); ax.set_title('Genre distribution');
ax.spines['right'].set_visible(False); ax.spines['top'].set_visible(False); plt.grid();
ax.spines['left'].set_visible(False); ax.spines['bottom'].set_visible(False);
plot_genre_dist(df)
non_outliers.genres = non_outliers.genres.apply(literal_eval)
plot_genre_dist(non_outliers)
# Removing outliers results in loosing 29 artists and 3 genres.
# ## Methodology
#
# Kmeans is the unsupervised learning technique used for data clustering. It is a general purpose algorithm that assigns data points to a given number of clusters by calculating the similarity of each one of them with the closets center. The centers are reallocated in each iteration and the algorithm converges when the total sum-of-squares within cluster is minimized. https://scikit-learn.org/stable/modules/clustering.html#k-means.
#
# Data will be prepared, and then KMeans analysis will be implemented for the following subsets of data:
# 1. Followers and Popularity
# 1. Followers and Popularity without outliers
# 2. Followers, Popularity and genres
# 3. Followers, Popularity and genres without outliers
#
# ### Data Preprocessing
#
# The `genres` column contains a list of genres. These will be split in dummy columns to represent wheter an artist is labeled with a genre `1` or not `0`.
genres = pd.Series(df.genres.explode().unique())
genres = genres.replace(np.nan, 'NaN').tolist()
genres
# **Note:** In this case it makes sense to include `NaN` values to the model. There might be artists which genre was not assigned by Spotify for a specific reason. E.g. low popularity.
# +
# Create a container to fill dummy variables
zeroes = np.zeros((df.shape[0], len(genres)))
dummies = pd.DataFrame(zeroes, columns=genres, index=df.index, dtype=int)
# Fill dummy variables for each genre in the list.
for i, artist_genres in enumerate(df.genres):
indices = np.unique(dummies.columns.get_indexer(artist_genres))
dummies.iloc[i, indices] = 1
# Merge dummies to df
df_model = pd.concat([df, dummies], axis=1)
df_model.head()
# -
# ### Implementation
# +
from sklearn.cluster import KMeans
import numpy as np
def plot_clusters(dataframe, clusters):
cluster_series = pd.Series(clusters, name="cluster", dtype=int, index=dataframe.index)
df_clusters = pd.concat([dataframe, cluster_series], axis=1)
k = len(pd.unique(clusters))
fig, ax = plt.subplots(figsize=(12,7))
sns.scatterplot(data=df_clusters, x='popularity', y='followers', hue='cluster', ax=ax, palette="tab20")
ax.set_title("Artist Segmentation (k={})".format(k)); ax.set_xlabel('popularity')
ax.spines['right'].set_visible(False); ax.spines['top'].set_visible(False); plt.grid();
return df_clusters
def segment_data(X, k):
kmeans = KMeans(n_clusters=k, random_state=0).fit(X)
clusters = kmeans.labels_
return kmeans, clusters
features = ['popularity', 'followers'] + genres
kmeans, clusters = segment_data(df_model[features], k=4)
_ = plot_clusters(df_model, clusters)
# -
kmeans.cluster_centers_[:, :2]
# **Note:** The Previous segmentation is manly using followers data since the features have not been scaled. `KMeans` uses Euclidean Distance to measure similarity between data points, therefore, data needs to be scaled before fitting the data. (Data Engineering Lesson in course content).
# ### Refinement
# #### Scaling Features
# +
from sklearn.preprocessing import scale
df_model['popularity_std'] = scale(df_model['popularity'])
df_model['followers_std'] = scale(df_model['followers'])
# scaling dummy genres
genres_std = [genre + ' std' for genre in genres]
dummies_std = pd.DataFrame(scale(dummies), columns=genres_std)
# merge scaled data to main df
df_model = pd.concat([df_model, dummies_std], axis=1)
# Segment artists
features = ['popularity_std', 'followers_std'] + genres_std
kmeans, clusters = segment_data(df_model[features], k=4)
_ = plot_clusters(df_model, clusters)
# -
# In the previous analysis, popularity, followers, and 19 genres were included. It is clear to see that the algorithm grouped data in more diverse ways. However, the number of clusters was taken arbitrarily.
#
# ### Define the number of Clusters
# It is necessary to define the most appripriate number of clusters (`k`) for the available data. In some cases [domain knowledge](https://towardsdatascience.com/k-means-clustering-algorithm-applications-evaluation-methods-and-drawbacks-aa03e644b48a) is enough. However, in this work the [elbow method](https://towardsdatascience.com/clustering-metrics-better-than-the-elbow-method-6926e1f723a6) is applied.
#
# #### Elbow Method
#
# Consists in running the clustering algorithm for a list of numbers of clusters, find the smallest `k` with less decrease of `wcss` (within cluster squared sums) in the skree plot (the elbow), that is a potential good `k`.
# +
# Analysis 1: popularity and followers
features = ['popularity_std', 'followers_std']
clusters_list = np.arange(1,30)
wcss = []; labels = []
X = df_model[features]
for k in clusters_list:
kmeans, clusters = segment_data(X, k)
wcss.append(kmeans.inertia_)
labels.append(clusters)
def plot_skree(clusters_list, wcss):
clusters_res = pd.DataFrame({'k': clusters_list, 'wcss': wcss})
ax = clusters_res.plot(x='k', y='wcss', marker='.', figsize=(8,8))
ax.set_title('Skree plot'); ax.set_ylabel('wcss'); plt.grid()
plot_skree(clusters_list, wcss)
# -
num_of_clusters = 6
clusters_1 = labels[num_of_clusters-1]
df_clusters = plot_clusters(df_model, clusters_1)
# From left to right: the first three groups could be labeled as emergent artists with increasing popularity. The folowing groups could be labeled as popular artists because of both, their popularity and number of followers, seem to have a linear correlation. The final two groups could be merged into and named the top artists.
# +
# Analysis 2: Popularity and Followers, no outliers
features = ['popularity_std', 'followers_std']
clusters_list = np.arange(1,30)
wcss = []; labels = []
non_outliers = remove_outliers(df_model) # Remove outliers from prepared data
X = non_outliers[features]
for k in clusters_list:
kmeans, clusters = segment_data(X, k)
wcss.append(kmeans.inertia_)
labels.append(clusters)
plot_skree(clusters_list, wcss)
# -
num_of_clusters = 4
clusters_2 = labels[num_of_clusters-1]
df_clusters_2 = plot_clusters(non_outliers, clusters_2)
# Similar to the previous analysis, data is stratified by ranges of popularity. Grouping the followers dimension is not that robust since there is high variance. If the 29 outliers are taken as the top artists, the data could be grouped into 5 ranks (of popularity and number of followers).
# +
# Analysis 3: popularity, followers and genres
features = ['popularity_std', 'followers_std'] + genres_std
clusters_list = np.arange(1,30)
wcss = []
labels = []
X = df_model[features]
for k in clusters_list:
kmeans, clusters = segment_data(X, k)
wcss.append(kmeans.inertia_)
labels.append(clusters)
plot_skree(clusters_list, wcss)
# -
# Interestingly enough the appropriate number of clusters increases to 20 when genres are included in the analysis.
num_of_clusters = 20
clusters_3 = labels[num_of_clusters-1]
df_clusters = plot_clusters(df_model, clusters_3)
# Since the outliers make this graphic too difficul to interpret, the same analysis will be made below with the subset with no outliers. However, a heatmap is plotted below to visualize the genres-clusters relation.
def plot_genre_cluster_map(df):
genres_cluster = df.explode('genres').astype(str).groupby(['genres', 'cluster']) \
.count()['name'].astype(int).unstack(fill_value=0)
genres_cluster['sum'] = genres_cluster.apply(sum, axis=1)
genres_cluster = genres_cluster.sort_values(by='sum', ascending=False)
genres_cluster = genres_cluster.drop(columns='sum')
total_by_cluster = genres_cluster.apply(sum, axis=0)
genres_cluster.loc['TOTAL'] = total_by_cluster
fig, ax = plt.subplots(figsize=(9,8))
sns.heatmap(genres_cluster, linewidth=0.5, cmap="YlGnBu", ax=ax);
genres_cluster_vals = genres_cluster.values
for row in range(genres_cluster_vals.shape[0]):
for col in range(genres_cluster_vals.shape[1]):
ax.annotate(genres_cluster_vals[row][col], (col+0.3, row+0.55))
plot_genre_cluster_map(df_clusters)
# In the previous analysis KMeans grouped artists by their music genders. Most of the clusters collected artists from specific genres exclusively, i.e. _chamame, andean flute, bolero, latin pop_ and _No Genre (nan)_. On the other side, some clusters contain pairs of genres such as cluster 16 with _folklore_ and _andean_ which is related with cluster for _charango_ music.
#
# Similarly, some instances of _rock_ and _indie_ have been grouped in cluster 0. The rest of the clusters group few artists belonging to the rest of unfrequent genres. Some of them are subgenres of the mos populated ones. For example _hayno_ and _chamame_ are two subtypes of _folklore_.
#
# Below the distribution between clusters and popularity and followers is shown.
sns.pairplot(df_clusters, x_vars=["followers", "popularity"], y_vars=['cluster'], height=5, aspect=1.5, hue='cluster');
# Again, it is difficult to find correlations for the followers distributions because of the outliers present. On the other side, the distribution between cluster and probability does not show an obvious relation.
# Since 20 clusters are too many groups. The elbow method is re-aplied after removing the outliers from the dataset.
# +
# Analysis 4: Popularity, Followers, genres, no outliers.
features = ['popularity_std', 'followers_std'] + genres_std
clusters_list = np.arange(1,30)
wcss = []
labels = []
X = non_outliers[features]
for k in clusters_list:
kmeans, clusters = segment_data(X, k)
wcss.append(kmeans.inertia_)
labels.append(clusters)
plot_skree(clusters_list, wcss)
# -
# After removing the 29 outliers identified above, the data subset has now 16 genres, exactly the same number of apparent best number of clusters.
num_of_clusters = 16
clusters_4 = labels[num_of_clusters-1]
df_clusters_reduced = plot_clusters(non_outliers, clusters_4)
plot_genre_cluster_map(df_clusters_reduced)
sns.pairplot(df_clusters_reduced, x_vars=["followers", "popularity"], y_vars=['cluster'], height=5, aspect=1.2, hue='cluster');
# The segmentation of the subset of data looks similar regardless of the absence of outliers. No new information could be found in comparison with the segmentation of the full dataset. The final analysis is discussed in the following section.
# ## Results
#
# ### Model Evaluation
#
# KMeans is an unsupervised learning algorithm, this implies that data is not labeled and no optimization or objective evaluation can be applied. However, after preparing the data and analysing convenient number of clusters with the elbow method in 4 different subsets of the data, the following results have been met:
# - The segmentation of the data including followers and popularity only results in segments of artists which are strongly grouped by popularity ranges of length 10 to 12. (e.g. poularity between 0 to 10, 11 to 23 and so on). This segmentation could be usefil to rank artists and label them with the following example: `top`, `popular`, `thriving`, `emerging`, etc.
# - When genres data is included, Kmeans groups artists by that feature and the importance of followers and popularity decreses notably. Some clusters include different genres but these are only models of overlapping genres found in the data, i.e. `Kjarkas` with genres _folklore, andean_ and _charango_. No other valuable insight was found in that analysis for both with and without outliers.
#
# ### Justification
#
# - KMeans worked well for the anlaysis of popularity and follwers, the resulting clustering will be used to group artists into ranks in the end of this notebook.
# - The segmentation of the data when including genre might have failed because of the following reasons:
# 1. The genres data became more important since popularity and followers are poor features. Popularity is highly concentrated in low values. And followers has a large variance and many outliers. Forcing Kmeans to group dataponts by popularity. However, the scaled 20 genres were still more important.
# 2. The fundamental problem of this dataset is that it is small, only 200+ datapoints.
# 3. The quality of the genres associated to artists is low and misses information in many cases. It would be ideal to improve that information. Some of them were manually added but that work requires more effort. It is highly likely that Spotify labels artsts automatically or perhaps manually, however, it is not representative enough of the reality.
# ## Conclusion
# ### Reflection
# By the motivation of performing a data science project of my interest I realise that many projects can be done. Opportunities are infinite. I decided to analyse on Music because it is something I am passionate about and I have learned very much the music my home country produces. The main outcome for me is to see in practice how much time and effort it requries to (1) collect data, (2) explore it (3) prepare it and feed it to ML and (3) reflect on the results. All of the stages had their specific challanges and I have ideas on how to improve all of them. Next, I will mention a couple of interesting learnin outcomes I gained throughout this work and then list improvement points for future work.
#
# Learning outcomes:
# - I fully agree with the [pyramid of needs of DS](https://hackernoon.com/the-ai-hierarchy-of-needs-18f111fcc007). In this case, I decided to build my own dataset from scratch, and the outcome of the top level ML analysis (tip) strongly depends and is completely founded in the collection of the data. I spent lot of time refining the data, filtering it, trying to include more variety and richness. However, it is still not enough and it can be improved in many ways to make it richer and extend features. I plan to upload it to Kaggle and motivate colleages to work on it when they have time.
# - Working in a business problem I decided myself was different from the projects of the Nanodegree in the sense that I really wanted to generate quality outcome. That made me learn more deeply in many fields, specially in statistics, visualization and data clustering. There is much more to learn but now I am much more prepared.
#
# ### Improvements
# - Unfortunately the API does not offer popularity or followers. However, `tracks_collector.py` can be implemented in order to pull the following information:
# - Number of tracks released
# - First release date
# - Last release date
# - Number of albums
# The previous data might help finding non-obvious groups such as new, traditional artists within and across music genres.
# - The web version of Spotify shows `Monthly listeners`, data data could be pulled with Web Scrapping techniques.]
# - The quality of the genres would need to be improved. For that, more genres should be identified and crowd data-labelling could be implemented in the web app.
# - Other scaling methods and segmentation algorithms could be tried.
# - Other music streaming services (e.g. Amazon Music, Apply Music, Deezer) might be used to enrich the existing data.
# - Advanced: The music spectograms could be encoded and analyzed
# ### Save Labelled Data
# Clusters of Analysis 2 will be taken as incremental ranks and the outliers will be the `top` artist.
labelled = df_clusters_2.copy() # Analysis 2
labelled = labelled[['artist_id', 'name', 'popularity', 'followers', 'genres', 'img_url', 'cluster']]
labelled
outliers.loc[:, 'cluster'] = -1
outliers.head()
plot_clusters(non_outliers, clusters_2); # Analysis #2
# +
# Re join outliers
labelled = pd.concat([labelled, outliers])
# Map cluster to descriptive rankings.
labelled['ranking'] = labelled.cluster.map({-1: 'top', 2: 'popular', 3: 'blooming', 0: 'starters', 1: 'unknown'})
labelled.ranking.value_counts().plot.barh(title = 'Raking Distribution');
# -
labelled = labelled.drop(columns='cluster')
labelled.sort_values('followers', ascending=False).to_csv('data/artists_grouped.csv', index=False)
| Capstone-Project.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="RcbU7uu7akGj"
# # 머신 러닝 교과서 3판
# + [markdown] id="WOFUIVf8akGn"
# # 7장 - 다양한 모델을 결합한 앙상블 학습
# + [markdown] id="CwcCkUCsakGn"
# **아래 링크를 통해 이 노트북을 주피터 노트북 뷰어(nbviewer.jupyter.org)로 보거나 구글 코랩(colab.research.google.com)에서 실행할 수 있습니다.**
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://nbviewer.jupyter.org/github/rickiepark/python-machine-learning-book-3rd-edition/blob/master/ch07/ch07.ipynb"><img src="https://jupyter.org/assets/main-logo.svg" width="28" />주피터 노트북 뷰어로 보기</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/rickiepark/python-machine-learning-book-3rd-edition/blob/master/ch07/ch07.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩(Colab)에서 실행하기</a>
# </td>
# </table>
# + [markdown] id="vC0qpBcbakGo"
# ### 목차
# + [markdown] id="rq9yuQBxakGo"
# - 앙상블 학습
# - 다수결 투표를 사용한 분류 앙상블
# - 간단한 다수결 투표 분류기 구현
# - 다수결 투표 방식을 사용하여 예측 만들기
# - 앙상블 분류기의 평가와 튜닝
# - 배깅: 부트스트랩 샘플링을 통한 분류 앙상블
# - 배깅 알고리즘의 작동 방식
# - 배깅으로 Wine 데이터셋의 샘플 분류
# - 약한 학습기를 이용한 에이다부스트
# - 부스팅 작동 원리
# - 사이킷런에서 에이다부스트 사용
# - 요약
# + [markdown] id="om-FBdErakGo"
# <br>
# <br>
# + colab={"base_uri": "https://localhost:8080/"} id="826Z42JPakGo" outputId="5f5152b2-cc54-415e-8b42-b0e0180200a9"
# 코랩에서 실행할 경우 최신 버전의 사이킷런을 설치합니다.
# !pip install --upgrade scikit-learn
# + id="b852HoJ8akGp"
from IPython.display import Image
# + [markdown] id="X5lC1_K5akGp"
# # 앙상블 학습
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="n6opzu9iakGp" outputId="98d68492-c37b-4e07-e9b0-7c77c1fa3630"
Image(url='https://git.io/JtskW', width=500)
# + colab={"base_uri": "https://localhost:8080/", "height": 445} id="Fl869VXJakGq" outputId="1d2aa7ec-d1f4-403e-e588-ad45a9d0d6a1"
Image(url='https://git.io/Jtskl', width=500)
# + id="oL_CWVhXakGq"
from scipy.special import comb
import math
def ensemble_error(n_classifier, error):
k_start = int(math.ceil(n_classifier / 2.))
probs = [comb(n_classifier, k) * error**k * (1-error)**(n_classifier - k)
for k in range(k_start, n_classifier + 1)]
return sum(probs)
# + colab={"base_uri": "https://localhost:8080/"} id="443t5C3wakGq" outputId="67503487-aaeb-488e-cca8-ab04c945623a"
ensemble_error(n_classifier=11, error=0.25)
# + [markdown] id="xTpKbW31akGq"
# scipy의 `binom.cdf()`를 사용하여 계산할 수도 있습니다. 성공 확률이 75%인 이항 분포에서 11번의 시도 중에 5개 이하로 성공할 누적 확률은 다음과 같이 계산합니다.
# + colab={"base_uri": "https://localhost:8080/"} id="AlAEXPx5akGr" outputId="5a00a619-3ec9-4665-8dc7-f1ff8c545aa6"
from scipy.stats import binom
binom.cdf(5, 11, 0.75)
# + id="ThyN7iTOakGr"
import numpy as np
error_range = np.arange(0.0, 1.01, 0.01)
ens_errors = [ensemble_error(n_classifier=11, error=error)
for error in error_range]
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="aC3ntuQ2akGr" outputId="65f495aa-60f1-47d0-cb82-c23b197e5562"
import matplotlib.pyplot as plt
plt.plot(error_range,
ens_errors,
label='Ensemble error',
linewidth=2)
plt.plot(error_range,
error_range,
linestyle='--',
label='Base error',
linewidth=2)
plt.xlabel('Base error')
plt.ylabel('Base/Ensemble error')
plt.legend(loc='upper left')
plt.grid(alpha=0.5)
# plt.savefig('images/07_03.png', dpi=300)
plt.show()
# + [markdown] id="3tU6DbrwakGr"
# <br>
# <br>
# + [markdown] id="jXQSBulMakGr"
# # 다수결 투표를 사용한 분류 앙상블
# + [markdown] id="RukYYCUsakGr"
# ## 간단한 다수결 투표 분류기 구현
# + colab={"base_uri": "https://localhost:8080/"} id="99_QlAANakGs" outputId="28951ece-21e2-4889-b468-b480c06f0477"
import numpy as np
np.argmax(np.bincount([0, 0, 1],
weights=[0.2, 0.2, 0.6]))
# + colab={"base_uri": "https://localhost:8080/"} id="G5GRoopWakGs" outputId="acef39de-a42e-4583-a001-72dd36310ef9"
ex = np.array([[0.9, 0.1],
[0.8, 0.2],
[0.4, 0.6]])
p = np.average(ex,
axis=0,
weights=[0.2, 0.2, 0.6])
p
# + colab={"base_uri": "https://localhost:8080/"} id="lECkYPLkakGs" outputId="4ce67e91-2816-4e04-a038-bd4ed524d4c8"
np.argmax(p)
# + id="Qnn5GfoYakGs"
from sklearn.base import BaseEstimator
from sklearn.base import ClassifierMixin
from sklearn.preprocessing import LabelEncoder
from sklearn.base import clone
from sklearn.pipeline import _name_estimators
import numpy as np
import operator
class MajorityVoteClassifier(BaseEstimator,
ClassifierMixin):
"""다수결 투표 앙상블 분류기
매개변수
----------
classifiers : 배열 타입, 크기 = [n_classifiers]
앙상블에 사용할 분류기
vote : str, {'classlabel', 'probability'}
기본값: 'classlabel'
'classlabel'이면 예측은 다수인 클래스 레이블의 인덱스가 됩니다
'probability'면 확률 합이 가장 큰 인덱스로
클래스 레이블을 예측합니다(보정된 분류기에 추천합니다)
weights : 배열 타입, 크기 = [n_classifiers]
선택 사항, 기본값: None
'int' 또는 'float' 값의 리스트가 주어지면 분류기가 이 중요도로 가중치됩니다
'weights=None'이면 동일하게 취급합니다
"""
def __init__(self, classifiers, vote='classlabel', weights=None):
self.classifiers = classifiers
self.named_classifiers = {key: value for key, value
in _name_estimators(classifiers)}
self.vote = vote
self.weights = weights
def fit(self, X, y):
"""분류기를 학습합니다
매개변수
----------
X : {배열 타입, 희소 행렬},
크기 = [n_samples, n_features]
훈련 샘플 행렬
y : 배열 타입, 크기 = [n_samples]
타깃 클래스 레이블 벡터
반환값
-------
self : 객체
"""
if self.vote not in ('probability', 'classlabel'):
raise ValueError("vote는 'probability' 또는 'classlabel'이어야 합니다"
"; (vote=%r)이 입력되었습니다."
% self.vote)
if self.weights and len(self.weights) != len(self.classifiers):
raise ValueError('분류기와 가중치 개수는 같아야 합니다'
'; 가중치 %d 개, 분류기 %d 개'
% (len(self.weights), len(self.classifiers)))
# self.predict 메서드에서 np.argmax를 호출할 때
# 클래스 레이블이 0부터 시작되어야 하므로 LabelEncoder를 사용합니다
self.lablenc_ = LabelEncoder()
self.lablenc_.fit(y)
self.classes_ = self.lablenc_.classes_
self.classifiers_ = []
for clf in self.classifiers:
fitted_clf = clone(clf).fit(X, self.lablenc_.transform(y))
self.classifiers_.append(fitted_clf)
return self
def predict(self, X):
"""X에 대한 클래스 레이블을 예측합니다
매개변수
----------
X : {배열 타입, 희소 행렬},
크기 = [n_samples, n_features]
샘플 데이터 행렬
반환값
----------
maj_vote : 배열 타입, 크기 = [n_samples]
예측된 클래스 레이블
"""
if self.vote == 'probability':
maj_vote = np.argmax(self.predict_proba(X), axis=1)
else: # 'classlabel' 투표
# clf.predict 메서드를 사용하여 결과를 모읍니다
predictions = np.asarray([clf.predict(X)
for clf in self.classifiers_]).T
maj_vote = np.apply_along_axis(
lambda x:
np.argmax(np.bincount(x,
weights=self.weights)),
axis=1,
arr=predictions)
maj_vote = self.lablenc_.inverse_transform(maj_vote)
return maj_vote
def predict_proba(self, X):
"""X에 대한 클래스 확률을 예측합니다
매개변수
----------
X : {배열 타입, 희소 행렬},
크기 = [n_samples, n_features]
n_samples는 샘플의 개수고 n_features는 특성의 개수인
샘플 데이터 행렬
반환값
----------
avg_proba : 배열 타입,
크기 = [n_samples, n_classes]
샘플마다 가중치가 적용된 클래스의 평균 확률
"""
probas = np.asarray([clf.predict_proba(X)
for clf in self.classifiers_])
avg_proba = np.average(probas, axis=0, weights=self.weights)
return avg_proba
def get_params(self, deep=True):
"""GridSearch를 위해 분류기의 매개변수 이름을 반환합니다"""
if not deep:
return super(MajorityVoteClassifier, self).get_params(deep=False)
else:
out = self.named_classifiers.copy()
for name, step in self.named_classifiers.items():
for key, value in step.get_params(deep=True).items():
out['%s__%s' % (name, key)] = value
return out
# + [markdown] id="aHGH-uuTakGt"
# <br>
# <br>
# + [markdown] id="1j1WkdY6akGt"
# ## 다수결 투표 방식을 사용하여 예측 만들기
# + id="fQ4xAyljakGt"
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
iris = datasets.load_iris()
X, y = iris.data[50:, [1, 2]], iris.target[50:]
le = LabelEncoder()
y = le.fit_transform(y)
X_train, X_test, y_train, y_test =\
train_test_split(X, y,
test_size=0.5,
random_state=1,
stratify=y)
# + colab={"base_uri": "https://localhost:8080/"} id="P06IStfuakGt" outputId="70ef0b92-dc32-4d70-81c7-36ef16fe7073"
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score
clf1 = LogisticRegression(penalty='l2',
C=0.001,
random_state=1)
clf2 = DecisionTreeClassifier(max_depth=1,
criterion='entropy',
random_state=0)
clf3 = KNeighborsClassifier(n_neighbors=1,
p=2,
metric='minkowski')
pipe1 = Pipeline([['sc', StandardScaler()],
['clf', clf1]])
pipe3 = Pipeline([['sc', StandardScaler()],
['clf', clf3]])
clf_labels = ['Logistic regression', 'Decision tree', 'KNN']
print('10-겹 교차 검증:\n')
for clf, label in zip([pipe1, clf2, pipe3], clf_labels):
scores = cross_val_score(estimator=clf,
X=X_train,
y=y_train,
cv=10,
scoring='roc_auc')
print("ROC AUC: %0.2f (+/- %0.2f) [%s]"
% (scores.mean(), scores.std(), label))
# + colab={"base_uri": "https://localhost:8080/"} id="xSQ4fuVrakGu" outputId="758bdf7d-bc75-42f0-e44a-60e9be61b594"
# 다수결 (하드) 투표
mv_clf = MajorityVoteClassifier(classifiers=[pipe1, clf2, pipe3])
clf_labels += ['Majority voting']
all_clf = [pipe1, clf2, pipe3, mv_clf]
for clf, label in zip(all_clf, clf_labels):
scores = cross_val_score(estimator=clf,
X=X_train,
y=y_train,
cv=10,
scoring='roc_auc')
print("ROC AUC: %0.2f (+/- %0.2f) [%s]"
% (scores.mean(), scores.std(), label))
# + [markdown] id="AY6EfPXmakGu"
# 사이킷런의 `VotingClassifier`를 사용해 보겠습니다. `estimators` 매개변수에는 분류기 이름과 객체로 구성된 튜플의 리스트를 입력합니다. 앞에서 만든 `MajorityVoteClassifier`는 `vote` 매개변수에 상관없이 `predict_proba` 메서드를 실행할 수 있지만 사이킷런의 `VotingClassifier`는 `voting='hard'`일 경우 `predict_proba` 메서드를 지원하지 않습니다. ROC AUC를 계산하기 위해서는 예측 확률이 필요하므로 `voting='soft'`로 지정합니다.
# + colab={"base_uri": "https://localhost:8080/"} id="Y4gJx2eIakGu" outputId="f49cdbee-4ee5-45df-dc78-9e5cbf7524cd"
from sklearn.model_selection import cross_validate
from sklearn.ensemble import VotingClassifier
vc = VotingClassifier(estimators=[
('lr', pipe1), ('dt', clf2), ('knn', pipe3)], voting='soft')
scores = cross_validate(estimator=vc, X=X_train, y=y_train,
cv=10, scoring='roc_auc')
print("ROC AUC: : %0.2f (+/- %0.2f) [%s]"
% (scores['test_score'].mean(),
scores['test_score'].std(), 'VotingClassifier'))
# + [markdown] id="0y7VBUZLakGv"
# `VotingClassifier`의 `fit` 메서드를 호출할 때 진행 과정을 출력하려면 0.23버전에서 추가된 `verbose` 매개변수를 `True`로 지정해야 합니다. 여기에서는 앞서 만든 `vc` 객체의 `set_params` 메서드를 사용해 `verbose` 매개변수를 설정하겠습니다.
# + colab={"base_uri": "https://localhost:8080/"} id="4-bZpjhgakGv" outputId="b28f51fd-f742-4491-c190-cce326651d30"
vc.set_params(verbose=True)
vc = vc.fit(X_train, y_train)
# + [markdown] id="wSnjo5y2akGv"
# `voting='soft'`일 때 `predict` 메서드는 `predict_proba` 메서드에서 얻은 가장 큰 확률의 클래스를 예측으로 삼습니다. `predict_proba` 메서드는 각 분류기의 클래스 확률을 평균합니다.
# + colab={"base_uri": "https://localhost:8080/"} id="0cq16Vh8akGv" outputId="34c0fa23-b404-45db-b982-8d864b197dd3"
vc.predict_proba(X_test[:10])
# + [markdown] id="_7cUP7wLakGv"
# <br>
# <br>
# + [markdown] id="b3-jCM6YakGw"
# # 앙상블 분류기의 평가와 튜닝
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="G3irAKs-akGw" outputId="b4df0e84-d450-4094-f23c-194b41eddc39"
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
colors = ['black', 'orange', 'blue', 'green']
linestyles = [':', '--', '-.', '-']
for clf, label, clr, ls \
in zip(all_clf,
clf_labels, colors, linestyles):
# 양성 클래스의 레이블이 1이라고 가정합니다
y_pred = clf.fit(X_train,
y_train).predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_true=y_test,
y_score=y_pred)
roc_auc = auc(x=fpr, y=tpr)
plt.plot(fpr, tpr,
color=clr,
linestyle=ls,
label='%s (auc = %0.2f)' % (label, roc_auc))
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1],
linestyle='--',
color='gray',
linewidth=2)
plt.xlim([-0.1, 1.1])
plt.ylim([-0.1, 1.1])
plt.grid(alpha=0.5)
plt.xlabel('False positive rate (FPR)')
plt.ylabel('True positive rate (TPR)')
# plt.savefig('images/07_04', dpi=300)
plt.show()
# + id="RhGuSK9QakGw"
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
# + colab={"base_uri": "https://localhost:8080/", "height": 356} id="wgzBjUvRakGw" outputId="71605321-be6c-4053-8783-48707fa880a1"
from itertools import product
all_clf = [pipe1, clf2, pipe3, mv_clf]
x_min = X_train_std[:, 0].min() - 1
x_max = X_train_std[:, 0].max() + 1
y_min = X_train_std[:, 1].min() - 1
y_max = X_train_std[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
f, axarr = plt.subplots(nrows=2, ncols=2,
sharex='col',
sharey='row',
figsize=(7, 5))
for idx, clf, tt in zip(product([0, 1], [0, 1]),
all_clf, clf_labels):
clf.fit(X_train_std, y_train)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
axarr[idx[0], idx[1]].contourf(xx, yy, Z, alpha=0.3)
axarr[idx[0], idx[1]].scatter(X_train_std[y_train==0, 0],
X_train_std[y_train==0, 1],
c='blue',
marker='^',
s=50)
axarr[idx[0], idx[1]].scatter(X_train_std[y_train==1, 0],
X_train_std[y_train==1, 1],
c='green',
marker='o',
s=50)
axarr[idx[0], idx[1]].set_title(tt)
plt.text(-3.5, -5.,
s='Sepal width [standardized]',
ha='center', va='center', fontsize=12)
plt.text(-12.5, 4.5,
s='Petal length [standardized]',
ha='center', va='center',
fontsize=12, rotation=90)
# plt.savefig('images/07_05', dpi=300)
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="fyd8gETTakGw" outputId="7ef5d543-6072-49ac-a8a1-2f2936f4ebd0"
mv_clf.get_params()
# + colab={"base_uri": "https://localhost:8080/"} id="JUBENziVakGw" outputId="ae93f1a9-67a8-4ffb-c256-d7af5bd8f545"
from sklearn.model_selection import GridSearchCV
params = {'decisiontreeclassifier__max_depth': [1, 2],
'pipeline-1__clf__C': [0.001, 0.1, 100.0]}
grid = GridSearchCV(estimator=mv_clf,
param_grid=params,
cv=10,
scoring='roc_auc')
grid.fit(X_train, y_train)
for r, _ in enumerate(grid.cv_results_['mean_test_score']):
print("%0.3f +/- %0.2f %r"
% (grid.cv_results_['mean_test_score'][r],
grid.cv_results_['std_test_score'][r] / 2.0,
grid.cv_results_['params'][r]))
# + colab={"base_uri": "https://localhost:8080/"} id="HiK5DNs7akGx" outputId="ba3197b0-9902-43c1-8c39-39a0d36849ae"
print('최적의 매개변수: %s' % grid.best_params_)
print('정확도: %.2f' % grid.best_score_)
# + [markdown] id="eX201uAMakGx"
# **노트**
# `GridSearchCV`의 `refit` 기본값은 `True`입니다(즉, `GridSeachCV(..., refit=True)`). 훈련된 `GridSearchCV` 추정기를 사용해 `predict` 메서드로 예측을 만들 수 있다는 뜻입니다. 예를 들면:
#
# grid = GridSearchCV(estimator=mv_clf,
# param_grid=params,
# cv=10,
# scoring='roc_auc')
# grid.fit(X_train, y_train)
# y_pred = grid.predict(X_test)
#
# 또한 `best_estimator_` 속성으로 "최상"의 추정기를 얻을 수 있습니다.
# + colab={"base_uri": "https://localhost:8080/"} id="uwwtsxffakGx" outputId="78783831-0d6f-4913-fdb6-d610183a0d54"
grid.best_estimator_.classifiers
# + id="VU1_NO8yakGx"
mv_clf = grid.best_estimator_
# + colab={"base_uri": "https://localhost:8080/"} id="bD8zksABakGx" outputId="aac0fe7b-d402-4799-fca4-0780f3515d6c"
mv_clf.set_params(**grid.best_estimator_.get_params())
# + colab={"base_uri": "https://localhost:8080/"} id="wdSmd4SBakGx" outputId="44f803e5-0048-4046-fd2e-6542854cc79c"
mv_clf
# + [markdown] id="IdQPEazxakGy"
# 사이킷런 0.22버전에서 `StackingClassifier`와 `StackingRegressor`가 추가되었습니다. 앞서 만든 분류기를 사용해 `StackingClassifier`에 그리드 서치를 적용해 보겠습니다. `StackingClassifier`는 `VotingClassifier`와 비슷하게 `estimators` 매개변수로 분류기 이름과 객체로 구성된 튜플의 리스트를 입력받습니다. `final_estimator` 매개변수로는 최종 결정을 위한 분류기를 지정합니다. 매개변수 그리드를 지정할 때는 튜플에 사용한 분류기 이름을 접두사로 사용합니다.
# + colab={"base_uri": "https://localhost:8080/"} id="-1jihrWQakGy" outputId="c2b10aad-71e5-4574-ad7b-0f4e51d0bcf3"
from sklearn.ensemble import StackingClassifier
stack = StackingClassifier(estimators=[
('lr', pipe1), ('dt', clf2), ('knn', pipe3)],
final_estimator=LogisticRegression())
params = {'dt__max_depth': [1, 2],
'lr__clf__C': [0.001, 0.1, 100.0]}
grid = GridSearchCV(estimator=stack,
param_grid=params,
cv=10,
scoring='roc_auc')
grid.fit(X_train, y_train)
for r, _ in enumerate(grid.cv_results_['mean_test_score']):
print("%0.3f +/- %0.2f %r"
% (grid.cv_results_['mean_test_score'][r],
grid.cv_results_['std_test_score'][r] / 2.0,
grid.cv_results_['params'][r]))
# + colab={"base_uri": "https://localhost:8080/"} id="2Ak6GUa3akGy" outputId="1df5c32c-b86d-4420-b151-9df1f4fe0b6d"
print('최적의 매개변수: %s' % grid.best_params_)
print('정확도: %.2f' % grid.best_score_)
# + [markdown] id="qApI8cbfakGy"
# <br>
# <br>
# + [markdown] id="99K1btErakGy"
# # 배깅: 부트스트랩 샘플링을 통한 분류 앙상블
# + colab={"base_uri": "https://localhost:8080/", "height": 401} id="9wqPbCzrakGy" outputId="17ddb21e-a1ee-4672-9605-6b8f4616f5d2"
Image(url='https://git.io/Jtsk4', width=500)
# + [markdown] id="bCnhSNYuakGy"
# ## 배깅 알고리즘의 작동 방식
# + colab={"base_uri": "https://localhost:8080/", "height": 372} id="_Ksm7utrakGz" outputId="8f061641-bedd-4b71-b028-784315b2ff83"
Image(url='https://git.io/JtskB', width=400)
# + [markdown] id="ja5bQkOuakGz"
# ## 배깅으로 Wine 데이터셋의 샘플 분류
# + id="hhl59rXCakGz"
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines',
'Proline']
# UCI 머신 러닝 저장소에서 Wine 데이터셋을 다운로드할 수 없을 때
# 다음 주석을 해제하고 로컬 경로에서 데이터셋을 적재하세요:
# df_wine = pd.read_csv('wine.data', header=None)
# 클래스 1 제외
df_wine = df_wine[df_wine['Class label'] != 1]
y = df_wine['Class label'].values
X = df_wine[['Alcohol', 'OD280/OD315 of diluted wines']].values
# + id="dPFZfXfqakGz"
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
le = LabelEncoder()
y = le.fit_transform(y)
X_train, X_test, y_train, y_test =\
train_test_split(X, y,
test_size=0.2,
random_state=1,
stratify=y)
# + id="jud0d9B1akGz"
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(criterion='entropy',
max_depth=None,
random_state=1)
bag = BaggingClassifier(base_estimator=tree,
n_estimators=500,
max_samples=1.0,
max_features=1.0,
bootstrap=True,
bootstrap_features=False,
n_jobs=1,
random_state=1)
# + colab={"base_uri": "https://localhost:8080/"} id="_Z_RDmRfakGz" outputId="eea9d445-e075-4c1f-f820-1dc8201785ad"
from sklearn.metrics import accuracy_score
tree = tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
tree_train = accuracy_score(y_train, y_train_pred)
tree_test = accuracy_score(y_test, y_test_pred)
print('결정 트리의 훈련 정확도/테스트 정확도 %.3f/%.3f'
% (tree_train, tree_test))
bag = bag.fit(X_train, y_train)
y_train_pred = bag.predict(X_train)
y_test_pred = bag.predict(X_test)
bag_train = accuracy_score(y_train, y_train_pred)
bag_test = accuracy_score(y_test, y_test_pred)
print('배깅의 훈련 정확도/테스트 정확도 %.3f/%.3f'
% (bag_train, bag_test))
# + colab={"base_uri": "https://localhost:8080/", "height": 247} id="M8lTR1uyakGz" outputId="7c216080-8d9d-4ea2-ccaa-8753310b02d7"
import numpy as np
import matplotlib.pyplot as plt
x_min = X_train[:, 0].min() - 1
x_max = X_train[:, 0].max() + 1
y_min = X_train[:, 1].min() - 1
y_max = X_train[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
f, axarr = plt.subplots(nrows=1, ncols=2,
sharex='col',
sharey='row',
figsize=(8, 3))
for idx, clf, tt in zip([0, 1],
[tree, bag],
['Decision tree', 'Bagging']):
clf.fit(X_train, y_train)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
axarr[idx].contourf(xx, yy, Z, alpha=0.3)
axarr[idx].scatter(X_train[y_train == 0, 0],
X_train[y_train == 0, 1],
c='blue', marker='^')
axarr[idx].scatter(X_train[y_train == 1, 0],
X_train[y_train == 1, 1],
c='green', marker='o')
axarr[idx].set_title(tt)
axarr[0].set_ylabel('Alcohol', fontsize=12)
plt.tight_layout()
plt.text(0, -0.2,
s='OD280/OD315 of diluted wines',
ha='center',
va='center',
fontsize=12,
transform=axarr[1].transAxes)
# plt.savefig('images/07_08.png', dpi=300, bbox_inches='tight')
plt.show()
# + [markdown] id="OYR6k6QQakG0"
# 랜덤 포레스트와 배깅은 모두 기본적으로 부트스트랩 샘플링을 사용하기 때문에 분류기마다 훈련에 사용하지 않는 여분의 샘플이 남습니다. 이를 OOB(out of bag) 샘플이라고 합니다. 이를 사용하면 검증 세트를 만들지 않고 앙상블 모델을 평가할 수 있습니다. 사이킷런에서는 `oob_score` 매개변수를 `True`로 설정하면 됩니다. 이 매개변수의 기본값은 `False`입니다.
# 사이킷런의 랜덤 포레스트는 분류일 경우 OOB 샘플에 대한 각 트리의 예측 확률을 누적하여 가장 큰 확률을 가진 클래스를 타깃과 비교하여 정확도를 계산합니다. 회귀일 경우에는 각 트리의 예측 평균에 대한 R2 점수를 계산합니다. 이 점수는 `oob_score_` 속성에 저장되어 있습니다. `RandomForestClassifier`에 Wine 데이터셋을 적용하여 OOB 점수를 계산해 보겠습니다.
# + colab={"base_uri": "https://localhost:8080/"} id="soY4R5nUakG0" outputId="eab318da-db1f-401c-9268-29e2b7404307"
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(oob_score=True,
random_state=1)
rf.fit(X_train, y_train)
rf_train_score = rf.score(X_train, y_train)
rf_test_score = rf.score(X_test, y_test)
print('랜덤 포레스트의 훈련 정확도/테스트 정확도 %.3f/%.3f' %
(rf_train_score, rf_test_score))
print('랜덤 포레스트의 OOB 정확도 %.3f' % rf.oob_score_)
# + [markdown] id="30eZACHEakG0"
# 배깅의 OOB 점수 계산 방식은 랜덤 포레스트와 거의 동일합니다. 다만 `base_estimator`에 지정된 분류기가 `predict_proba` 메서드를 지원하지 않을 경우 예측 클래스를 카운팅하여 가장 높은 값의 클래스를 사용해 정확도를 계산합니다. 본문에서 만든 것과 동일한 `BaggingClassifier` 모델를 만들고 OOB 점수를 계산해 보겠습니다.
# + colab={"base_uri": "https://localhost:8080/"} id="Xo_7TLQJakG0" outputId="69338d37-0dd7-4b04-a133-477be247f494"
bag = BaggingClassifier(base_estimator=tree,
n_estimators=500,
oob_score=True,
random_state=1)
bag.fit(X_train, y_train)
bag_train_score = bag.score(X_train, y_train)
bag_test_score = bag.score(X_test, y_test)
print('배깅의 훈련 정확도/테스트 정확도 %.3f/%.3f' %
(bag_train_score, bag_test_score))
print('배깅의 OOB 정확도 %.3f' % bag.oob_score_)
# + [markdown] id="GPagbVtQakG0"
# <br>
# <br>
# + [markdown] id="exAngFFZakG0"
# # 약한 학습기를 이용한 에이다부스트
# + [markdown] id="iasLA54vakG0"
# ## 부스팅 작동 원리
# + colab={"base_uri": "https://localhost:8080/", "height": 363} id="3mPoPzn9akG0" outputId="e482ce59-db2c-4be8-a705-bfd33df56eb4"
Image(url='https://git.io/Jtsk0', width=400)
# + colab={"base_uri": "https://localhost:8080/", "height": 287} id="TQ4NDP-ZakG1" outputId="ebf18e11-fea1-474a-e707-5fa0a45af463"
Image(url='https://git.io/Jtskg', width=500)
# + [markdown] id="_8fMbcMnakG1"
# ## 사이킷런에서 에이다부스트 사용
# + id="2PFYhXvXakG1"
from sklearn.ensemble import AdaBoostClassifier
tree = DecisionTreeClassifier(criterion='entropy',
max_depth=1,
random_state=1)
ada = AdaBoostClassifier(base_estimator=tree,
n_estimators=500,
learning_rate=0.1,
random_state=1)
# + colab={"base_uri": "https://localhost:8080/"} id="PWayLlO7akG1" outputId="09ecdee0-5157-4650-9e65-5abcbcfe2474"
tree = tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
tree_train = accuracy_score(y_train, y_train_pred)
tree_test = accuracy_score(y_test, y_test_pred)
print('결정 트리의 훈련 정확도/테스트 정확도 %.3f/%.3f'
% (tree_train, tree_test))
ada = ada.fit(X_train, y_train)
y_train_pred = ada.predict(X_train)
y_test_pred = ada.predict(X_test)
ada_train = accuracy_score(y_train, y_train_pred)
ada_test = accuracy_score(y_test, y_test_pred)
print('에이다부스트의 훈련 정확도/테스트 정확도 %.3f/%.3f'
% (ada_train, ada_test))
# + colab={"base_uri": "https://localhost:8080/", "height": 247} id="U95NMQQuakG1" outputId="2486a73e-80b2-47bc-9965-96cab516648d"
x_min, x_max = X_train[:, 0].min() - 1, X_train[:, 0].max() + 1
y_min, y_max = X_train[:, 1].min() - 1, X_train[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
f, axarr = plt.subplots(1, 2, sharex='col', sharey='row', figsize=(8, 3))
for idx, clf, tt in zip([0, 1],
[tree, ada],
['Decision tree', 'AdaBoost']):
clf.fit(X_train, y_train)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
axarr[idx].contourf(xx, yy, Z, alpha=0.3)
axarr[idx].scatter(X_train[y_train == 0, 0],
X_train[y_train == 0, 1],
c='blue', marker='^')
axarr[idx].scatter(X_train[y_train == 1, 0],
X_train[y_train == 1, 1],
c='green', marker='o')
axarr[idx].set_title(tt)
axarr[0].set_ylabel('Alcohol', fontsize=12)
plt.tight_layout()
plt.text(0, -0.2,
s='OD280/OD315 of diluted wines',
ha='center',
va='center',
fontsize=12,
transform=axarr[1].transAxes)
# plt.savefig('images/07_11.png', dpi=300, bbox_inches='tight')
plt.show()
# + [markdown] id="70GXzKtSakG1"
# 그레이디언트 부스팅은 에이다부스트와는 달리 이전의 약한 학습기가 만든 잔차 오차(residual error)에 대해 학습하는 새로운 학습기를 추가합니다. 신경망 알고리즘이 잘 맞는 이미지, 텍스트 같은 데이터를 제외하고 구조적인 데이터셋에서 현재 가장 높은 성능을 내는 알고리즘 중 하나입니다. 사이킷런에는 `GradientBoostingClassifier`와 `GradientBoostingRegressor` 클래스로 구현되어 있습니다. 앞에서 사용한 훈련 데이터를 이용하여 그레이디언트 부스팅 모델을 훈련시켜 보죠.
# + colab={"base_uri": "https://localhost:8080/"} id="xH51w1rlakG1" outputId="17dbb058-47cb-4f5a-857b-c10db5f1892c"
from sklearn.ensemble import GradientBoostingClassifier
gbrt = GradientBoostingClassifier(n_estimators=20, random_state=42)
gbrt.fit(X_train, y_train)
gbrt_train_score = gbrt.score(X_train, y_train)
gbrt_test_score = gbrt.score(X_test, y_test)
print('그래디언트 부스팅의 훈련 정확도/테스트 정확도 %.3f/%.3f'
% (gbrt_train_score, gbrt_test_score))
# + colab={"base_uri": "https://localhost:8080/", "height": 247} id="H5nh3RUBakG2" outputId="361ada77-2a7e-48e5-81c3-3d3ce20581db"
x_min, x_max = X_train[:, 0].min() - 1, X_train[:, 0].max() + 1
y_min, y_max = X_train[:, 1].min() - 1, X_train[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
f, axarr = plt.subplots(1, 2, sharex='col', sharey='row', figsize=(8, 3))
for idx, clf, tt in zip([0, 1],
[tree, gbrt],
['Decision tree', 'GradientBoosting']):
clf.fit(X_train, y_train)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
axarr[idx].contourf(xx, yy, Z, alpha=0.3)
axarr[idx].scatter(X_train[y_train == 0, 0],
X_train[y_train == 0, 1],
c='blue', marker='^')
axarr[idx].scatter(X_train[y_train == 1, 0],
X_train[y_train == 1, 1],
c='green', marker='o')
axarr[idx].set_title(tt)
axarr[0].set_ylabel('Alcohol', fontsize=12)
plt.tight_layout()
plt.text(0, -0.2,
s='OD280/OD315 of diluted wines',
ha='center', va='center', fontsize=12,
transform=axarr[1].transAxes)
# plt.savefig('images/07_gradientboosting.png', dpi=300, bbox_inches='tight')
plt.show()
# + [markdown] id="VqPTQtkSakG2"
# 그레이디언트 부스팅에서 중요한 매개변수 중 하나는 각 트리가 오차에 기여하는 정도를 조절하는 `learning_rate`입니다. `learning_rate`이 작으면 성능은 높아지지만 많은 트리가 필요합니다. 이 매개변수의 기본값은 0.1입니다.
#
# 그레이디언트 부스팅이 사용하는 손실 함수는 `loss` 매개변수에서 지정합니다. `GradientBoostingClassifier`일 경우 로지스틱 회귀를 의미하는 `'deviance'`, `GradientBoostingRegressor`일 경우 최소 제곱을 의미하는 `'ls'`가 기본값입니다.
#
# 그레이디언트 부스팅이 오차를 학습하기 위해 사용하는 학습기는 `DecisionTreeRegressor`입니다. `DecisionTreeRegressor`의 불순도 조건은 `'mse'`, `'mae'` 등 입니다. 따라서 그레이디언트 부스팅의 `criterion` 매개변수도 `DecisionTreeRegressor`의 불순도 조건을 따라서 `'mse'`, `'mae'`, 그리고 <NAME>(<NAME>)이 제안한 MSE 버전인 `'friedman_mse'`(기본값)를 사용합니다. 하지만 `'mae'`일 경우 그레이디언트 부스팅의 결과가 좋지 않기 때문에 이 옵션은 사이킷런 0.24버전부터 경고가 발생하고 0.26버전에서 삭제될 예정입니다.
#
# `subsample` 매개변수를 기본값 1.0 보다 작은 값으로 지정하면 훈련 데이터셋에서 `subsample` 매개변수에 지정된 비율만큼 랜덤하게 샘플링하여 트리를 훈련합니다. 이를 확률적 그레이디언트 부스팅이라고 부릅니다. 이는 랜덤 포레스트나 에이다부스트의 부트스트랩 샘플링과 비슷하게 과대적합을 줄이는 효과를 냅니다. 또한 남은 샘플을 사용해 OOB 점수를 계산할 수 있습니다. `subsample` 매개변수가 1.0보다 작을 때 그레이디언트 부스팅 객체의 `oob_improvement_` 속성에 이전 트리의 OOB 손실 값에서 현재 트리의 OOB 손실을 뺀 값이 기록되어 있습니다. 이 값에 음수를 취해서 누적하면 트리가 추가되면서 과대적합되는 지점을 찾을 수 있습니다.
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="UnZx7UUtakG2" outputId="4537457f-7057-49ee-9849-20f3f73e1f41"
gbrt = GradientBoostingClassifier(n_estimators=100,
subsample=0.5,
random_state=1)
gbrt.fit(X_train, y_train)
oob_loss = np.cumsum(-gbrt.oob_improvement_)
plt.plot(range(100), oob_loss)
plt.xlabel('number of trees')
plt.ylabel('loss')
# plt.savefig('images/07_oob_improvement.png', dpi=300)
plt.show()
# + [markdown] id="_x5kQ0w4akG2"
# 사이킷런 0.20 버전부터는 그레이디언트 부스팅에 조기 종료(early stopping) 기능을 지원하기 위한 매개변수 `n_iter_no_change`, `validation_fraction`, `tol`이 추가되었습니다. 훈련 데이터에서 `validation_fraction` 비율(기본값 0.1)만큼 떼어 내어 측정한 손실이 `n_iter_no_change` 반복 동안에 `tol` 값(기본값 1e-4) 이상 향상되지 않으면 훈련이 멈춥니다.
#
# 히스토그램 기반 부스팅은 입력 특성을 256개의 구간으로 나누어 노드를 분할에 사용합니다. 일반적으로 샘플 개수가 10,000개보다 많은 경우 그레이디언트 부스팅보다 히스토그램 기반 부스팅이 훨씬 빠릅니다. 앞에서와 같은 데이터를 히스토그램 기반 부스팅 구현인 `HistGradientBoostingClassifier`에 적용해 보겠습니다.
# + colab={"base_uri": "https://localhost:8080/"} id="ThQj6PsMakG2" outputId="aaa60b94-ac56-4424-91dc-666f0d2d73e9" tags=[]
from sklearn.ensemble import HistGradientBoostingClassifier
hgbc = HistGradientBoostingClassifier(random_state=1)
hgbc.fit(X_train, y_train)
hgbc_train_score = gbrt.score(X_train, y_train)
hgbc_test_score = gbrt.score(X_test, y_test)
print('그래디언트 부스팅 훈련 정확도/테스트 정확도 %.3f/%.3f'
% (hgbc_train_score, hgbc_test_score))
# + [markdown] id="R8VeoG6eakG2"
# 사이킷런 0.24버전부터 `HistGradientBoostingClassifier`와 `HistGradientBoostingRegressor`에서 범주형 특성을 그대로 사용할 수 있습니다. `categorical_features` 매개변수에 불리언 배열이나 정수 인덱스 배열을 전달하여 범주형 특성을 알려주어야 합니다.
#
# XGBoost(https://xgboost.ai/)에서도 `tree_method` 매개변수를 `'hist'`로 지정하여 히스토그램 기반 부스팅을 사용할 수 있습니다. 코랩에는 이미 XGBoost 라이브러리가 설치되어 있으므로 간단히 테스트해 볼 수 있습니다.
# -
np.unique(y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="ZAWbGqlzakG3" outputId="b80679b7-6e0a-4a3b-cc6f-fd72d0ce00ca"
from xgboost import XGBClassifier
xgb = XGBClassifier(tree_method='hist', eval_metric='logloss', use_label_encoder=False, random_state=1)
xgb.fit(X_train, y_train)
xgb_train_score = xgb.score(X_train, y_train)
xgb_test_score = xgb.score(X_test, y_test)
print('XGBoost 훈련 정확도/테스트 정확도 %.3f/%.3f'
% (xgb_train_score, xgb_test_score))
# + [markdown] id="1Ymv6VPqakG3"
# 또 다른 인기 높은 히스토그램 기반 부스팅 알고리즘은 마이크로소프트에서 만든 LightGBM(https://lightgbm.readthedocs.io/)입니다. 사실 사이킷런의 히스토그램 기반 부스팅은 LightGBM에서 영향을 많이 받았습니다. LightGBM도 코랩에서 바로 테스트해 볼 수 있습니다.
# + colab={"base_uri": "https://localhost:8080/"} id="6mFWQZQ7akG3" outputId="f6282f49-9a54-4490-d81e-2e0f162698f4"
from lightgbm import LGBMClassifier
lgb = LGBMClassifier(random_state=1)
lgb.fit(X_train, y_train)
lgb_train_score = lgb.score(X_train, y_train)
lgb_test_score = lgb.score(X_test, y_test)
print('LightGBM 훈련 정확도/테스트 정확도 %.3f/%.3f'
% (lgb_train_score, lgb_test_score))
# + [markdown] id="ejCek1UFakG3"
# <br>
# <br>
| ch07/ch07.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bay Area Job Listings
# +
# Dependencies & Setup
import pandas as pd
import numpy as np
import requests
import json
from os.path import exists
import simplejson as json
# Retrieve Google API Key from config.py
from config_3 import gkey
# +
# File to Load
file_to_load = "data/data_analyst_sf_final.csv"
# Read Scraped Data (CSV File) & Store Into Pandas DataFrame
job_listings_df = pd.read_csv(file_to_load, encoding="ISO-8859-1")
# -
# Drop BA NaN's
revised_job_listings_df = job_listings_df.dropna()
revised_job_listings_df.head()
# Reorganize BA File Column Names
organized_job_listings_df = revised_job_listings_df.rename(columns={"company":"Company Name",
"job_title":"Job Title",
"location":"Location"})
organized_job_listings_df.head()
# +
# # Extract Only Job Titles with "Data" as String
# new_organized_job_listings_df = organized_job_listings_df[organized_job_listings_df["Job Title"].
# str.contains("Data", case=True)]
# new_organized_job_listings_df.head()
# -
print(len(organized_job_listings_df))
# Extract Unique Locations
organized_job_listings_df["company_address"] = organized_job_listings_df["Company Name"] + ", " + organized_job_listings_df["Location"]
unique_locations = organized_job_listings_df["company_address"].unique().tolist()
unique_locations
# Extract Only Company Names to Pass to Google Maps API to Gather GeoCoordinates
company = organized_job_listings_df[["Company Name"]]
company.head()
# +
# What are the geocoordinates (latitude/longitude) of the Company Names?
company_list = list(company["Company Name"])
# Build URL using the Google Maps API
base_url = "https://maps.googleapis.com/maps/api/geocode/json"
new_json = []
for target_company in company_list:
params = {"address": target_company + ", CA", "key": gkey}
# Run Request
response = requests.get(base_url, params=params)
# Extract lat/lng
companies_geo = response.json()
lat = companies_geo["results"][0]["geometry"]["location"]["lat"]
lng = companies_geo["results"][0]["geometry"]["location"]["lng"]
new_json.append({"company": target_company, "lat": lat, "lng": lng})
places = f"{target_company}, {lat}, {lng}"
print(places)
# -
# +
# # Python SQL Toolkit and Object Relational Mapper
# import sqlalchemy
# from sqlalchemy.ext.automap import automap_base
# from sqlalchemy.orm import Session
# from sqlalchemy import create_engine, inspect, func
# from sqlalchemy.ext.declarative import declarative_base
# # Import Other Dependencies
# import pandas as pd
# import numpy as np
# import matplotlib.pyplot as plt
# +
# # Export Pandas DataFrame to PostgreSQL
# engine = create_engine("postgresql://postgres:password@localhost:5432/job_listings_df")
# job_listings.to_sql("job_listings_df", engine)
# # Create Engine and Pass in Postgres Connection
# # Setup to Connect to Database
# engine = create_engine("postgres://postgres:password@localhost:5432/job_listings_df")
# conn = engine.connect()
# -
| .ipynb_checkpoints/job_listings-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install ibm_watson
from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
url= 'https://api.us-south.text-to-speech.watson.cloud.ibm.com/instances/39a5ddfe-594e-4a33-a59b-a017e15aaa5e'
apikey= '<KEY>'
authenticator = IAMAuthenticator(apikey)
tts = TextToSpeechV1(authenticator=authenticator)
tts.set_service_url(url)
with open('en_text_file.txt', 'r') as f:
text = f.readlines()
text = [line.replace('\n','') for line in text]
text = ''.join(str(line) for line in text)
with open('C:/Users/malna/source/repos/stt-tts/text-to-speech/English_speech.mp3', 'wb') as audio_file:
res = tts.synthesize(text, accept='audio/mp3', voice='en-US_AllisonV3Voice').get_result()
audio_file.write(res.content)
| text-to-speech/TextToSpeech.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dlb2-pytorch
# language: python
# name: dlb2-pytorch
# ---
# +
import sys
sys.path.append("../src")
from utils.load_model import get_model
from my_model import get_ilsvrc2012
from torch.utils.data import DataLoader
from utils.imagenet1000_classname import imgnet_label_name
from utils.tensortracker import TensorTracker
from utils import plots
# -
# %matplotlib inline
# +
import os
import torch
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
import torch.nn as nn
import svcca
# +
from my_model import my_resnet
from glob import glob
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
# -
from utils.receptive_field_tracker import RFTracker
plain_model = my_resnet.resnet34(pretrained=False, plain=True)
skip_model = my_resnet.resnet34(pretrained=False, plain=False)
root = "/data2/genta/resnet"
# +
plain_paths = sorted(glob(os.path.join(root, '20200814*plain*', '*.model')))[:-2]
plain_paths = sorted(glob(os.path.join(root, '20200814*plain*', 'init.model'))) + plain_paths
# plain_paths = sorted(glob(os.path.join(root, '20200821*plain*', '*.model')))[:-2]
# plain_paths = sorted(glob(os.path.join(root, '20200821*plain*', 'init.model'))) + plain_paths
skip_paths = sorted(glob(os.path.join(root, '20200814*resnet*', '*.model')))[:-2]
skip_paths = sorted(glob(os.path.join(root, '20200814*resnet*', 'init.model'))) + skip_paths
# skip_paths = sorted(glob(os.path.join(root, '20200821*resnet*', '*.model')))[:-2]
# skip_paths = sorted(glob(os.path.join(root, '20200821*resnet*', 'init.model'))) + skip_paths
# -
# +
mode = "test"
test_dataset = get_ilsvrc2012(mode=mode, transform_type="test")
test_labels = np.asarray(test_dataset.targets)
index = []
for i in range(1000):
idx = np.where(test_labels == i)[0][0]
index.append(idx)
index = np.asarray(index)
device = "cuda"
N = 256
np.random.seed(815)
perm = np.random.permutation(len(index))
images = [test_dataset[i][0] for i in index[perm[:N]]]
images = torch.stack(images)
images = images.to(device)
# +
skip_model = skip_model.to(device)
skip_model.eval()
rf_tracker = RFTracker(skip_model)
out = skip_model(images)
# -
for key in rf_tracker.rf_pool:
print(rf_tracker.find_receptive_field(key), key)
for name, m in skip_model.named_modules():
if isinstance(m, (nn.Sequential, )):
print(name, type(m))
rf_tracker.remove_hook()
plain_model = plain_model.to(device)
plain_model.eval()
tracker = TensorTracker(plain_model)
plain_model(images[:2])
keys_skip = tracker.fmap_pool.keys()
keys = np.asarray(list(keys_skip))
target_key = [np.where(keys == k)[0][0] for k in keys if "relu" in k or "maxpool" == k]
# target_key = [np.where(keys == k)[0][0] for k in keys if "relu3" in k or "maxpool" == k]
cand_layers = keys[target_key].tolist()
cand_layers
for l in cand_layers:
print(l, rf_tracker.find_receptive_field(l))
# + active=""
# from utils.receptive_field import get_rf_layer_info
# + active=""
# get_rf_layer_info(skip_model, images, layer="layer4.2.relu2")[0]
# + active=""
# plain_paths[::10]
# -
p_tracker = TensorTracker(plain_model, candidate_layers=cand_layers)
s_tracker = TensorTracker(skip_model, candidate_layers=cand_layers)
# +
model = plain_model
paths = plain_paths[::10]
tracker = p_tracker
packs = [
(plain_model, p_tracker, plain_paths[-1:]),
(skip_model, s_tracker, skip_paths[-1:]),
]
data_list = []
for model, tracker, paths in packs:
datas = []
for path in tqdm(paths, total=len(paths)):
name = os.path.basename(path)
model.load_state_dict(my_resnet.fix_model_state_dict(torch.load(path)))
model = model.eval()
model = model.to(device)
with torch.no_grad():
out = model(images)
data = []
func = lambda x: (len(x) - np.count_nonzero(x)) / len(x)
for l1 in keys[target_key]:
act1 = tracker.find_fmap(l1).to('cpu').numpy()
data.append(func(act1.reshape(-1)))
data = np.asarray(data)
datas.append(data)
datas = np.asarray(datas)
data_list.append(datas)
# -
plain_datas = data_list[0].copy()
skip_datas = data_list[1].copy()
# + active=""
# plain_datas.shape, skip_datas.shape
# -
linestyles = [
"solid",
"dashed",
"dotted"
]
# cmap = plt.get_cmap("viridis")
cmaps = ['viridis', 'plasma', 'inferno', 'magma', 'cividis']
cmap = plt.get_cmap(cmaps[4])
colors = [cmap(i) for i in np.linspace(0, 1, len(datas))]
alpha = 0.7
# +
def plot_helper(data, index=None):
plt.figure(figsize=(14, 7))
if index is None:
index = np.arange(data.shape[1])
for cnt, d in enumerate(data):
x = np.arange(len(index))
name = os.path.basename(paths[cnt])
plt.plot(x, d[index], linestyle=linestyles[cnt % len(linestyles)], color=colors[cnt], alpha=alpha, label=name)
plt.xticks(x, np.asarray(cand_layers)[index], rotation=90)
plt.ylabel("sparse activation rate")
plt.ylim(0, 1)
plt.grid()
plt.legend()
plt.show()
# -
print("plain")
plot_helper(plain_datas)
print("skip")
plot_helper(skip_datas)
index = np.asarray([i for i, cl in enumerate(cand_layers) if "relu2" in cl])
print("plain")
plot_helper(plain_datas, index)
print("skip")
plot_helper(skip_datas, index)
# +
packs = [
("resnet34-skip", "resnet34-skip2", "resnet34-skip3",),
("resnet34-plain", "resnet34-plain2", "resnet34-plain3",)
]
data_list = []
for names in packs:
datas = []
for name in tqdm(names, total=len(names)):
model = get_model(name)
model = model.eval()
model = model.to(device)
tracker = TensorTracker(model, candidate_layers=cand_layers)
with torch.no_grad():
out = model(images)
data = []
func = lambda x: (len(x) - np.count_nonzero(x)) / len(x)
for l1 in keys[target_key]:
act1 = tracker.find_fmap(l1).to('cpu').numpy()
data.append(func(act1.reshape(-1)))
data = np.asarray(data)
datas.append(data)
tracker.remove()
datas = np.asarray(datas)
data_list.append(datas)
# -
plain_datas = data_list[1].copy()
skip_datas = data_list[0].copy()
# +
# cmap = plt.get_cmap("viridis")
cmaps = ['viridis', 'plasma', 'inferno', 'magma', 'cividis']
cmap = plt.get_cmap(cmaps[4])
colors = [cmap(i) for i in np.linspace(0, 1, len(datas))]
alpha = 0.7
def plot_helper(data, index=None, labels=None, colors=None, fillx=None):
plt.figure(figsize=(14, 7))
if index is None:
index = np.arange(data.shape[1])
if fillx is not None:
assert hasattr(fillx, "__iter__")
_cmap = plt.get_cmap("rainbow")
_colors = [_cmap(i) for i in np.linspace(0, 1, len(fillx))]
for cnt, x in enumerate(fillx):
color = _colors[cnt]
plt.fill_between(x, 0, 1, color=color, alpha=0.1)
for cnt, d in enumerate(data):
x = np.arange(len(index))
if labels is None:
name = ""
else:
name = labels[cnt]
if colors is None:
color = "k"
else:
color = colors[cnt]
plt.plot(x, d[index], linestyle=linestyles[cnt % len(linestyles)], color=color, alpha=alpha, label=name)
plt.xticks(x, np.asarray(cand_layers)[index], rotation=90)
plt.ylabel("sparse activation rate")
plt.ylim(0, 1)
plt.grid()
plt.legend()
plt.show()
# -
fill_sepkeys = [
"layer1",
"layer2",
"layer3",
"layer4"
]
# fill_index = np.asarray([i for i, cl in enumerate(cand_layers) if "relu2" in cl])
fill_index = []
for key in fill_sepkeys:
tmp = []
for i, cl in enumerate(cand_layers):
if key in cl:
tmp.append(i)
fill_index.append(np.asarray(tmp))
plain_datas.shape
print("plain")
plot_helper(plain_datas, labels=packs[1], fillx=fill_index)
print("skip")
plot_helper(skip_datas, labels=packs[0], fillx=fill_index)
index = np.asarray([i for i, cl in enumerate(cand_layers) if "relu2" in cl])
fill_sepkeys = [
"layer1",
"layer2",
"layer3",
"layer4"
]
fill_index = []
for key in fill_sepkeys:
tmp = []
for i, cl in enumerate(np.asarray(cand_layers)[index]):
if key in cl:
tmp.append(i)
fill_index.append(np.asarray(tmp))
print("plain")
plot_helper(plain_datas, index=index, labels=packs[1], fillx=fill_index)
print("skip")
plot_helper(skip_datas, index=index, labels=packs[0], fillx=fill_index)
# +
index = []
for i in range(1000):
idx = np.where(test_labels == i)[0]
index.append(idx[0])
index.append(idx[1])
index.append(idx[2])
index = np.asarray(index)
device = "cuda"
N = 256
np.random.seed(815)
perm = np.random.permutation(len(index))
# images = [test_dataset[i][0] for i in index[perm[:N]]]
# images = torch.stack(images)
# images = images.to(device)
# +
packs = [
("resnet34-skip", "resnet34-skip2", "resnet34-skip3",),
("resnet34-plain", "resnet34-plain2", "resnet34-plain3",)
]
data_list = []
for names in packs:
datas = []
for name in tqdm(names, total=len(names)):
model = get_model(name)
model = model.eval()
model = model.to(device)
tracker = TensorTracker(model, candidate_layers=cand_layers)
tmp = []
totals = []
for i in range(0, len(index), N):
_images = torch.stack([test_dataset[k][0] for k in index[perm[i:i + N]]])
_images = _images.to(device)
with torch.no_grad():
out = model(_images)
data = []
# func = np.frompyfunc(lambda x: (len(x) - np.count_nonzero(x)), 1, 1)
func = lambda x: (len(x) - np.count_nonzero(x))
total = []
for l1 in keys[target_key]:
act1 = tracker.find_fmap(l1).to('cpu').numpy()
data.append(func(act1.reshape(-1)))
total.append(len(act1.reshape(-1)))
totals.append(total)
data = np.asarray(data)
tmp.append(data)
tracker.remove()
datas.append(tmp)
datas = np.asarray(datas)
data_list.append(datas)
# -
act1.shape
len(target_key)
data_list[0][0].shape
len(totals)
totals = np.asarray(totals)
totals.sum(0)
data_list[0][0].sum(0) / totals.sum(0)
hoge = np.asarray(data_list)
skip_datas = (hoge[0].sum(axis=1) / totals.sum(0))
plain_datas = (hoge[1].sum(axis=1) / totals.sum(0))
fill_sepkeys = [
"layer1",
"layer2",
"layer3",
"layer4"
]
# fill_index = np.asarray([i for i, cl in enumerate(cand_layers) if "relu2" in cl])
fill_index = []
for key in fill_sepkeys:
tmp = []
for i, cl in enumerate(cand_layers):
if key in cl:
tmp.append(i)
fill_index.append(np.asarray(tmp))
print("plain")
plot_helper(plain_datas, labels=packs[1], fillx=fill_index)
print("skip")
plot_helper(skip_datas, labels=packs[0], fillx=fill_index)
index = np.asarray([i for i, cl in enumerate(cand_layers) if "relu2" in cl])
fill_sepkeys = [
"layer1",
"layer2",
"layer3",
"layer4"
]
fill_index = []
for key in fill_sepkeys:
tmp = []
for i, cl in enumerate(np.asarray(cand_layers)[index]):
if key in cl:
tmp.append(i)
fill_index.append(np.asarray(tmp))
print("plain")
plot_helper(plain_datas, index=index, labels=packs[1], fillx=fill_index)
print("skip")
plot_helper(skip_datas, index=index, labels=packs[0], fillx=fill_index)
# + active=""
# dir_path = "./20200826"
# if not os.path.exists(dir_path):
# os.makedirs(dir_path)
# + active=""
# def show_(data, out_name="out"):
# fig, ax = plt.subplots(figsize=(14, 10))
# ln, = ax.plot([])
# def init():
# ax.set_xticks(range(len(target_key)))
# ax.set_xticklabels(keys[target_key], rotation=90)
# ax.set_ylabel("sparse activation rate")
# ax.set_ylim(0, 1)
# ax.grid()
# return ln,
#
# def update(frame):
# ax.cla()
# init()
# cnt = frame
# d = data[cnt]
# x = np.arange(len(d))
# name = os.path.basename(paths[cnt])
# ax.plot(x, d, linestyle=linestyles[cnt % len(linestyles)], color=colors[cnt], alpha=alpha, label=name)
#
# if len(data) > cnt + 1:
# cnt = cnt + 1
# d = data[cnt]
# x = np.arange(len(d))
# name = os.path.basename(paths[cnt])
# ax.plot(x, d, linestyle=linestyles[cnt % len(linestyles)], color=colors[cnt], alpha=alpha, label=name)
#
# ax.legend(loc="upper left")
# return ln,
#
# ani = FuncAnimation(fig, update, frames=len(data),
# init_func=init, blit=True, interval=50)
# plt.close()
# path = os.path.join(dir_path, "{}.gif".format(out_name))
# ani.save(path, writer="imagemagick", fps=5)
# HTML(ani.to_jshtml())
#
# +
model = plain_model
paths = plain_paths[::10]
tracker = p_tracker
packs = [
(plain_model, p_tracker, plain_paths[-1:]),
(skip_model, s_tracker, skip_paths[-1:]),
]
# func = lambda x: (len(x) - np.count_nonzero(x)) / len(x)
func = None
data_list = []
for model, tracker, paths in packs:
datas = []
for path in tqdm(paths, total=len(paths)):
name = os.path.basename(path)
model.load_state_dict(my_resnet.fix_model_state_dict(torch.load(path)))
model = model.eval()
model = model.to(device)
with torch.no_grad():
out = model(images)
data = []
for l1 in keys[target_key]:
act1 = tracker.find_fmap(l1).to('cpu').numpy()
if func is not None:
data.append(func(act1.reshape(-1)))
else:
data.append(act1)
datas.append(data)
data_list.append(datas)
# -
len(data_list[0][0])
data_list[0][0][0].shape
plain_datas = data_list[0].copy()
skip_datas = data_list[1].copy()
[ print(d.shape) for d in plain_datas[0]]
plain_datas[0][-1].shape
# # focus on the channel
data = plain_datas[0][-1]
len(plain_datas[0][-2:])
list(range(-3, 0))
sort_index.shape
# + active=""
# i = 0
# sort_index = np.argsort(np.mean(plain_datas[0][-1][i] == 0, axis=(-2, -1)))
# for j in range(-3, 0):
# data = plain_datas[0][j]
# name = cand_layers[j]
# title = "{}".format(name)
# plt.figure(figsize=(25, 5))
# plt.title(title)
# x = np.arange(data[i].shape[0])
# y = np.mean(data[i] == 0, axis=(-2, -1))[sort_index]
# plt.bar(x, y)
# plt.xticks(ticks=x[::10], labels=sort_index[::10], rotation=90)
# plt.show()
# + active=""
# i = 0
# sort_index = np.argsort(np.mean(skip_datas[0][-1][i] == 0, axis=(-2, -1)))
# for j in range(-3, 0):
# data = skip_datas[0][j]
# name = cand_layers[j]
# title = "{}".format(name)
# plt.figure(figsize=(25, 5))
# plt.title(title)
# x = np.arange(data[i].shape[0])
# y = np.mean(data[i] == 0, axis=(-2, -1))[sort_index]
# plt.bar(x, y)
# plt.xticks(ticks=x[::10], labels=sort_index[::10], rotation=90)
# plt.show()
# + active=""
# i = 0
# sort_index = np.argsort(np.mean(plain_datas[0][-1][i], axis=(-2, -1)))
# for j in range(-3, 0):
# data = plain_datas[0][j]
# name = cand_layers[j]
# title = "{}".format(name)
# plt.figure(figsize=(25, 5))
# plt.title(title)
# x = np.arange(data[i].shape[0])
# y = np.mean(data[i], axis=(-2, -1))[sort_index]
# plt.bar(x, y)
# plt.xticks(ticks=x[::10], labels=sort_index[::10], rotation=90)
# plt.show()
# + active=""
# i = 0
# sort_index = np.argsort(np.mean(skip_datas[0][-1][i], axis=(-2, -1)))
# for j in range(-3, 0):
# data = skip_datas[0][j]
# name = cand_layers[j]
# title = "{}".format(name)
# plt.figure(figsize=(25, 5))
# plt.title(title)
# x = np.arange(data[i].shape[0])
# y = np.mean(data[i], axis=(-2, -1))[sort_index]
# plt.bar(x, y)
# plt.xticks(ticks=x[::10], labels=sort_index[::10], rotation=90)
# plt.show()
# + active=""
# i = 0
# sort_index = np.argsort(np.mean(skip_datas[0][-1][i], axis=(-2, -1)))
# for j in range(-3, 0):
# data = skip_datas[0][j]
# name = cand_layers[j]
# title = "{}".format(name)
# plt.figure(figsize=(25, 5))
# plt.title(title)
# x = np.arange(data[i].shape[0])
# y = np.mean(data[i] != 0, axis=(-2, -1))[sort_index]
# plt.bar(x, y)
# plt.xticks(ticks=x[::10], labels=sort_index[::10], rotation=90)
# plt.show()
# + active=""
# max_index = np.unravel_index(np.argmax(skip_datas[0][-1][i]), (512, 7, 7))
# np.max(skip_datas[0][-1][i]), skip_datas[0][-1][i][max_index]
# + active=""
#
# + active=""
# sort_index.shape
# + active=""
#
# + active=""
# plt.figure(figsize=(14, 7))
# plt.hist((skip_datas[0][-3].reshape(-1), skip_datas[0][-2].reshape(-1), skip_datas[0][-1].reshape(-1)), density=True, bins=101)
# plt.show()
# + active=""
# max_index = np.unravel_index(np.argmax(skip_datas[0][-1][i]), (512, 7, 7))
# i = 0
#
# sort_index = np.argsort(skip_datas[0][-1][i][:, max_index[1], max_index[2]])
# for j in range(-3, 0):
# data = skip_datas[0][j][i, :, max_index[1], max_index[2]]
# name = cand_layers[j]
# title = "{}".format(name)
# plt.figure(figsize=(25, 5))
# plt.title(title)
# x = np.arange(data.shape[0])
# # y = np.mean(data, axis=(-2, -1))[sort_index]
# y = data[sort_index]
# plt.bar(x, y)
# plt.xticks(ticks=x[::10], labels=sort_index[::10], rotation=90)
# plt.show()
# + active=""
# from functools import partial
# + active=""
# def imshow_helper(data, s=1.0, vmax=None, vmin=None):
# assert data.ndim == 3
# n = data.shape[0]
# m = int(np.ceil(np.sqrt(n)))
# if vmax is None:
# vmax = data.max()
# if vmin is None:
# vmin = data.min()
# plt.figure(figsize=(s * m, s * m))
# for i, d in enumerate(data):
# plt.subplot(m, m, i + 1)
# plt.imshow(d, vmax=vmax, vmin=vmin)
# plt.axis("off")
# plt.show()
#
# def sort_helper(data, key_index=-1, func=None):
# assert data.ndim == 4
# if func is None:
# func = lambda x: np.mean(x, axis=(-2, -1))
#
# sort_index = np.argsort(func(data[key_index]))
# return sort_index
#
# def sort_imshow_helper(data, norm="each"):
# sort_index = sort_helper(np.asarray(data))
# vmax = np.asarray(data).max()
# vmin = np.asarray(data).min()
# for d in data:
# if norm == "each":
# imshow_helper(d[sort_index])
# elif norm == "all":
# imshow_helper(d[sort_index], vmax=vmax, vmin=vmin)
# + active=""
# np.asarray(skip_datas[0][-3:])[:, i].shape
# + active=""
# i = 0
# sort_imshow_helper(np.asarray(skip_datas[0][-3:])[:, i])
# + active=""
# perm[i]
# + active=""
# plt.hist(skip_model.fc.weight.to("cpu").detach().numpy()[perm[i]])
# + active=""
# i = 0
# sort_imshow_helper(np.asarray(plain_datas[0][-3:])[:, i])
# + active=""
# ResNetはチャネルに一貫した意味があるぽい.一方,PlainNetはバラバラ.
#
# かなり異なる情報を埋め込んでいる
#
# PlainNetは値の範囲もバラバラ.
# + active=""
# cand_layers[1:4]
# + active=""
# cand_layers[4:8]
# + active=""
# cand_layers[8:-3]
# + active=""
# i = 0
# slices = [
# slice(1, 4),
# slice(4, 8),
# slice(8, -3),
# ]
#
# for k, sl in enumerate(slices):
# print(cand_layers[sl])
# sort_imshow_helper(np.asarray(skip_datas[0][sl])[:, i], norm="all")
# + active=""
# from utils import plots
# + active=""
# for i in range(10):
# img = plots.input2image(images[i].to("cpu"))
# title = imgnet_label_name[perm[i]]
# print(title)
# plt.figure(figsize=(5, 5))
# plt.title(title)
# plt.imshow(np.transpose(img, (1, 2, 0)))
# plt.show()
# + active=""
# with torch.no_grad():
# out = skip_model(images).detach().to("cpu").numpy()
# + active=""
# i = 0
# out[i].argmax() == perm[i]
# + active=""
# plt.figure(figsize=(20, 4))
# plt.bar(np.arange(len(out[i])), out[i])
# plt.show()
# + active=""
#
| notebooks/20200826_check_resenets_sparse_featuremap-full.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8 - AzureML
# language: python
# name: python38-azureml
# ---
# # Run FLAML in AzureML
# ## Install requirements
# +
import os
on_ci = "CI_NAME" in os.environ
if on_ci:
os.system("/anaconda/envs/azureml_py38/bin/pip install -r requirements.txt")
else:
os.system("pip install -r requirements.txt")
# -
#
#
# ## 1. Introduction
#
# FLAML is a Python library (https://github.com/microsoft/FLAML) designed to automatically produce accurate machine learning models
# with low computational cost. It is fast and cheap. The simple and lightweight design makes it easy
# to use and extend, such as adding new learners. FLAML can
# - serve as an economical AutoML engine,
# - be used as a fast hyperparameter tuning tool, or
# - be embedded in self-tuning software that requires low latency & resource in repetitive
# tuning tasks.
#
# In this notebook, we use one real data example (binary classification) to showcase how to use FLAML library together with AzureML.
# ### Enable mlflow in AzureML workspace
# +
import mlflow
from azureml.core import Workspace
ws = Workspace.from_config()
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
mlflow.set_experiment("notebooks-flaml-intro-example")
# -
# ## 2. Classification Example
# ### Load data and preprocess
#
# Download [Airlines dataset](https://www.openml.org/d/1169) from OpenML. The task is to predict whether a given flight will be delayed, given the information of the scheduled departure.
# +
from flaml.data import load_openml_dataset
X_train, X_test, y_train, y_test = load_openml_dataset(dataset_id=1169, data_dir="./")
# -
# ### Run FLAML
# In the FLAML automl run configuration, users can specify the task type, time budget, error metric, learner list, whether to subsample, resampling strategy type, and so on. All these arguments have default values which will be used if users do not provide them. For example, the default ML learners of FLAML are `['lgbm', 'xgboost', 'catboost', 'rf', 'extra_tree', 'lrl1']`.
# +
""" import AutoML class from flaml package """
from flaml import AutoML
automl = AutoML()
# -
settings = {
"time_budget": 60, # total running time in seconds
"metric": "accuracy", # primary metrics can be chosen from: ['accuracy','roc_auc','f1','log_loss','mae','mse','r2']
"estimator_list": ["lgbm", "rf", "xgboost"], # list of ML learners
"task": "classification", # task type
"sample": False, # whether to subsample training data
"log_file_name": "airlines_experiment.log", # flaml log file
}
with mlflow.start_run() as run:
"""The main flaml automl API"""
automl.fit(X_train=X_train, y_train=y_train, **settings)
# ### Best model and metric
""" retrieve best config and best learner"""
print("Best ML leaner:", automl.best_estimator)
print("Best hyperparmeter config:", automl.best_config)
print("Best accuracy on validation data: {0:.4g}".format(1 - automl.best_loss))
print("Training duration of best run: {0:.4g} s".format(automl.best_config_train_time))
automl.model
# +
""" pickle and save the automl object """
import pickle
with open("automl.pkl", "wb") as f:
pickle.dump(automl, f, pickle.HIGHEST_PROTOCOL)
# -
""" compute predictions of testing dataset """
y_pred = automl.predict(X_test)
print("Predicted labels", y_pred)
print("True labels", y_test)
y_pred_proba = automl.predict_proba(X_test)[:, 1]
# +
""" compute different metric values on testing dataset"""
from flaml.ml import sklearn_metric_loss_score
print("accuracy", "=", 1 - sklearn_metric_loss_score("accuracy", y_pred, y_test))
print("roc_auc", "=", 1 - sklearn_metric_loss_score("roc_auc", y_pred_proba, y_test))
print("log_loss", "=", sklearn_metric_loss_score("log_loss", y_pred_proba, y_test))
# -
# ### Log history
# +
from flaml.data import get_output_from_log
(
time_history,
best_valid_loss_history,
valid_loss_history,
config_history,
train_loss_history,
) = get_output_from_log(filename=settings["log_file_name"], time_budget=60)
for config in config_history:
print(config)
# +
import matplotlib.pyplot as plt
import numpy as np
plt.title("Learning Curve")
plt.xlabel("Wall Clock Time (s)")
plt.ylabel("Validation Accuracy")
plt.scatter(time_history, 1 - np.array(valid_loss_history))
plt.step(time_history, 1 - np.array(best_valid_loss_history), where="post")
plt.show()
| notebooks/flaml/1.intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # In class EM implementation
# ## Stopping iteration
#
# Need to stop somewhere, keep track how much it is improving when it slows down then you stop optimizing.
# How would actually calculate?
#
# - Each sequences have is length $L$
# - How to calculate things we need given the data?
# ## Computing posteriors (E step)
#
# - Posteriors goes into M step
# - At some point need to calculate post for each base for each sequence
# - Already init $\lambda_{l}$ and $\Psi_{l}$
# - Could do nested for loop but not efficient
# In current model each base is sampled independently so each time you see an A you will have same value.
# $ P(C_{ij}=1|X_{ij}=A) = \frac{\lambda_{1}\Psi^{1}_{A}}{\lambda_{0}\Psi^{0}_{A} + \lambda_{1}\Psi^{1}_{A}} $
# For next class write code to calculate the posteriors.
# $C_{i,j}$ converted into $C_{i, j, l}$ because $C_{i, j}$ could take on multiple values due to multiple models? Want to reduce $C$ to either zero or one when converting to expectation (?).
#
# In the E step $q = P(C | X)$ and therefore depends on what value X we have.
# ## For Thursday
import numpy as np
# Initialize the model parameters $\theta$ which include $\lambda_{0}, \lambda_{1}, \Psi^{0}_{k}$ and $\Psi^{1}_{k}$.
# +
def init_params():
lambda_0 = np.random.uniform()
lambda_1 = 1 - lambda_0
def init_psi():
psi = np.random.uniform(size=(4))
psi_norm = psi / psi.sum()
return psi
psi_0 = init_psi()
psi_1 = init_psi()
return {
'l0': lambda_0, 'l1': lambda_1, 'psi0': psi_0, 'psi1': psi_1
}
theta_0 = init_params()
theta_0
# -
# Calculate posterior probability array. Gives the probability $C=1$ given the identity of a specific nucleotide.
# +
def post_probs(theta_0):
return [
(
(theta_0['l1']*theta_0['psi1'][i]) /
(theta_0['l0']*theta_0['psi0'][i] + theta_0['l1']*theta_0['psi1'][i])
) for i in range(4)
]
probs = post_probs(theta_0) # probability of Cij == 1 given the identity of each nucleotide
probs
# -
# Code from sequence reader assignment to read in Quon enhancer data.
# +
def read_seq_file(filepath):
with open(filepath) as handle:
return [s.upper().strip() for s in handle]
def nuc_to_one_hot(nuc):
# Convert nucleotide to the index in one hot encoded array
# that should be hot (==1)
upper_nuc = nuc.upper()
mapping = {'A': 0, 'T': 1, 'G': 2, 'C': 3}
return mapping[upper_nuc]
def make_matrix(seqs):
# input an iterable of sequences and return one hot matrix
num_seqs, length = len(seqs), len(seqs[0])
# assume all sequences are the same length
matrix = np.zeros((num_seqs, length, 4))
for i, each_seq in enumerate(seqs):
for j, each_nuc in enumerate(each_seq):
hot_index = nuc_to_one_hot(each_nuc)
matrix[i][j][hot_index] = 1
return matrix
# -
seqs_path = '../assignments/data/sequence.padded.txt'
seqs = read_seq_file(seqs_path)
seq_matrix = make_matrix(seqs)
# Multiply each one-hot encoded matrix by the posterior matrix. Since only one value in seq matrix is non-zero and at the same index as the posterior multiplying the two together will give 3D matrix that hold posterior probibilities for each base. The array at positions $X_{ij}$ could be reduced to single values by summing in that format makes more sense in the actual E step implementation.
base_probs = seq_matrix * probs
base_probs
# Version of matrix were we take sum of each array at $X_{ij}$.
base_probs_sum = base_probs.sum(axis=2)
base_probs_sum
| scratch/EM-implementation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Network Analysis
# +
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
SOC_URL = "./actorMovies.csv"
df = pd.read_csv(SOC_URL, sep=";")
for i in range(df["Movies"].count()):
df["Movies"][i] = df["Movies"][i].split("|")
df.head(25)
total = 0
for i in range(df["Movies"].count()):
total += len(df["Movies"][i])
print(total)
df.head()
# -
# First we need to convert the .csv that we are given to the format that we want
# ##### Nodes -> Films Edges -> Films with same actor
df["Movies"][18]
# We have duplicated movies in some actors
for i in range(df["Movies"].count()):
df["Movies"][i] = list(set(df["Movies"][i])) #Can't use pandas.unique() cos type of ["Movies"][i] is -> list
df["Movies"][18]
# Now we make the nodes list just by appending all the films
# +
movies = df.iloc[:,1].copy() #We copy the actors series
#movies = movies.to_frame() #Convert it to a dataframe
nodes = pd.DataFrame(columns=['Id','Movie'])
id = 0
for i in range(len(movies)):
for j in range(len(movies[i])):
nodes = nodes.append({"Id":id,"Movie":movies[i][j]}, ignore_index=True)
id+=1
print(len(nodes))
nodes.head()
# -
#We eliminate the duplicate movies
nodes_list = nodes
nodes_list = nodes_list["Movie"].unique()
nodes_list = pd.DataFrame({'Movie': nodes_list[:]})
nodes_list.rename(columns={0:"Id"})
nodes_list.index.name = "Id"
nodes_list.head()
nodes_list.to_csv("nodes_list.csv")
nodes_list.tail()
# +
edges = pd.DataFrame(columns=["Source","Target","Weight","Type"])
#is_empty = lambda x,y: edges.loc[(edges["Source"] == x) & (edges["Target"] == y)].count().all() == 0
is_empty = lambda x,y,edges: edges.loc[(edges["Source"] == x) & (edges["Target"] == y)].count()[0] == 0
get_index = lambda movie_name : nodes_list.loc[nodes_list["Movie"] == movie_name, "Movie"].index[0]
def add_weight(x,y,edges):
edges.loc[(edges["Source"] == x) & (edges["Target"] == y),"Weight"] += 1
def get_edges(edges):
for i in range(df["Movies"].count()):
for j in range(len(df["Movies"][i])):
#Dont go over the full list -> quicker
#why? -> last element connected in previous iter
for k in range(j+1,len(df["Movies"][i])): #Dont add equal edges or already added
source = get_index(df["Movies"][i][j])
target = get_index(df["Movies"][i][k])
if(source != target):
if(not is_empty(source,target,edges)): #if there is one in k->j, we increase weight
add_weight(source,target,edges)
elif(not is_empty(target,source,edges)): #if there is one in j->k, we increase weight
add_weight(target,source, edges)
else:
#both are empty -> add another edge
edges = edges.append({"Source":source,"Target":target,"Weight":1,"Type":"Undirected"}, ignore_index=True)
return edges
edges = get_edges(edges)
len(edges)
edges.tail()
# -
edges.rename(columns={0:"Id"}) #Adding an index column
edges.index.name = "Id"
edges_test = edges.copy()
edges_test.drop_duplicates(subset=["Source","Target"], inplace=True)
print(len(edges))
len(edges_test)
edges_test.to_csv("edges_list.csv")
# +
| P1/actors_and_movies_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: firedrake
# language: python
# name: firedrake
# ---
# # Problem Description
#
# The previous investigation of posterior consistency may be considered unfair on the traditional techniques since their parameters $\sigma$ and $\alpha$ were not tuned.
# This can be done using what is known as an "l-curve analysis".
# Let's take another look at the two functionals: our new method minimises J
#
# $$J[u, q] =
# \underbrace{\frac{1}{2}\int_{\Omega_v}\left(\frac{u_{obs} - I(u, \text{P0DG}(\Omega_v))}{\sigma}\right)^2dx}_{\text{model-data misfit}} +
# \underbrace{\frac{\alpha^2}{2}\int_\Omega|\nabla q|^2dx}_{\text{regularization}}$$
#
# whilst traditional methods minimise $J'$
#
# $$J'[u, q] =
# \underbrace{\frac{1}{2}\int_{\Omega}\left(\frac{u_{interpolated} - u}{\sigma}\right)^2dx}_{\text{model-data misfit}} +
# \underbrace{\frac{\alpha^2}{2}\int_\Omega|\nabla q|^2dx}_{\text{regularization}}.$$
#
# In $J$, $\sigma$ (the standard deviation of $u_{obs}$) serves the purpose of weighting each observation in the musfit functional appropriately given its measurement uncertainty.
# Much like the choice of $\alpha$ encodes prior information about how confident we are that our solution ought to be smoove, $\sigma$ weights our confidence of each measurement in the misfit part of the functional.
#
# This is fine for $J$ but it might be argued that our use of $\sigma$ in $J'$ is unreasonable since the misfit term is between $u$ and $u_{interpolated}$ rather than $u_{obs}$.
# We should therefore replace $\sigma$ with $\hat{\sigma}$ which we should aim to **find an optimal value for**:
#
# $$J''[u, q] =
# \underbrace{\frac{1}{2}\int_{\Omega}\left(\frac{u_{interpolated} - u}{\hat{\sigma}}\right)^2dx}_{\text{model-data misfit}} +
# \underbrace{\frac{\alpha^2}{2}\int_\Omega|\nabla q|^2dx}_{\text{regularization}}.$$
#
# ## Finding $\hat{\sigma}$
#
# Bayesian purists look away!
# We need to find a value of $\hat{\sigma}$ that gets us to the sweet spot between minimising the misfit term and minimising the regularisation term - we want to be close to the minimum of the misfit whilst allowing the regularisation to still have an effect.
# For a chosen value of $\alpha$ we therfore find a $\hat{\sigma}$ in $J''$ such that we sit at a turning point of a plot of $\frac{1}{\hat{\sigma}}$ against $J''_{misfit}$.
#
# Equivalently (and the more usual problem statement), we want to find a $\hat{\sigma}$ in $J''$ such that we sit at a turning point of a plot of
#
# $$\hat{\sigma}\sqrt{J''_{misfit}} = \sqrt{\frac{1}{2}\int_{\Omega}\left(u_{interpolated} - u\right)^2 dx}$$
#
# against
#
# $$\sqrt{J''_{regularization}} = \sqrt{\frac{\alpha^2}{2}\int_\Omega|\nabla q|^2dx}$$.
# +
from scipy.interpolate import (
LinearNDInterpolator,
NearestNDInterpolator,
CloughTocher2DInterpolator,
Rbf,
)
import matplotlib.pyplot as plt
import firedrake
import firedrake_adjoint
from firedrake import Constant, cos, sin
import numpy as np
from numpy import pi as π
from numpy import random
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
# -
# ## Fake $q_{true}$
# +
mesh = firedrake.UnitSquareMesh(32, 32)
# Solution Space
V = firedrake.FunctionSpace(mesh, family='CG', degree=2)
# q (Control) Space
Q = firedrake.FunctionSpace(mesh, family='CG', degree=2)
seed = 1729
generator = random.default_rng(seed)
degree = 5
x = firedrake.SpatialCoordinate(mesh)
q_true = firedrake.Function(Q)
for k in range(degree):
for l in range(int(np.sqrt(degree**2 - k**2))):
Z = np.sqrt(1 + k**2 + l**2)
ϕ = 2 * π * (k * x[0] + l * x[1])
A_kl = generator.standard_normal() / Z
B_kl = generator.standard_normal() / Z
expr = Constant(A_kl) * cos(ϕ) + Constant(B_kl) * sin(ϕ)
mode = firedrake.interpolate(expr, Q)
q_true += mode
print('Made fake q_true')
# -
# ## Fake $u_{true}$
# +
from firedrake import exp, inner, grad, dx
u_true = firedrake.Function(V)
v = firedrake.TestFunction(V)
f = Constant(1.0)
k0 = Constant(0.5)
bc = firedrake.DirichletBC(V, 0, 'on_boundary')
F = (k0 * exp(q_true) * inner(grad(u_true), grad(v)) - f * v) * dx
firedrake.solve(F == 0, u_true, bc)
print('Made fake u_true')
# Clear tape since don't need to have taped above
tape = firedrake_adjoint.get_working_tape()
tape.clear_tape()
# -
# ## Generating Observational Data $u_{obs}$
# We will investigate with $2^8 = 256$ measurements.
# +
i = 8
np.random.seed(0)
# Decide σ
signal_to_noise = 20
U = u_true.dat.data_ro[:]
u_range = U.max() - U.min()
σ = firedrake.Constant(u_range / signal_to_noise)
# Make random point cloud
num_points = 2**i
xs = np.random.random_sample((num_points,2))
# Generate "observed" data
print(f'Generating {num_points} fake observed values')
ζ = generator.standard_normal(len(xs))
u_obs_vals = np.array(u_true.at(xs)) + float(σ) * ζ
# -
# # Loop over $\hat{\sigma}$ values for each method
# +
# Setup methods and σ̂s
methods = ['nearest', 'linear', 'clough-tocher', 'gaussian']
σ̂_values = np.asarray([1.0,
10.0,
100.0,
1000.0,
10000.0,
100000.0,
0.1,
20.0,
30.0,
40.0,
22.0,
24.0,
26.0,
28.0,
25.0,
27.0,
23.0,
23.5,
27.5,
22.5,
22.75,
27.75,
22.25
2.0,
4.0,
6.0,
8.0,
12.0,
14.0,
16.0,
18.0,
0.01,
0.5,
9.0,
11.0,
15.0])
Js = np.zeros((len(σ̂_values), len(methods)))
J_misfits = Js.copy()
J_regularisations = Js.copy()
J_misfit_times_vars = Js.copy()
# Loop over methods first to avoid recreating interpolators then σ̂s
for method_i, method in enumerate(methods):
print(f'using {method} method')
# Interpolating the mesh coordinates field (which is a vector function space)
# into the vector function space equivalent of our solution space gets us
# global DOF values (stored in the dat) which are the coordinates of the global
# DOFs of our solution space. This is the necessary coordinates field X.
print('Getting coordinates field X')
Vc = firedrake.VectorFunctionSpace(mesh, V.ufl_element())
X = firedrake.interpolate(mesh.coordinates, Vc).dat.data_ro[:]
# Pick the appropriate "interpolate" method needed to create
# u_interpolated given the chosen method
print(f'Creating {method} interpolator')
if method == 'nearest':
interpolator = NearestNDInterpolator(xs, u_obs_vals)
elif method == 'linear':
interpolator = LinearNDInterpolator(xs, u_obs_vals, fill_value=0.0)
elif method == 'clough-tocher':
interpolator = CloughTocher2DInterpolator(xs, u_obs_vals, fill_value=0.0)
elif method == 'gaussian':
interpolator = Rbf(xs[:, 0], xs[:, 1], u_obs_vals, function='gaussian')
print('Interpolating to create u_interpolated')
u_interpolated = firedrake.Function(V, name=f'u_interpolated_{method}_{num_points}')
u_interpolated.dat.data[:] = interpolator(X[:, 0], X[:, 1])
for σ̂_i, σ̂_value in enumerate(σ̂_values):
# Run the forward problem with q = 0 as first guess
print('Running forward model')
u = firedrake.Function(V)
q = firedrake.Function(Q)
bc = firedrake.DirichletBC(V, 0, 'on_boundary')
F = (k0 * exp(q) * inner(grad(u), grad(v)) - f * v) * dx
firedrake.solve(F == 0, u, bc)
print(f'σ̂_i = {σ̂_i} σ̂_value = {σ̂_value}')
σ̂ = firedrake.Constant(σ̂_value)
# Two terms in the functional - note σ̂ in misfit term!
misfit_expr = 0.5 * ((u_interpolated - u) / σ̂)**2
α = firedrake.Constant(0.5)
regularisation_expr = 0.5 * α**2 * inner(grad(q), grad(q))
print('Assembling J\'\'')
J = firedrake.assemble(misfit_expr * dx) + firedrake.assemble(regularisation_expr * dx)
# Create reduced functional
print('Creating q̂ and Ĵ\'\'')
q̂ = firedrake_adjoint.Control(q)
Ĵ = firedrake_adjoint.ReducedFunctional(J, q̂)
# Minimise reduced functional
print('Minimising Ĵ to get q_min')
q_min = firedrake_adjoint.minimize(
Ĵ, method='Newton-CG', options={'disp': True}
)
q_min.rename(name=f'q_min_{method}_{num_points}_{σ̂_value:.2}')
# Get size of misfit term by solving PDE again using q_min
print('Running forward model with q_min')
u = firedrake.Function(V)
bc = firedrake.DirichletBC(V, 0, 'on_boundary')
F = (k0 * exp(q_min) * inner(grad(u), grad(v)) - f * v) * dx
firedrake.solve(F == 0, u, bc)
print("Reformulating J\'\' expressions")
misfit_expr = 0.5 * ((u_interpolated - u) / σ̂)**2
misfit_expr_times_var = 0.5 * (u_interpolated - u)**2
regularisation_expr = 0.5 * α**2 * inner(grad(q_min), grad(q_min))
print("Calculating J_misfit")
J_misfit = firedrake.assemble(misfit_expr * dx)
print(f'J_misfit = {J_misfit}')
# Need to reform regularisation term with q_min instead of q
print("Calculating J_regularisation")
J_regularisation = firedrake.assemble(regularisation_expr * dx)
print(f'J_regularisation = {J_regularisation}')
print("Calculating J\'\'")
J = J_misfit + J_regularisation
print(f'J = {J}')
print('Calculating J_misfit_times_var')
J_misfit_times_var = firedrake.assemble(misfit_expr_times_var * dx)
print(f'J_misfit_times_var = {J_misfit_times_var}')
print(f'saving values: σ̂_i = {σ̂_i} method_i = {method_i}')
J_misfits[σ̂_i, method_i] = J_misfit
J_regularisations[σ̂_i, method_i] = J_regularisation
Js[σ̂_i, method_i] = J
J_misfit_times_vars[σ̂_i, method_i] = J_misfit_times_var
print(f'Writing to q_min to q_mins checkpoint: σ̂_i = {σ̂_i} method_i = {method_i}')
with firedrake.DumbCheckpoint("q_mins", mode=firedrake.FILE_UPDATE) as chk:
chk.store(q_min)
# Clear tape to avoid memory leak
print('Clearing tape')
tape.clear_tape()
# -
# # Save to CSV
# Appending if we already have data
# +
import os
import csv
write_header = not os.path.isfile('Js.csv')
file = open('Js.csv', 'a')
writer = csv.writer(file)
if write_header:
writer.writerow(['sigma_hat'] + methods)
writer.writerows(np.concatenate((σ̂_values[:, np.newaxis], Js), axis=1))
file.close()
write_header = not os.path.isfile('J_misfits.csv')
file = open('J_misfits.csv', 'a')
writer = csv.writer(file)
if write_header:
writer.writerow(['sigma_hat'] + methods)
writer.writerows(np.concatenate((σ̂_values[:, np.newaxis], J_misfits), axis=1))
file.close()
write_header = not os.path.isfile('J_regularisations.csv')
file = open('J_regularisations.csv', 'a')
writer = csv.writer(file)
if write_header:
writer.writerow(['sigma_hat'] + methods)
writer.writerows(np.concatenate((σ̂_values[:, np.newaxis], J_regularisations), axis=1))
file.close()
write_header = not os.path.isfile('J_misfit_times_vars.csv')
file = open('J_misfit_times_vars.csv', 'a')
writer = csv.writer(file)
if write_header:
writer.writerow(['sigma_hat'] + methods)
writer.writerows(np.concatenate((σ̂_values[:, np.newaxis], J_misfit_times_vars), axis=1))
file.close()
# -
| 5-l-curve/l-curve.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Logistic Regression with L2 regularization
#
# The goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following:
#
# * Extract features from Amazon product reviews.
# * Convert an SFrame into a NumPy array.
# * Write a function to compute the derivative of log likelihood function with an L2 penalty with respect to a single coefficient.
# * Implement gradient ascent with an L2 penalty.
# * Empirically explore how the L2 penalty can ameliorate overfitting.
#
# # Fire up GraphLab Create
#
# Make sure you have the latest version of GraphLab Create. Upgrade by
#
# ```
# pip install graphlab-create --upgrade
# ```
# See [this page](https://dato.com/download/) for detailed instructions on upgrading.
from __future__ import division
import graphlab
# ## Load and process review dataset
# For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
products = graphlab.SFrame('amazon_baby_subset.gl/')
# Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:
#
# 1. Remove punctuation using [Python's built-in](https://docs.python.org/2/library/string.html) string functionality.
# 2. Compute word counts (only for the **important_words**)
#
# Refer to Module 3 assignment for more details.
# +
# The same feature processing (same as the previous assignments)
# ---------------------------------------------------------------
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
# -
# Now, let us take a look at what the dataset looks like (**Note:** This may take a few minutes).
products
# ## Train-Validation split
#
# We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use `seed=2` so that everyone gets the same result.
#
# **Note:** In previous assignments, we have called this a **train-test split**. However, the portion of data that we don't train on will be used to help **select model parameters**. Thus, this portion of data should be called a **validation set**. Recall that examining performance of various potential models (i.e. models with different parameters) should be on a validation set, while evaluation of selected model should always be on a test set.
# +
train_data, validation_data = products.random_split(.8, seed=2)
print 'Training set : %d data points' % len(train_data)
print 'Validation set : %d data points' % len(validation_data)
# -
# ## Convert SFrame to NumPy array
# Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
#
# **Note:** The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
# +
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
# -
# We convert both the training and validation sets into NumPy arrays.
#
# **Warning**: This may take a few minutes.
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
# **Are you running this notebook on an Amazon EC2 t2.micro instance?** (If you are using your own machine, please skip this section)
#
# It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running `get_numpy_data` function. Instead, download the [binary file](https://s3.amazonaws.com/static.dato.com/files/coursera/course-3/numpy-arrays/module-4-assignment-numpy-arrays.npz) containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
# ```
# arrays = np.load('module-4-assignment-numpy-arrays.npz')
# feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
# feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
# ```
# ## Building on logistic regression with no L2 penalty assignment
#
# Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
#
# $$
# P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
# $$
#
# where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of **important_words** in the review $\mathbf{x}_i$.
#
# We will use the **same code** as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
product = feature_matrix.dot(coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = 1 / (1 + np.exp(-product))
return predictions
# # Adding L2 penalty
# Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
#
# Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:
# $$
# \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
# $$
#
# ** Adding L2 penalty to the derivative**
#
# It takes only a small modification to add a L2 penalty. All terms indicated in **red** refer to terms that were added due to an **L2 penalty**.
#
# * Recall from the lecture that the link function is still the sigmoid:
# $$
# P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
# $$
# * We add the L2 penalty term to the per-coefficient derivative of log likelihood:
# $$
# \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
# $$
#
# The **per-coefficient derivative for logistic regression with an L2 penalty** is as follows:
# $$
# \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
# $$
# and for the intercept term, we have
# $$
# \frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
# $$
# **Note**: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.
# Write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments:
# * `errors` vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
# * `feature` vector containing $h_j(\mathbf{x}_i)$ for all $i$
# * `coefficient` containing the current value of coefficient $w_j$.
# * `l2_penalty` representing the L2 penalty constant $\lambda$
# * `feature_is_constant` telling whether the $j$-th feature is constant or not.
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
derivative = sum(feature * errors)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
derivative = derivative - (2 * l2_penalty * coefficient)
return derivative
# ** Quiz question:** In the code above, was the intercept term regularized?
# To verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
# $$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) \color{red}{-\lambda\|\mathbf{w}\|_2^2} $$
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
# ** Quiz question:** Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$?
# The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
derivative = feature_derivative_with_L2(errors, feature_matrix[:,j], coefficients[j], l2_penalty, is_intercept)
# add the step size times the derivative to the current coefficient
coefficients[j] = coefficients[j] + (step_size * derivative)
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
# # Explore effects of L2 regularization
#
# Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using **L2 regularization** in analyzing sentiment for product reviews. **As iterations pass, the log likelihood should increase**.
#
# Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
# ## Compare coefficients
#
# We now compare the **coefficients** for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
#
# Below is a simple helper function that will help us create this table.
table = graphlab.SFrame({'word': ['(intercept)'] + important_words})
def add_coefficients_to_table(coefficients, column_name):
table[column_name] = coefficients
return table
# Now, let's run the function `add_coefficients_to_table` for each of the L2 penalty strengths.
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
# Using **the coefficients trained with L2 penalty 0**, find the 5 most positive words (with largest positive coefficients). Save them to **positive_words**. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to **negative_words**.
#
# **Quiz Question**. Which of the following is **not** listed in either **positive_words** or **negative_words**?
# +
positive_words = table.topk('coefficients [L2=0]', 5, reverse = False)['word']
negative_words = table.topk('coefficients [L2=0]', 5, reverse = True)['word']
print positive_words
print negative_words
# -
# Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
# +
import matplotlib.pyplot as plt
# %matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table.filter_by(column_name='word', values=positive_words)
table_negative_words = table.filter_by(column_name='word', values=negative_words)
del table_positive_words['word']
del table_negative_words['word']
for i in xrange(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i in xrange(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
# -
# Run the following cell to generate the plot. Use the plot to answer the following quiz question.
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
# **Quiz Question**: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.
#
# **Quiz Question**: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)
# ## Measuring accuracy
#
# Now, let us compute the accuracy of the classifier model. Recall that the accuracy is given by
#
# $$
# \mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
# $$
#
#
# Recall from lecture that that the class prediction is calculated using
# $$
# \hat{y}_i =
# \left\{
# \begin{array}{ll}
# +1 & h(\mathbf{x}_i)^T\mathbf{w} > 0 \\
# -1 & h(\mathbf{x}_i)^T\mathbf{w} \leq 0 \\
# \end{array}
# \right.
# $$
#
# **Note**: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.
#
# Based on the above, we will use the same code that was used in Module 3 assignment.
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
scores = np.dot(feature_matrix, coefficients)
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
predictions = apply_threshold(scores)
num_correct = (predictions == sentiment).sum()
accuracy = num_correct / len(feature_matrix)
return accuracy
# Below, we compare the accuracy on the **training data** and **validation data** for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.
# +
train_accuracy = {}
train_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)
validation_accuracy = {}
validation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)
# -
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print "L2 penalty = %g" % key
print "train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key])
print "--------------------------------------------------------------------------------"
# * **Quiz question**: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the **highest** accuracy on the **training** data?
# * **Quiz question**: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the **highest** accuracy on the **validation** data?
# * **Quiz question**: Does the **highest** accuracy on the **training** data imply that the model is the best one?
| Classification/Week 2/Assignment 2/module-4-linear-classifier-regularization-assignment-blank.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/krissydolor/Linear-Algebra/blob/master/Assignmet_5_Baniquit_Dolor.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="aPZe5Ak11YWh"
# # **Linear Algebra for ChE**
# + [markdown] id="JE7-yVyF1eup"
# ### **Laboratory 6 : Matrix Operations**
#
# Now that you have a fundamental knowledge about representing and operating with vectors as well as the fundamentals of matrices, we'll try to the same operations with matrices and even more.
# + [markdown] id="m33KDf9K1lxE"
# **Objectives**
#
# At the end of this activity you will be able to:
#
# 1. Be familiar with the fundamental matrix operations.
# 2. Apply the operations to solve intemrediate equations.
# 3. Apply matrix algebra in engineering solutions.
# + [markdown] id="nvBzqTQs1zK1"
# ## **Discussion**
# + id="l_sYly6V1XR9"
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# + [markdown] id="rr2KEwitOpNk"
# ## Transposition
#
# The numpy.transpose() method is a critical component of matrix multiplication. This function permutes or reserves the dimensions of an array and returns the resulting array. It reverses the order of the row and column items. This method returns a modified array of the original array.
# + [markdown] id="o3h8FZULN0rj"
# $$
# A=\begin{bmatrix} 1 & 2 & 5 \\ 5 & -1 & 0 \\ 0 & -3 & 3\end{bmatrix} \\
# $$
# + [markdown] id="nx5oNd3kOgc5"
# $$
# A^T =\begin{bmatrix} 1 & 5 & 0 \\ 2 & -1 & -3 \\ 5 & 0 & 3\end{bmatrix} \\
# $$
# + [markdown] id="g4kww4qVGBZ1"
# This can now be achieved programmatically by using `np.transpose()` or using the `T` method.
# + colab={"base_uri": "https://localhost:8080/"} id="Y_qJfTsTGCxt" outputId="6613101f-8fb6-4b48-aabf-f65031397f5d"
A = np.array([
[1 ,2, 5],
[5, -1, 0],
[0, -3, 3]
])
A
# + id="gD8LiLw0GFID" colab={"base_uri": "https://localhost:8080/"} outputId="5ccda3a4-9b63-4c8a-b1b0-094f43960fa4"
AT1 = np.transpose(A)
AT1
# + id="dMaFvPNmGGXJ"
AT2 = A.T
# + colab={"base_uri": "https://localhost:8080/"} id="Ps81l4udGHnN" outputId="f2389c3e-50d1-41dd-b23a-5356b05d0d06"
np.array_equiv(AT1, AT2)
# + colab={"base_uri": "https://localhost:8080/"} id="3BZwAYeyGOuT" outputId="ded49b92-73ce-4eb2-be66-2cb63518acde"
B = np.array([
[1,2,3,4],
[1,0,2,1],
])
B.shape
# + colab={"base_uri": "https://localhost:8080/"} id="Nnkdqz2eGQAu" outputId="2e15a7f3-6a20-4f14-e36c-1c427e588fd6"
np.transpose(B).shape
# + colab={"base_uri": "https://localhost:8080/"} id="BztXz9oXGR0m" outputId="e3b19ea6-2d06-4f20-e439-0e094f33b495"
B.T.shape
# + [markdown] id="-9b2r0NOGbpG"
# ### **Dot Product / Inner Product**
#
# Recalling the dot product from laboratory activity before, students will try to implement the same operation with matrices. In matrix dot product we are going to get the sum of products of the vectors by row-column pairs. So if we have two matrices $X$ and $Y$:
#
# $$X = \begin{bmatrix}x_{(0,0)}&x_{(0,1)}\\ x_{(1,0)}&x_{(1,1)}\end{bmatrix}, Y = \begin{bmatrix}y_{(0,0)}&y_{(0,1)}\\ y_{(1,0)}&y_{(1,1)}\end{bmatrix}$$
#
# The dot product will then be computed as:
# $$X \cdot Y= \begin{bmatrix} x_{(0,0)}*y_{(0,0)} + x_{(0,1)}*y_{(1,0)} & x_{(0,0)}*y_{(0,1)} + x_{(0,1)}*y_{(1,1)} \\ x_{(1,0)}*y_{(0,0)} + x_{(1,1)}*y_{(1,0)} & x_{(1,0)}*y_{(0,1)} + x_{(1,1)}*y_{(1,1)}
# \end{bmatrix}$$
#
# So if we assign values to $X$ and $Y$:
# $$X = \begin{bmatrix}1&2\\ 0&1\end{bmatrix}, Y = \begin{bmatrix}-1&0\\ 2&2\end{bmatrix}$$
#
#
# + [markdown] id="8xI5tkEVQ74I"
# $$X \cdot Y= \begin{bmatrix} 1*-1 + 2*2 & 1*0 + 2*2 \\ 0*-1 + 1*2 & 0*0 + 1*2 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\2 & 2 \end{bmatrix}$$
# This could be achieved programmatically using `np.dot()`, `np.matmul()` or the `@` operator.
# + [markdown] id="7IJNGAwDTWLh"
# ######In Python, one method of calculating the dot product would be to take the sum of list comprehension and conduct element-wise multiplication on the elements in the list. Alternatively, we can use the np. dot(), @, and np.matmul() functions to fulfill our objective on an easier method. A dot product of two arrays is the inner product of vectors if both a and b are 1-dimensional arrays, which means there is no complex conjugation. In contrast, matrix multiplication is essential if a and b are 2-dimensional arrays; nonetheless, matmul or a @ b are equally preferable methods. Numpy's multiplier function can be used in place of arithmetic if either a or b is 0D (scalar). Multiplying (a, b) or a * b is preferable.
# + id="bo9PxSqAGn30"
X = np.array([
[1,2],
[0,1]
])
Y = np.array([
[-1,0],
[2,2]
])
# + colab={"base_uri": "https://localhost:8080/"} id="v4V0iZhhGqte" outputId="b819c9e1-73aa-4a9b-8693-581fe0be6aa7"
np.dot(X,Y)
# + colab={"base_uri": "https://localhost:8080/"} id="mav2fMVMGrzO" outputId="b2a5b793-b6ce-4dab-a473-646570bde614"
X.dot(Y)
# + colab={"base_uri": "https://localhost:8080/"} id="OabH3WdKWfCo" outputId="e60f0aa0-ac1c-4028-f9be-e3f31171d9a2"
X @ Y
# + colab={"base_uri": "https://localhost:8080/"} id="6H_IsIXEGthf" outputId="fc944df7-8ab0-45bf-c630-dfb93a81ae1c"
np.matmul(X,Y)
# + [markdown] id="90aAAPoKGv3_"
# In matrix dot products there are additional rules compared with vector dot products. Since vector dot products were just in one dimension there are less restrictions. Since now we are dealing with Rank 2 vectors we need to consider some rules:
#
#
# + [markdown] id="55BJuuD-GyOv"
# ### Rule 1: The inner dimensions of the two matrices in question must be the same.
#
# So given a matrix $A$ with a shape of $(a,b)$ where $a$ and $b$ are any integers. If we want to do a dot product between $A$ and another matrix $B$, then matrix $B$ should have a shape of $(b,c)$ where $b$ and $c$ are any integers. So for given the following matrices:
#
# $$C = \begin{bmatrix}0&1&2&4\\2&5&2&2\\0&1&5&6\end{bmatrix},
# D = \begin{bmatrix}1&1&0\\3&3&0\\2&5&8\end{bmatrix},
# O = \begin{bmatrix}2&0&4&5\\1&5&2&5\end{bmatrix}$$
#
# So in this case $A$ has a shape of $(3,4)$, $B$ has a shape of $(3,3)$ and $C$ has a shape of $(2,2)$. So the only matrix pairs that is eligible to perform dot product is matrices $A \cdot C$, or $B \cdot C$.
#
#
#
#
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="C6gvZhe9G4F1" outputId="c1f46a61-fbd7-4d9b-94e7-c79cd84ddda8"
A = np.array([
[2, 4],
[5, -2],
[0, 1]
])
B = np.array([
[1,1],
[3,3],
[-1,-2]
])
C = np.array([
[0,1,1],
[1,1,2]
])
print(A.shape)
print(B.shape)
print(C.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="BkrJ8y3PW3vx" outputId="8b09d171-4b87-4d36-bec8-2d57e0581c33"
A @ C
# + colab={"base_uri": "https://localhost:8080/"} id="qTP-BxAEW4I1" outputId="9891543c-2791-40f0-b485-11fab06ec51f"
B @ C
# + [markdown] id="Z6MOWQZwW8if"
# If you would notice the shape of the dot product changed and its shape is not the same as any of the matrices we used. The shape of a dot product is actually derived from the shapes of the matrices used. So recall matrix $A$ with a shape of $(a,b)$ and matrix $B$ with a shape of $(b,c)$, $A ⋅ B$ should hace a shape $(a,c)$.
# + colab={"base_uri": "https://localhost:8080/"} id="Sq8woyY2X1A6" outputId="330e5229-3fc8-4a1c-f9a3-8e4bd4f45d00"
A @ B.T
# + colab={"base_uri": "https://localhost:8080/"} id="Ym2Cv8E3X0rH" outputId="d66dcac9-1b55-490a-de67-4842c88738b0"
X = np.array([
[1,2,3,0]
])
Y = np.array([
[1,0,4,-1]
])
print(X.shape)
print(Y.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="KN7SAsVpYE78" outputId="a4c3c49b-7baa-4f7d-8217-7913f6e157bb"
Y.T @ X
# + [markdown] id="WD3-zvh0YH9n"
# And you can see that when you try to multiply A and B, it returns `ValueError` pertaining to matrix shape mismatch.
# + [markdown] id="iBGtwhk3atqm"
# ### Rule 2: Dot Product has special properties
#
# Dot products are prevalent in matrix algebra, this implies that it has several unique properties and it should be considered when formulation solutions:
# 1. $A \cdot B \neq B \cdot A$
# 2. $A \cdot (B \cdot C) = (A \cdot B) \cdot C$
# 3. $A\cdot(B+C) = A\cdot B + A\cdot C$
# 4. $(B+C)\cdot A = B\cdot A + C\cdot A$
# 5. $A\cdot I = A$
# 6. $A\cdot \emptyset = \emptyset$
#
# + id="MQROLPmhcOig"
A = np.array([
[3,2,1],
[4,5,1],
[1,1,0]
])
B = np.array([
[4,1,6],
[4,1,9],
[1,4,8]
])
C = np.array([
[1,1,0],
[0,1,1],
[1,0,1]
])
# + colab={"base_uri": "https://localhost:8080/"} id="_UEzEe-AcQW8" outputId="fa30292f-2f7f-4d43-efca-8e683bbf4c50"
A.dot(np.zeros(A.shape))
# + colab={"base_uri": "https://localhost:8080/"} id="9hcWnZBDcTtZ" outputId="3a985bce-37ac-4ad4-c423-07f2e84dbb52"
z_mat = np.zeros(A.shape)
z_mat
# + colab={"base_uri": "https://localhost:8080/"} id="GCsbVsZxcWNo" outputId="bd8f84d5-167d-433c-e018-96cf80e59037"
a_dot_z = A.dot(np.zeros(A.shape))
a_dot_z
# + colab={"base_uri": "https://localhost:8080/"} id="zIjHT5aAcXfJ" outputId="359b3368-ab6a-4e69-dc45-6b641a6e552f"
np.array_equal(a_dot_z,z_mat)
# + colab={"base_uri": "https://localhost:8080/"} id="P3AehdsPcYwC" outputId="abddd31a-cd29-4474-a9b6-5ac3a02dd638"
null_mat = np.empty(A.shape, dtype=float)
null = np.array(null_mat,dtype=float)
print(null)
np.allclose(a_dot_z,null)
# + colab={"base_uri": "https://localhost:8080/"} id="_3a5iNvhbSdP" outputId="fecc3123-e3e1-4625-f187-814c282ee37f"
np.array_equiv(C.dot(B),A)
# + [markdown] id="NwD9AUDOcpOA"
# ### **Determinant**
# The matrix's determinant is the scalar value obtained for a given square matrix. Linear algebra is concerned with the determinant, which is calculated using the elements of a square matrix. It may be thought of as the scaling factor for a matrix transformation. Helpful in solving linear equations, computing the inverse of a matrix, and performing calculus operations.
#
# + colab={"base_uri": "https://localhost:8080/"} id="Gm6SeSN4dB_3" outputId="ee0de935-e532-433e-85c1-c97f69655872"
A = np.array([
[1,4],
[0,3]
])
np.linalg.det(A)
# + colab={"base_uri": "https://localhost:8080/"} id="ofbAEpPWdDzW" outputId="827a96b0-caaf-409a-8017-1e6e67433d5a"
## Now other mathematics classes would require you to solve this by hand,
## and that is great for practicing your memorization and coordination skills
## but in this class we aim for simplicity and speed so we'll use programming
## but it's completely fine if you want to try to solve this one by hand.
B = np.array([
[1,3,5,6],
[0,3,1,3],
[3,1,8,2],
[5,2,6,8]
])
np.linalg.det(B)
# + [markdown] id="g5FITOA7dJoo"
# ## **Inverse**
# The idea of the inverse matrix is a multidimensional refinement of the reciprocal of a number. The inverse matrix approach uses the inverse of a matrix to obtain the solution to linear equations. The inverse matrix is necessary because if a matrix rotates and scales a set of vectors, the inverse matrix undo the scalings and rotations to return the original vectors. On the other hand, using NumPy in the Python programming language can perform the concept of the inverse matrix. To calculate the inverse of a matrix, run the numpy.linalg.inv() that will serve as the function.
#
# Now to determine the inverse of a matrix we need to perform several steps. So let's say we have a matrix $M$:
# $$M = \begin{bmatrix}1&7\\-3&5\end{bmatrix}$$
# First, we need to get the determinant of $M$.
# $$|M| = (1)(5)-(-3)(7) = 26$$
# Next, we need to reform the matrix into the inverse form:
# $$M^{-1} = \frac{1}{|M|} \begin{bmatrix} m_{(1,1)} & -m_{(0,1)} \\ -m_{(1,0)} & m_{(0,0)}\end{bmatrix}$$
# So that will be:
# $$M^{-1} = \frac{1}{26} \begin{bmatrix} 5 & -7 \\ 3 & 1\end{bmatrix} = \begin{bmatrix} \frac{5}{26} & \frac{-7}{26} \\ \frac{3}{26} & \frac{1}{26}\end{bmatrix}$$
# For higher-dimension matrices you might need to use co-factors, minors, adjugates, and other reduction techinques. To solve this programmatially we can use `np.linalg.inv()`.
#
# + [markdown] id="GLEh8AMhRPlg"
# The inverse of a matrix is another matrix that produces the multiplicative identity when multiplied by the supplied matrix. $A^{-1}$ is the inverse of a matrix $A$, where $A \cdot\ A^{-1} = A^{-1} \cdot\ A = I$, where $I$ is the identity matrix.
# + colab={"base_uri": "https://localhost:8080/"} id="0I0yrWdYcquO" outputId="fc3f224a-7806-4761-bfee-cf5bc4b3652b"
N = np.array([
[2, 2, 9],
[4, -5, 2],
[2, 0, 3]
])
np.array(N @ np.linalg.inv(N), dtype=int)
# + colab={"base_uri": "https://localhost:8080/"} id="alZ3qbwXdnWB" outputId="165cfb9b-73c8-402f-81f1-dd9c8da0fbd0"
P = np.array([
[2, 8, 2],
[5, 2, 0],
[2, 4, 6,]
])
Q = np.linalg.inv(P)
Q
# + colab={"base_uri": "https://localhost:8080/"} id="iSjDxdIbdow8" outputId="a1815f25-7431-4f30-eca3-0ad7025ea7de"
## And now let's test your skills in solving a matrix with high dimensions:
N = np.array([
[18,5,23,1,0,33,5],
[0,45,0,11,2,4,2],
[5,9,20,0,0,0,3],
[1,6,4,4,8,43,1],
[8,6,8,7,1,6,1],
[-5,15,2,0,0,6,-30],
[-2,-5,1,2,1,20,12],
])
N_inv = np.linalg.inv(N)
np.array(N @ N_inv,dtype=int)
# + colab={"base_uri": "https://localhost:8080/"} id="hAANjloYcyYP" outputId="a9c559b7-ff68-445b-f7e5-9130b1fc2c97"
P @ Q
# + [markdown] id="Z8e5JfZqMik6"
# ### **Activity 1**
#
# Prove and implement the remaining 6 matrix multiplication properties. You may create your own matrices in which their shapes should not be lower than $(3,3)$. In your methodology, create individual flowcharts for each property and discuss the property you would then present your proofs or validity of your implementation in the results section by comparing your result to present functions from NumPy.
# + id="Zu05UHogi_g0"
A = np.array([
[3,2,1,5],
[4,5,1,6],
[1,1,0,6],
[1,7,8,7]
])
B = np.array([
[4,1,6,7],
[4,1,9,3],
[1,4,8,3],
[1,4,8,3]
])
C = np.array([
[1,2,3,4],
[5,6,7,8],
[9,0,1,2],
[3,4,5,6]
])
AB = np.matmul(A,B)
BA = np.matmul(B,A)
AC = np.matmul(A,C)
CA = np.matmul(C,A)
BC = np.matmul(B,C)
CB = np.matmul(C,B)
K = (AB)+(AC)
P = (BA)+(CA)
# + [markdown] id="b3pNobv_n_bg"
# 1. $A \cdot\ B \neq\ B \cdot\ A$
# + colab={"base_uri": "https://localhost:8080/"} id="31rQnuQcmCqI" outputId="54296308-e16c-4fb7-e926-ccdb81d34672"
print('Matrix AB:')
print(np.matmul(A,B))
print(f'Shape of Matrix AB:\t{np.matmul(A,B).shape}')
# + colab={"base_uri": "https://localhost:8080/"} id="Xm-NNA3HpJSC" outputId="a425533b-4812-42fe-ab76-588547e31ac1"
print('Matrix BA:')
print(np.matmul(B,A))
print(f'Shape of Matrix BA:\t{np.matmul(B,A).shape}')
# + [markdown] id="aKB7vJ4MoVbp"
# 2. $A \cdot\ (B \cdot\ C) = (A \cdot\ B) \cdot\ C$
# + colab={"base_uri": "https://localhost:8080/"} id="CnN1O2glrjmZ" outputId="9cef1a12-7f68-4ce3-988d-7149955b2709"
print('Matrix A(BC):')
print(np.matmul(A,BC))
print(f'Shape of Matrix A(BC):\t{np.matmul(A,BC).shape}')
# + colab={"base_uri": "https://localhost:8080/"} id="ii00c0dIsE0f" outputId="60ca8546-b257-43b5-d79d-c8ee79a3d438"
print('Matrix (AB)C:')
print(np.matmul(AB,C))
print(f'Shape of Matrix (AB)C:\t{np.matmul(AB,C).shape}')
# + [markdown] id="7LTVg4DfrkMP"
# 3. $A\cdot\ (B+C) = A\cdot\ B + A\cdot\ C$
# + colab={"base_uri": "https://localhost:8080/"} id="HJw7F6jarm4p" outputId="5ec2da44-80b0-4554-8b2a-950b44e6985c"
print('Matrix A(B+C):')
print(np.matmul(A,B+C))
print(f'Shape of Matrix A(B+C):\t{np.matmul(A,B+C).shape}')
# + colab={"base_uri": "https://localhost:8080/"} id="to3kA8xotmHa" outputId="4e810342-0290-4239-fec4-ab23d6cbd40e"
print('Matrix (AB)+(AC):')
print(K)
print(f'Shape of Matrix (AB)C:\t{K.shape}')
# + [markdown] id="pkgdE6efc8YE"
# 3. $A\cdot\ (B+C) = A\cdot\ B + A\cdot\ C$
# + colab={"base_uri": "https://localhost:8080/"} id="c7nyq-RMc8v8" outputId="92c77198-3461-410f-a914-cc48daf5339f"
print('Matrix A(B+C):')
print(np.matmul(A,B+C))
print(f'Shape of Matrix A(B+C):\t{np.matmul(A,B+C).shape}')
# + colab={"base_uri": "https://localhost:8080/"} id="rZbZ6A-NdBxg" outputId="6e69a6c9-8025-4f0d-f1bb-66166242ae0d"
print('Matrix (AB)+(AC):')
print(K)
print(f'Shape of Matrix (AB)C:\t{K.shape}')
# + [markdown] id="ScSnlPSEdEnw"
# 4. $(B+C)\cdot A = B\cdot A + C\cdot A$
# + colab={"base_uri": "https://localhost:8080/"} id="TNFDPyiCdFAT" outputId="6626355a-f581-49b8-a5e9-eb5c70da3ec1"
print('Matrix B+C(A):')
print(np.matmul(B+C,A))
print(f'Shape of Matrix B+C(A):\t{np.matmul(B+C,A).shape}')
# + colab={"base_uri": "https://localhost:8080/"} id="uppSrR-vdFZS" outputId="f8b84235-51e5-47ec-af86-72ae3e06a1b8"
print('Matrix (BA)+(CA):')
print(P)
print(f'Shape of Matrix (BC)A:\t{P.shape}')
# + [markdown] id="3tLWX3xBdKHV"
# 5. $A\cdot I = A$
#
# + colab={"base_uri": "https://localhost:8080/"} id="v4RKuSmadMPf" outputId="f35471a2-2f32-4c07-d41c-26d1e1b2ba9e"
A.dot(1)
# + [markdown] id="Tcig0kugdP_w"
# 6. $A\cdot \emptyset = \emptyset$
# + colab={"base_uri": "https://localhost:8080/"} id="Io5vVxIOdOu-" outputId="5805ce84-4bde-4e21-aac4-6c59265adc25"
A.dot(np.zeros(A.shape))
# + colab={"base_uri": "https://localhost:8080/"} id="n0KZ_G3YdQgc" outputId="93db2ebf-5526-48b1-baa5-a1dc47da65b5"
A.dot(0)
| Assignmet_5_Baniquit_Dolor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="uRikUXZvlGu7"
note_seq = ['g8', 'e8', 'e4', 'f8', 'd8', 'd4', 'c8', 'd8', 'e8', 'f8', 'g8', 'g8', 'g4',
'g8', 'e8', 'e8', 'e8', 'f8', 'd8', 'd4', 'c8', 'e8', 'g8', 'g8', 'e8', 'e8', 'e4',
'd8', 'd8', 'd8', 'd8', 'd8', 'e8', 'f4', 'e8', 'e8', 'e8', 'e8', 'e8', 'f8', 'g4',
'g8', 'e8', 'e4', 'f8', 'd8', 'd4', 'c8', 'e8', 'g8', 'g8', 'e8', 'e8', 'e4']
# + colab={"base_uri": "https://localhost:8080/"} id="Jh6DOXYyoH-q" outputId="2062f4db-ed52-48f6-f96a-49966548826c"
note_seq[0:4], note_seq[1:5], note_seq[2:6]
# + id="6kyvq6UcljAb"
code2idx = {'c4':0, 'd4':1, 'e4':2, 'f4':3, 'g4':4, 'a4':5, 'b4':6,
'c8':7, 'd8':8, 'e8':9, 'f8':10, 'g8':11, 'a8':12, 'b8':13}
# + colab={"base_uri": "https://localhost:8080/"} id="KzI7khFalkKq" outputId="a7bf60e5-e011-49e8-dcd8-cea17c9b50d9"
len(note_seq), range(len(note_seq)-4) # [4, 8, 12, ...]
# + colab={"base_uri": "https://localhost:8080/"} id="jRwG04w2iJ1p" outputId="6589cac9-762c-4273-cf33-77010fbf9e6d"
code2idx['g8']
# + colab={"base_uri": "https://localhost:8080/"} id="nDxklzzwm1VP" outputId="35cda4de-5ef1-4156-f46b-8dfedfec694d"
dataset = list()
for i in range(len(note_seq)-4):
subset = note_seq[i:i+4]
items = list()
# print(subset)
for item in subset:
# print(code2idx[item])
items.append(code2idx[item])
# print(items)
dataset.append(items)
print(dataset)
# + id="dxEW96lsozeC"
import numpy as np
datasets = np.array(dataset)
# + colab={"base_uri": "https://localhost:8080/"} id="vyGZ4ohPrOgI" outputId="4de73f48-d040-4fc6-9f89-6b699ce04758"
x_train = datasets[:,0:3]
x_train.shape, #x_train
# + colab={"base_uri": "https://localhost:8080/"} id="vrqLn_ferOip" outputId="d8326201-3c32-4f08-f8f6-c583b2ad6f9a"
y_train = datasets[:,3]
y_train.shape, #y_train
# + colab={"base_uri": "https://localhost:8080/"} id="fnoZHVjerOlx" outputId="ddcb17ce-960b-4d81-e7b2-91b2b99059f0"
len(code2idx)
# + colab={"base_uri": "https://localhost:8080/"} id="uPBv-EKCrXZ0" outputId="96a303d7-cdbe-48b1-c697-e485b083bdcd"
x_train = x_train / 13 # len(code2idx)
x_train[3]
# + [markdown] id="EGvF9ekde-a9"
# # make model
# + id="zNT-0KnjfAiP"
import tensorflow as tf
# + colab={"base_uri": "https://localhost:8080/"} id="kFz02o_sfEz-" outputId="a390970d-393a-4c59-b739-3797860382a6"
x_train.shape, x_train[2] # --> scale
# + colab={"base_uri": "https://localhost:8080/"} id="YSCqpndnfExz" outputId="c48c232c-2db6-46f1-8378-b0d45135090f"
X_train = np.reshape(x_train, (50, 3, 1)) # tensor
X_train.shape, X_train[2]
# + colab={"base_uri": "https://localhost:8080/"} id="lGQ5macYgnwa" outputId="38f6bc3a-112e-4dbb-b3da-a81c08c75e24"
np.unique(y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="8VemYj5ifEux" outputId="396eff2b-010e-4800-c8f5-ce48445b2e75"
model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(3,1))) # input layer
model.add(tf.keras.layers.LSTM(128)) # hidden layer
model.add(tf.keras.layers.Dense(13, activation='softmax')) # output layer
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # gadget
# + id="g9R-3xNBfErZ"
hist = model.fit(X_train, y_train, epochs=1500, batch_size=10) # 50 / 5 = 10
# + [markdown] id="6-ztbCh_lsD_"
# # Evaluate
# + colab={"base_uri": "https://localhost:8080/"} id="0ykNE4QJfEmn" outputId="772f1f24-3141-499a-cf94-0b2cdfbf0d60"
model.evaluate(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="QxTxZXUqrmzi" outputId="31795cf2-1195-4172-950f-2a4b8b45535f"
X_train[4:5]
# + colab={"base_uri": "https://localhost:8080/"} id="fFfnkjcKltMF" outputId="82b42930-fbb4-4f83-d333-bbbf2a529e95"
model.predict(X_train[4:5])
# + id="uXvOb48Cr6at"
first = 0.61538462
second = 0.07692308
third = 0.53846154
# + id="Gcm6XewDr6W7"
# [[[0.61538462], [0.07692308], [0.53846154]]]
# pred = model.predict([[[first], [second], [third]]])
pred = model.predict(X_train[0:1])
# + colab={"base_uri": "https://localhost:8080/"} id="ICG5rb6EfEj5" outputId="de850a4d-570c-4793-9e28-47280b300882"
np.argmax(pred)
# + id="Qn7tHOMffEhF"
# note_seq = ['g8', 'e8', 'e4', 'f8', 'd8', 'd4', 'c8', 'd8', 'e8', 'f8', 'g8', 'g8', 'g4',
# 'g8', 'e8', 'e8', 'e8', 'f8', 'd8', 'd4', 'c8', 'e8', 'g8', 'g8', 'e8', 'e8', 'e4',
# 'd8', 'd8', 'd8', 'd8', 'd8', 'e8', 'f4', 'e8', 'e8', 'e8', 'e8', 'e8', 'f8', 'g4',
# 'g8', 'e8', 'e4', 'f8', 'd8', 'd4', 'c8', 'e8', 'g8', 'g8', 'e8', 'e8', 'e4']
# + id="ofQJc6HofEeU"
# code2idx = {'c4':0, 'd4':1, 'e4':2, 'f4':3, 'g4':4, 'a4':5, 'b4':6,
# 'c8':7, 'd8':8, 'e8':9, 'f8':10, 'g8':11, 'a8':12, 'b8':13}
# + id="-L3FHemLfEbg"
import matplotlib.pyplot as plt
# + colab={"base_uri": "https://localhost:8080/"} id="CxKj_-M4zDhA" outputId="33b1b463-68c1-4dcd-8067-7a55bbb269c5"
hist.history.keys()
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="THdvec2ZzEwH" outputId="92b65958-f45a-4c5a-ea38-61e5298db491"
plt.plot(hist.history['loss'])
plt.plot(hist.history['accuracy'], 'r-')
plt.show()
# + id="6F7pXTLtzGNm"
| Notescale_LSTM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import xgboost as xgb
import xarray as xr
import pandas as pd
import numpy as np
import gc
import matplotlib.pyplot as plt
import logging
import pickle
import seaborn as sns
# logging.disable(logging.CRITICAL)
# import shap
rc={'axes.labelsize': 15.0,
'font.size': 15.0, 'legend.fontsize': 15.0,
'axes.titlesize': 15.0,
'xtick.labelsize': 15.0,
'ytick.labelsize': 15.0}
plt.rcParams.update(**rc)
# -
t_ls = ["daily","monthly"]
# +
fig, axes = plt.subplots(1, 2, figsize=(15, 3),sharex=True)
for i in range(len(t_ls)):
t = t_ls[i]
df = pd.read_csv("./ranking/"+t+"_all.csv").rename(columns={"location":"Region","ranking":"Ranking"})
g = sns.barplot(ax=axes[i], x = "Region", y = "Ranking",
hue = "category", hue_order=["gas","aod","met","emission"],
palette = ["#d7191c","#fdae61","#abdda4","#2b83ba"],
data=df)
if i==0:
axes[i].set_ylabel("Ranking Score")
else:
axes[i].set_ylabel("")
if i==1:
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
else:
g.get_legend().remove()
plt.tight_layout()
plt.savefig("../figures/fig3_ranking_all.pdf")
plt.show()
# +
fig, axes = plt.subplots(1, 2, figsize=(15, 3),sharex=True)
for i in range(len(t_ls)):
t = t_ls[i]
df = pd.read_csv("./ranking/"+t+"_wo_gas.csv").rename(columns={"location":"Region","ranking":"Ranking"})
g = sns.barplot(ax=axes[i], x = "Region", y = "Ranking",
hue = "category", hue_order=["aod","met","emission"],
palette = ["#fdae61","#abdda4","#2b83ba"],
data=df)
if i==0:
axes[i].set_ylabel("Ranking Score")
else:
axes[i].set_ylabel("")
if i==1:
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
else:
g.get_legend().remove()
plt.tight_layout()
plt.savefig("../figures/fig3_ranking_wo_gas.pdf")
plt.show()
# +
fig, axes = plt.subplots(1, 2, figsize=(15, 3),sharex=True)
for i in range(len(t_ls)):
t = t_ls[i]
df = pd.read_csv("./ranking/"+t+"_wo_aod.csv").rename(columns={"location":"Region","ranking":"Ranking"})
g = sns.barplot(ax=axes[i], x = "Region", y = "Ranking",
hue = "category", hue_order=["gas","met","emission"],
palette = ["#d7191c","#abdda4","#2b83ba"],
data=df)
if i==0:
axes[i].set_ylabel("Ranking Score")
else:
axes[i].set_ylabel("")
if i==1:
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
else:
g.get_legend().remove()
plt.tight_layout()
plt.savefig("../figures/fig3_ranking_wo_aod.pdf")
plt.show()
| 3_scripts_for_figures/fig3_ranking_scores.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: nlp
# language: python
# name: nlp
# ---
import os, sys;
sys.path.append("/dadosDL/database/spacy/Turku-neural-parser-pipeline")
from tnparser.pipeline import read_pipelines, Pipeline;
available_pipelines=read_pipelines("models_ptbr/pipelines.yaml")
p=Pipeline(available_pipelines["parse_wslines_no_lemmas"])
import spacy;
from spacy.training.converters import conllu_to_docs;
import numpy as np;
# +
def turku2doc(parsed, retiraLinhas=4):
if retiraLinhas > 0:
splitedLines = parsed.split('\n');
conlluText = '\n'.join(splitedLines[4:])
else:
conlluText = parsed;
docs = conllu_to_docs(conlluText);
doc = next(docs, None);
return doc;
def doc2text(doc):
result = '';
for token in doc:
# result = result + token.text + token.whitespace_;
result = result + ' ' + token.text;
return result;
def svg2PNG(doc, outFile, style='dep', linux = True):
from pathlib import Path;
svgFile = os.path.splitext(outFile)[0] + '.svg';
svg = spacy.displacy.render(doc, style=style, jupyter=False);
output_path = Path(svgFile);
output_path.open("w", encoding="utf-8").write(svg);
pngfile = os.path.splitext(outFile)[0] + '.png';
if linux:
os.system('inkscape -e %s %s' % (pngfile, svgFile));
else: # windows
cmd = '"C:\\Program Files\\Inkscape\\bin\\inkscape.com" -o "%s" %s' % (outFile, svgFile);
bat = [cmd];
saveList(bat, 'temp.bat');
os.system('temp.bat');
# -
nlp = spacy.load("pt_core_news_sm")
texto = 'Há uma imagem nodular mediastinal, adjacente à veia cava superior / átrio direito, em contiguidade com a massa do lobo superior direito, medindo 1,5 x 1,5 cm, que pode representar linfonodomegalia ou extensão tumoral';
# +
docSpacy = nlp(texto);
spacyText = doc2text(docSpacy);
parsed=p.parse(spacyText);
doc = turku2doc(parsed, 0);
spacy.displacy.render(doc, style='dep');
svg2PNG(doc, 'diaparser.svg');
# -
docspacy = nlp(texto);
spacy.displacy.render(docspacy, style='dep');
svg2PNG(docspacy, 'spacy.svg');
print(parsed);
| TurkuSpacyPtBr.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.0.3
# language: julia
# name: julia-1.0
# ---
# # Compute the neon K-LL Auger spectrum
using JAC
# In this tutorial, we wish to calculate the K-LL spectrum of atomic neon after $1s$ inner-shell ionization as it may occurs, for instance, following photon, electron or neutron impact. An autoionization of an atom or ion generally occurs if the initially-excited level is **energetically embedded into the continuum of the next higher charge state**, and then often also dominates the decay of the atom.
#
# Since the neon K-shell ($1s$) electron has a binding energy larger than 800 eV, this condition is fulfilled and may, in priciple, even lead to a 4-5 times multiple ionization, although the initial excitation energy does not tell much of how many electrons will be emitted eventually. This depends of course on the particular decay path
# of an atom and how easily further electrons can be *shaked-up* (or *off*) the atom during the de-excitation.
#
# An Auger electron emission often leads to characteristic and energetically isolated lines as the kinetic energy of the emitted electrons are given simply by the difference of the (total) initial- and final-level energies. We therefore need to specify only the initial- and final-state multiplets as well as the process identifier `AugerX` in order to calculate the K-LL spectrum with just the standard settings, but by defining a grid that is appropriate to describe *outgoing* electrons:
wa = Atomic.Computation("Neon K-LL Auger spectrum", Nuclear.Model(10.);
grid=JAC.Radial.Grid("grid: by given parameters"; rnt = 2.0e-6, h = 5.0e-2,
hp = 1.0e-2, NoPoints = 600),
initialConfigs=[Configuration("1s 2s^2 2p^6")],
finalConfigs =[Configuration("1s^2 2s^2 2p^4"), Configuration("1s^2 2s 2p^5"),
Configuration("1s^2 2p^6")], process = JAC.AugerX )
perform(wa)
#
# This computation just applies simple *Bessel-type* continuum orbitals which are normalized on the basis of a pure sine behaviour at large radii. Moreover, the Auger amplitudes are computed by including the Coulomb interaction. Further control about this computation can be gained by re-defining the method for the integration of the continuum orbitals as well as by means of the `Auger.Settings`. For example, we can use a B-spline-Galerkin method and a normalization for a pure asymptotic Coulomb field to generate all continuum orbitals of interest:
#
setDefaults("method: continuum, Galerkin")
setDefaults("method: normalization, pure Coulomb")
#
# Moreover, we can first have a look how the standard settings are defined at present:
#
? AutoIonization.Settings
AutoIonization.Settings()
#
# Obviously, neither the Auger energies nor the symmetries of the (outgoing) partial wave are restricted. Apart from the Coulomb interaction, we could just included the Breit interaction (mainly for test purposes) or both, Coulomb+Breit together, and independent of whether this interaction has been incorporated in the SCF and CI computations before. In addition, we can also calculate the angular anisotropy coefficients as well as specify individual lines or symmetries to be included. We here leave the selection of individual lines to later and define the settings as:
#
autoSettings = AutoIonization.Settings(true, true, false, Tuple{Int64,Int64}[], 0.0, 1.0e5, 10, "Coulomb")
#
# and re-run the computations with these settings:
#
wb = Atomic.Computation("Neon K-LL Auger spectrum", Nuclear.Model(10.);
grid=JAC.Radial.Grid("grid: by given parameters"; rnt = 2.0e-5, h = 5.0e-2,
hp = 1.3e-2, NoPoints = 600),
initialConfigs=[Configuration("1s 2s^2 2p^6")],
finalConfigs =[Configuration("1s^2 2s^2 2p^4"), Configuration("1s^2 2s 2p^5"),
Configuration("1s^2 2p^6")], process = JAC.Auger,
processSettings = autoSettings )
perform(wb)
# In the setting above, we could have specified also a computation for just a few selected transitions or just for transitions within a given energy interval. We shall leave the discussion of these *options* to one of the next tutorials
| tutorials/10-compute-Ne-KLL-Auger-spectrum.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:ds]
# language: python
# name: conda-env-ds-py
# ---
# + [markdown] Collapsed="false"
# # Nonparametric Estimation of Threshold Exceedance Probability
# + Collapsed="false"
import numpy as np
import pandas as pd
from scipy.special import gamma
# + [markdown] Collapsed="false"
# Assume that $\left\{ X(s): s \in D \subset \mathbb R^d \right\}$. For fixed $x_0 \in \mathbb R$ we define the exceedance probability at location $s$ as
#
# $$
# P_{x_0} (s) = P\left[ X(s) \geq x_0 \right]
# $$
#
# we define an estimator as
#
# $$
# \hat P_{x_0} (s) = \frac{
# \sum_{i=1}^n K \left(\frac{s_i - s}{h}\right) \pmb 1_{ \left\{X(s) \geq x_0\right\}}
# }{
# \sum_{i=1}^n K \left(\frac{s_i - s}{h}\right)
# }
# $$
#
# where $h$ represents a bandwidth parameter and $K: \mathbb R^d \to \mathbb R$ is a kernel function.
# + [markdown] Collapsed="false"
# Example:
#
# Epanechnikov Kernel:
#
# $$
# K(u) = \frac{3}{4} \left( 1 - u^2 \right), \qquad |u| \leq 1
# $$
# + Collapsed="false"
d = 2
epanechnikov_cte_2d = gamma(2 + d / 2) / (np.pi ** (d / 2))
# + Collapsed="false"
def epanechnikov(u, cte=epanechnikov_cte_2d):
u_norm = np.linalg.norm(u)
if u_norm <= 1:
return epanechnikov_cte_2d * (1 - u_norm ** 2)
else:
return 0.0
# + Collapsed="false"
print(epanechnikov(0.5))
print(epanechnikov(2))
print(epanechnikov(np.array([0.0, 0.1, 1., 2.])))
# + Collapsed="false"
x = np.array(
[
[0.1, 0.2],
[1.0, 2.0],
[0.0, 0.0]
]
)
# + Collapsed="false"
np.apply_along_axis(epanechnikov, 1, x)
# + [markdown] Collapsed="false"
# Create a mesh
# + Collapsed="false"
x_max = 1
y_max = 1
nx, ny = (6, 6)
x = np.linspace(0, x_max, nx)
y = np.linspace(0, y_max, ny)
xv, yv = np.meshgrid(x, y)
xv
# + Collapsed="false"
yv
# + Collapsed="false"
S = np.stack((xv.ravel(), yv.ravel()), axis=1)
S.shape
# + Collapsed="false"
s_est = np.array([0.2, 0.4])
# + Collapsed="false"
(S - s_est).shape
# + Collapsed="false"
h = 0.01
# + Collapsed="false"
np.apply_along_axis(epanechnikov, 1, (S - s_est) / h)
| nonparametric_estimation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="A9YRrFYNY37s"
# # DISCLAIMER
# Copyright 2021 Google LLC.
#
# *This solution, including any related sample code or data, is made available on an “as is,” “as available,” and “with all faults” basis, solely for illustrative purposes, and without warranty or representation of any kind. This solution is experimental, unsupported and provided solely for your convenience. Your use of it is subject to your agreements with Google, as applicable, and may constitute a beta feature as defined under those agreements. To the extent that you make any data available to Google in connection with your use of the solution, you represent and warrant that you have all necessary and appropriate rights, consents and permissions to permit Google to use and process that data. By using any portion of this solution, you acknowledge, assume and accept all risks, known and unknown, associated with its usage, including with respect to your deployment of any portion of this solution in your systems, or usage in connection with your business, if at all.*
# + [markdown] id="iTsKaxLPY37t"
# # Crystalvalue Demo: Predictive Customer LifeTime Value using Synthetic Data
#
# This demo runs the Crystalvalue python library in a notebook using generated synthetic data. This notebook assumes that it is being run from within a [Google Cloud Platform AI Notebook](https://console.cloud.google.com/vertex-ai/notebooks/list/instances) with a Compute Engine default service account (the default setting when an AI Notebook is created) and with a standard Python 3 environment. For more details on the library please see the readme or for a more in-depth guide, including scheduling predictions, please see the `demo_with_real_data_notebook.ipynb`.
# + id="mKWzgxQ1Y37v"
import pandas as pd
from src import crystalvalue
# + id="gCVphBgXY37w"
# Initiate the CrystalValue class with the relevant parameters.
pipeline = crystalvalue.CrystalValue(
project_id='your_project_name', # Enter your GCP Project name.
dataset_id='a_dataset_name' # The dataset will be created if it doesn't exist.
)
# + id="QThn6fzB8Xew"
# Create a synthetic dataset and load it to BigQuery.
data = pipeline.create_synthetic_data(table_name='synthetic_data')
# + id="o-6953VX9YRv"
# Create summary statistics of the data and load it to Bigquery.
summary_statistics = pipeline.run_data_checks(transaction_table_name='synthetic_data')
# + id="yK5Lfb1SY37w"
# Feature engineering for model training with test/train/validation split.
crystalvalue_train_data = pipeline.feature_engineer(
transaction_table_name='synthetic_data')
# + id="CSK1FtHVY37y"
# Train an AutoML model.
model_object = pipeline.train_automl_model()
# + id="dxA48EdTcs3c"
# Deploy the model.
model_object = pipeline.deploy_model()
# + id="iCYiuW8pEEsQ"
# Evaluate the model.
metrics = pipeline.evaluate_model()
# + id="GDleZHjEEJ72"
# Create features for prediction.
crystalvalue_predict_data = pipeline.feature_engineer(
transaction_table_name='synthetic_data',
query_type='predict_query')
# Predict LTV for all customers.
predictions = pipeline.predict(
input_table=crystalvalue_predict_data,
destination_table='crystalvalue_predictions' # The table will be created if it doesn't exist.
)
# + [markdown] id="R7RczTD6S1_O"
# # Clean Up
# + [markdown] id="6FyiJ93rTWhW"
# To clean up tables created during this demo, delete the BigQuery tables that were created. All Vertex AI resources can be removed from the [Vertex AI console](https://console.cloud.google.com/vertex-ai).
# + id="1wQI6E8US3Hc"
pipeline.delete_table('synthetic_data')
pipeline.delete_table('crystalvalue_data_statistics')
pipeline.delete_table('crystalvalue_evaluation')
pipeline.delete_table('crystalvalue_train_data')
pipeline.delete_table('crystalvalue_predict_data')
pipeline.delete_table('crystalvalue_predictions')
| demo_quick_start_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Advanced Wind Turbine Simulation Workflow
#
# * A more advanced turbine workflow will include many steps (Which have been showcased in previous examples)
# * The process exemplified here includes:
# 1. Weather data extraction from a MERRA dataset (windspeed, pressure, temperature)
# 2. Spatial adjustment of the windspeeds
# 3. Vertical projection of the wind speeds
# 4. Wind speed density correction
# 5. Power curve convolution
# 6. Capacity Factor Estimation
# +
import reskit as rk
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# # Simulate a Single Location
# +
# Set some constants for later
TURBINE_CAPACITY = 4200 # kW
TURBINE_HUB_HEIGHT = 120 # meters
TURBINE_ROTOR_DIAMETER = 136 # meters
TURBINE_LOCATION = (6.0,50.5) # (lon, lat)
# +
# 1. Create a weather source, load, and extract weather variables
src = rk.weather.MerraSource(rk.TEST_DATA["merra-like"], bounds=[5,49,7,52], verbose=False)
src.sload_elevated_wind_speed()
src.sload_surface_pressure()
src.sload_surface_air_temperature()
raw_windspeeds = src.get("elevated_wind_speed", locations=TURBINE_LOCATION, interpolation='bilinear')
raw_pressure = src.get("surface_pressure", locations=TURBINE_LOCATION, interpolation='bilinear')
raw_temperature = src.get("surface_air_temperature", locations=TURBINE_LOCATION, interpolation='bilinear')
print(raw_windspeeds.head())
# +
# 2. Vertically project wind speeds to hub height
roughness = rk.wind.roughness_from_clc(
clc_path=rk.TEST_DATA['clc-aachen_clipped.tif'],
loc=TURBINE_LOCATION)
projected_windspeed = rk.wind.apply_logarithmic_profile_projection(
measured_wind_speed=raw_windspeeds,
measured_height=50, # The MERRA dataset offers windspeeds at 50m
target_height=TURBINE_HUB_HEIGHT,
roughness=roughness)
print(projected_windspeed.head())
# +
# 3. Apply density correction
pressure_corrected_windspeeds = rk.wind.apply_air_density_adjustment(
wind_speed=projected_windspeed,
pressure=raw_pressure,
temperature=raw_temperature,
height=TURBINE_HUB_HEIGHT)
pressure_corrected_windspeeds.head()
# +
# 4. Power curve estimation and convolution
power_curve = rk.wind.PowerCurve.from_capacity_and_rotor_diam(
capacity=TURBINE_CAPACITY,
rotor_diam=TURBINE_ROTOR_DIAMETER)
convoluted_power_curve = power_curve.convolute_by_gaussian(scaling=0.06, base=0.1)
# +
# 5. Capacity factor estimation
capacity_factors = convoluted_power_curve.simulate(wind_speed=pressure_corrected_windspeeds)
capacity_factors
# -
plt.plot(capacity_factors)
plt.show()
# # Simulate multiple locations at once (recommended)
# +
TURBINE_CAPACITY = 4200 # kW
TURBINE_HUB_HEIGHT = 120 # meters
TURBINE_ROTOR_DIAMETER = 136 # meters
TURBINE_LOCATION = np.array([(6.25,51.), (6.50,51.), (6.25,50.75)]) # (lon,lat)
# 1
raw_windspeeds = src.get("elevated_wind_speed", locations=TURBINE_LOCATION, interpolation='bilinear')
raw_pressure = src.get("surface_pressure", locations=TURBINE_LOCATION, interpolation='bilinear')
raw_temperature = src.get("surface_air_temperature", locations=TURBINE_LOCATION, interpolation='bilinear')
# 2
roughness = rk.wind.roughness_from_clc(
clc_path=rk.TEST_DATA['clc-aachen_clipped.tif'],
loc=TURBINE_LOCATION)
projected_windspeed = rk.wind.apply_logarithmic_profile_projection(
measured_wind_speed=raw_windspeeds,
measured_height=50, # The MERRA dataset offers windspeeds at 50m
target_height=TURBINE_HUB_HEIGHT,
roughness=roughness)
# 3
pressure_corrected_windspeeds = rk.wind.apply_air_density_adjustment(
wind_speed=projected_windspeed,
pressure=raw_pressure,
temperature=raw_temperature,
height=TURBINE_HUB_HEIGHT)
convoluted_power_curve = power_curve.convolute_by_gaussian(scaling=0.06, base=0.1)
# 4
power_curve = rk.wind.PowerCurve.from_capacity_and_rotor_diam(
capacity=TURBINE_CAPACITY,
rotor_diam=TURBINE_ROTOR_DIAMETER)
# 5
capacity_factors = convoluted_power_curve.simulate(wind_speed=pressure_corrected_windspeeds)
# Print result
capacity_factors
# -
capacity_factors.plot()
plt.show()
| examples/3.05b-Wind-Turbine_Simulation_Workflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# # Sampling pose DKF trained on H3.6M
# + deletable=true editable=true
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
import os
import addpaths
from load import loadDataset
from h36m_loader import insert_junk_entries
import os
import numpy as np
from scipy.signal import convolve2d
# Stupid hack to make parameter loading actually work
# (ugh, this is super brittle code)
import sys
del sys.argv[1:]
# CONFIG_PATH = './chkpt-h36m/DKF_lr-8_0000e-04-vm-R-inf-structured-dh-50-ds-10-nl-relu-bs-20-ep-2000-rs-600-ttype-simple_gated-etype-mlp-previnp-False-ar-1_0000e+01-rv-5_0000e-02-nade-False-nt-5000-uid-config.pkl'
# WEIGHT_PATH = './chkpt-h36m/DKF_lr-8_0000e-04-vm-R-inf-structured-dh-50-ds-10-nl-relu-bs-20-ep-2000-rs-600-ttype-simple_gated-etype-mlp-previnp-False-ar-1_0000e+01-rv-5_0000e-02-nade-False-nt-5000-uid-EP375-params.npz'
# sys.argv.extend('-vm R -infm structured -ds 10 -dh 50'.split())
# CONFIG_PATH = './chkpt-h36m/DKF_lr-8_0000e-04-vm-L-inf-structured-dh-50-ds-10-nl-relu-bs-20-ep-2000-rs-600-ttype-simple_gated-etype-mlp-previnp-False-ar-1_0000e+01-rv-5_0000e-02-nade-False-nt-5000-past-only-config.pkl'
# WEIGHT_PATH = './chkpt-h36m/DKF_lr-8_0000e-04-vm-L-inf-structured-dh-50-ds-10-nl-relu-bs-20-ep-2000-rs-600-ttype-simple_gated-etype-mlp-previnp-False-ar-1_0000e+01-rv-5_0000e-02-nade-False-nt-5000-past-only-EP25-params.npz'
# sys.argv.extend('-vm L -infm structured -ds 10 -dh 50 -uid past-only'.split())
# CONFIG_PATH = './chkpt-h36m/DKF_lr-8_0000e-04-vm-L-inf-structured-dh-50-ds-10-nl-relu-bs-20-ep-2000-rs-600-ttype-simple_gated-etype-conditional-previnp-False-ar-1_0000e+01-rv-5_0000e-02-nade-False-nt-5000-past-only-cond-emis-config.pkl'
# WEIGHT_PATH = './chkpt-h36m/DKF_lr-8_0000e-04-vm-L-inf-structured-dh-50-ds-10-nl-relu-bs-20-ep-2000-rs-600-ttype-simple_gated-etype-conditional-previnp-False-ar-1_0000e+01-rv-5_0000e-02-nade-False-nt-5000-past-only-cond-emis-EP25-params.npz'
# sys.argv.extend('-vm L -infm structured -ds 10 -dh 50 -uid past-only-cond-emis -etype conditional'.split())
CONFIG_PATH = './chkpt-h36m/DKF_lr-8_0000e-04-vm-L-inf-structured-dh-50-ds-10-nl-relu-bs-20-ep-2000-rs-600-ttype-simple_gated-etype-mlp-previnp-False-ar-1_0000e+01-rv-5_0000e-02-nade-False-nt-5000-cond-True-past-only-config.pkl'
WEIGHT_PATH = './chkpt-h36m/DKF_lr-8_0000e-04-vm-L-inf-structured-dh-50-ds-10-nl-relu-bs-20-ep-2000-rs-600-ttype-simple_gated-etype-mlp-previnp-False-ar-1_0000e+01-rv-5_0000e-02-nade-False-nt-5000-cond-True-past-all-act-cond-EP1075-params.npz'
sys.argv.extend('-vm L -cond -infm structured -ds 10 -dh 50 -uid past-only'.split())
sys.argv.extend(['-reload', WEIGHT_PATH, '-params', CONFIG_PATH])
# + deletable=true editable=true
dataset = loadDataset(use_cond=True)
# + deletable=true editable=true
from parse_args_dkf import parse; params = parse()
from utils.misc import removeIfExists,createIfAbsent,mapPrint,saveHDF5,displayTime
from stinfmodel_fast.dkf import DKF
import stinfmodel_fast.learning as DKF_learn
import stinfmodel_fast.evaluate as DKF_evaluate
# + deletable=true editable=true
if 'h36m_action_names' in dataset:
act_names = dataset['h36m_action_names']
print('Action names: ' + ', '.join(map(str, act_names)))
one_hot_acts = {}
hot_vec_size = len(act_names)
for hot_bit, name in enumerate(act_names):
one_hot_acts[name] = (np.arange(hot_vec_size) == hot_bit)
else:
print('No action names found')
# + deletable=true editable=true
use_cond = bool(params.get('use_cond', False))
params['savedir']+='-h36m'
# createIfAbsent(params['savedir'])
# Add dataset and NADE parameters to "params" which will become part of the
# model
for k in ['dim_observations','data_type']:
params[k] = dataset[k]
mapPrint('Options: ',params)
if params['use_nade']:
params['data_type']='real_nade'
# Remove from params
removeIfExists('./NOSUCHFILE')
reloadFile = params.pop('reloadFile')
pfile=params.pop('paramFile')
# paramFile is set inside the BaseClass in theanomodels
# to point to the pickle file containing params"""
assert os.path.exists(pfile),pfile+' not found. Need paramfile'
print 'Reloading trained model from : ',reloadFile
print 'Assuming ',pfile,' corresponds to model'
dkf = DKF(params, paramFile = pfile, reloadFile = reloadFile)
# + deletable=true editable=true
def smooth_seq(seq):
assert seq.ndim == 2, "need 2d seq (real shape %r)" % (seq,)
kernel = [0.1, 0.25, 0.3, 0.25, 0.1]
full_kernel = np.array(kernel).reshape((-1, 1))
rv = convolve2d(seq, full_kernel, mode='valid')
assert rv.ndim == 2
assert rv.shape[1] == seq.shape[1]
assert rv.shape[0] <= seq.shape[0]
return rv
# + deletable=true editable=true
if not use_cond:
# No need to do conditional nonsense!
oodles_of_samples = dkf.sample(nsamples=50, T=1024)
sample_X, sample_Z = oodles_of_samples
print('Output shape: %s' % str(sample_X.shape))
mu = dataset['h36m_mean'].reshape((1, 1, -1))
sigma = dataset['h36m_std'].reshape((1, 1, -1))
real_X = insert_junk_entries(sample_X * sigma + mu)
dest_dir = './generated/'
try:
os.makedirs(dest_dir)
except OSError:
pass
for i, sampled_times in enumerate(real_X):
dest_fn = os.path.join(dest_dir, 'seq-%i.txt' % i)
print('Saving %s' % dest_fn)
np.savetxt(dest_fn, sampled_times, delimiter=',', fmt='%f')
# Do the same thing, but smoothed
smooth_dest_dir = './generated-smooth/'
try:
os.makedirs(smooth_dest_dir)
except OSError:
pass
for i, sampled_times in enumerate(real_X):
dest_fn = os.path.join(smooth_dest_dir, 'seq-%i.txt' % i)
print('Saving %s' % dest_fn)
smooth_times = smooth_seq(sampled_times)
np.savetxt(dest_fn, smooth_times, delimiter=',', fmt='%f')
# + deletable=true editable=true
if use_cond:
seqs_per_act = 2
seq_length = 256
dest_dir = './generated-wacts/'
try:
os.makedirs(dest_dir)
except OSError:
pass
# start by generating some sequences for each action type
for act_name, one_hot_rep in one_hot_acts.items():
print('Working on action %s' % act_name)
U = np.stack([one_hot_rep] * seq_length, axis=0)
oodles_of_samples = dkf.sample(nsamples=seqs_per_act, T=seq_length, U=U)
sample_X, sample_Z = oodles_of_samples
mu = dataset['h36m_mean'].reshape((1, 1, -1))
sigma = dataset['h36m_std'].reshape((1, 1, -1))
real_X = insert_junk_entries(sample_X * sigma + mu)
for i, sampled_times in enumerate(real_X):
dest_pfx = os.path.join(dest_dir, 'act-%s-seq-%i' % (act_name, i))
dest_fn = dest_pfx + '.txt'
print('Saving ' + dest_fn)
np.savetxt(dest_fn, sampled_times, delimiter=',', fmt='%f')
dest_fn_smooth = dest_pfx + '-smooth.txt'
print('Saving ' + dest_fn_smooth)
smooth_sampled_times = smooth_seq(sampled_times)
np.savetxt(dest_fn_smooth, smooth_sampled_times, delimiter=',', fmt='%f')
# now choose random pairs of (distinct) actions and simulate
# a transition at half-way point
num_pairs = 10
nacts = len(act_names)
chosen_idxs = np.random.permutation(nacts * (nacts-1))[:num_pairs]
act_pairs = [(act_names[idxp%nacts], act_names[idxp//nacts]) \
for idxp in chosen_idxs]
# act_pairs = [('walking', 'eating'), ('eating', 'walking'),
# ('walking', 'smoking'), ('smoking', 'walking'),
# ('smoking', 'eating'), ('eating', 'smoking')]
for act1, act2 in act_pairs:
print('Computing sequence for action %s -> %s' % (act1, act2))
len1 = seq_length // 2
len2 = seq_length - len1
rep1 = one_hot_acts[act1]
rep2 = one_hot_acts[act2]
U = np.stack([rep1] * len1 + [rep2] * len2, axis=0)
oodles_of_samples = dkf.sample(nsamples=seqs_per_act, T=seq_length, U=U)
sample_X, sample_Z = oodles_of_samples
mu = dataset['h36m_mean'].reshape((1, 1, -1))
sigma = dataset['h36m_std'].reshape((1, 1, -1))
real_X = insert_junk_entries(sample_X * sigma + mu)
for i, sampled_times in enumerate(real_X):
dest_pfx = os.path.join(dest_dir, 'trans-%s-to-%s-seq-%i' % (act1, act2, i))
dest_fn = dest_pfx + '.txt'
print('Saving ' + dest_fn)
np.savetxt(dest_fn, sampled_times, delimiter=',', fmt='%f')
dest_fn_smooth = dest_pfx + '-smooth.txt'
print('Saving ' + dest_fn_smooth)
smooth_sampled_times = smooth_seq(sampled_times)
np.savetxt(dest_fn_smooth, smooth_sampled_times, delimiter=',', fmt='%f')
# + deletable=true editable=true
| expt-h36m/sample-dkf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={} tags=[]
# <img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# + [markdown] papermill={} tags=[]
# # Notion - Scheduled Updates from Gsheets
# <a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Notion/Notion_Scheduled_Updates_from_Gsheets.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
# + [markdown] papermill={} tags=[]
# **Tags:** #notion #gsheet #productivity #naas_drivers #operations #automation
# + [markdown] papermill={} tags=[]
# **Author:** [<NAME>](https://www.linkedin.com/in/pooja-srivastava-bb037649/)
# + [markdown] papermill={} tags=[]
# This notebook does the following tasks:
# - Schedule the notebook to run every 15 minutes
# - Connect with Gsheet and get all rows using the gsheet driver's get method
# - Connect with NotionDB and get all pages using the Notion Databases driver's get method
# - Compare the list of pages from notion db with the rows returned from gsheet and add non matching rows to a list
# - Create new pages in Notion db for each row in the non matching list using the Notion Pages driver's create method
# - For each created page, add the properties and values from gsheet and update in notion using the Notion Pages driver's update methods
# + [markdown] papermill={} tags=[]
# Pre-requisite:
# 1. Gsheets : Please share your Google Sheet with our service account
# For the driver to fetch the contents of your google sheet, you need to share it with the service account linked with Naas. 🔗 <EMAIL>
# 2. Notion : Create a notion integration and use the secret key to share the database. <a href = 'https://docs.naas.ai/Notion-7435020d01a549a9a0060c47ea808fd4'> Refer the documentation here</a>
# + [markdown] papermill={} tags=[]
# ## Input
# + [markdown] papermill={} tags=[]
# ### Import library
# + papermill={} tags=[]
import pandas as pd
import naas
from naas_drivers import notion, gsheet
# + [markdown] papermill={} tags=[]
# ### Scheduler
# + papermill={} tags=[]
#Schedule the notebook to run every 15 minutes
naas.scheduler.add(cron="*/15 * * * *")
# + [markdown] papermill={} tags=[]
# ### Setup Notion
# + papermill={} tags=[]
# Enter Notion Token API and Database URL
notion_token = "****"
notion_database_url = "https://www.notion.so/YOURDB"
#Unique column name for notion
col_unique_notion = 'Name'
# + [markdown] papermill={} tags=[]
# ### Setup Gsheet
# + papermill={} tags=[]
# Enter spreadsheet_id and sheet name
spreadsheet_id = "****"
sheet_name = "YOUR_SHEET"
#Unique column# for gsheets
col_unique_gsheet = 'Name'
# + [markdown] papermill={} tags=[]
# ## Model
# + [markdown] papermill={} tags=[]
# ### Get Gsheet and Notion data
# + papermill={} tags=[]
#Connect with Gsheet and get all data in a dataframe
gsheet.connect(spreadsheet_id)
df_gsheet = gsheet.get(sheet_name=sheet_name)
# + papermill={} tags=[]
#Connect with Notion db and get all pages in a dataframe
database = notion.connect(notion_token).database.get(notion_database_url)
df_notion = database.df()
# + [markdown] papermill={} tags=[]
# ### Compare Gsheet and Notion data
# + papermill={} tags=[]
#Iterate through all rows in Gsheet and find match in Notion db
#If no match is found then add data to df_difference dataframe
df_difference = pd.DataFrame()
for index,row in df_gsheet.iterrows():
x = row[col_unique_gsheet]
if not (x == df_notion[col_unique_notion]).any():
df_difference = df_difference.append(df_gsheet.loc[index])
# + [markdown] papermill={} tags=[]
# ## Output
# + [markdown] papermill={} tags=[]
# ### Create a new page in Notion if it does not match with Gsheet
# + papermill={} tags=[]
#Create a new page in notion db for each row in df_difference dataframe
if(df_difference.empty == False):
for index, row in df_difference.iterrows():
page = notion.connect(notion_token).page.create(database_id=notion_database_url, title=row[col_unique_gsheet])
#Add all properties here and map with respective column in row
page.select('Type', row['Type'])
page.select('Status', row['Status'])
page.rich_text('Summary', row['Summary'])
page.update()
print("The gsheets rows synced successfuly to Notion DB")
else:
print("No new rows in Gsheet to sync to Notion DB")
| Notion/Notion_Scheduled_Updates_from_Gsheets.ipynb |